...
- + O(1) lookup
- Just keep a table, shift the address to get the shard number, then look up the host by shard id (see the sketch after this list)
- - Size of a shard can never grow beyond capacity of the largest machine
- Might not always be able to store new entries even when there is excess capacity elsewhere in the system
- Could do something hackish: have a saturated host forward requests for entries it doesn't hold to another host covering a different part of the same shard
- - Nearly impossible to determine address range chunk size initially
- - Nightmare if we decide we need a new address range chunk size
- Requires "rehashing" the whole thing
- - Adding and removing hosts
- Some of these issues are addressable with linear hashing techniques.
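For concreteness, a minimal sketch of the shift-and-index lookup described above. The address width, shard width, table contents, and host names are all assumptions for illustration, not anything these notes fix:

```go
package main

import "fmt"

// Hypothetical layout: 32-bit addresses carved into fixed-width shards of
// 2^24 addresses each, so the top 8 bits are the shard number.
const (
	addrBits  = 32
	shardBits = 8
)

// shardToHost maps shard id -> host; in practice this would be loaded from
// configuration. Entries here are placeholders.
var shardToHost = [1 << shardBits]string{
	0: "host-a",
	1: "host-b",
	// ...
}

// hostFor is the O(1) lookup: shift the address to get the shard number,
// then index the table to get the host.
func hostFor(addr uint32) string {
	shard := addr >> (addrBits - shardBits)
	return shardToHost[shard]
}

func main() {
	fmt.Println(hostFor(0x01ABCDEF)) // top byte 0x01 -> "host-b"
}
```

Changing the chunk size means changing the shift and rebuilding the whole table, which is the "rehashing" problem noted above.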
DHT
- + Simple
- + Natural replication
- Partition the address space into shards (needed for distribution anyway); multiple servers cover each shard
- - Latency
- Address-to-shard mapping takes O(log(# shards)) time in general if shards are not fixed-width in terms of # of addresses (see the sketch after this list)
- Can be mitigated, trading index space for latency, using a radix tree or trie
- How many levels is too deep?
- Even 2-3 levels in the face of cache misses? (what is the cache-miss cost?)
- Definitely a concern: at 3.0 GHz we get ~3,000 cycles per µs, so a ~100 ns DRAM miss costs ~300 cycles per level
- + Load sharing
- - More difficult to co-locate related data on a single machine
- Probably the case that we want to intentionally distribute related data (more network overhead, but lower latency because lookups happen on independent machines in parallel)
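A minimal sketch of the variable-width case that drives the log(# shards) cost. Shard boundaries, host assignments, and the replication factor are illustrative assumptions; the mapping is just a binary search over sorted shard start addresses:

```go
package main

import (
	"fmt"
	"sort"
)

// Variable-width shards: shard i covers [starts[i], starts[i+1]) and the
// last shard runs to the end of the address space. Values are made up.
var starts = []uint32{0x00000000, 0x20000000, 0x30000000, 0xC0000000}

// hosts[i] lists the servers covering shard i (multiple servers per shard,
// per the natural-replication point above).
var hosts = [][]string{
	{"host-a", "host-b"},
	{"host-c", "host-d"},
	{"host-e", "host-f"},
	{"host-g", "host-h"},
}

// shardOf maps an address to its shard with a binary search: O(log #shards)
// dependent probes, each a potential cache miss.
func shardOf(addr uint32) int {
	// First boundary strictly greater than addr; the shard is the one before it.
	i := sort.Search(len(starts), func(i int) bool { return starts[i] > addr })
	return i - 1
}

func main() {
	s := shardOf(0x25000000)
	fmt.Println(s, hosts[s]) // 1 [host-c host-d]
}
```

A radix tree or trie over the high-order address bits replaces the search with a fixed number of pointer chases (one per level), which is the index-space-for-latency tradeoff above; the per-level cache-miss cost is what the cycle arithmetic is estimating.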
...