...

  • + Simple
  • + Natural replication
    • Shard address space (needed for distribution anyway), multiple servers cover each shard
  • - Latency
    • Address-to-shard mapping takes O(log(# shards)) time in general if shards are not fixed-width in terms of # of addresses (see the binary-search sketch after this list)
    • Can be mitigated by trading index space for lookup time with a radix tree or trie (see the radix-table sketch after this list)
    • How many levels is too deep?
    • Even 2-3, in the face of cache misses? (what does a cache miss cost?)
      • Definitely: at 3.0 GHz we get 3000 cycles per µs
      • A cache miss seems to cost about 60 cycles ≈ 20 ns at 3.0 GHz?
  • + Load sharing
  • - More difficult to co-locate related data on a single machine
    • Probably the case that we want to intentionally distribute related data (more network overhead, but reduces latency because lookups happen in parallel on independent machines)
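
A minimal sketch of the O(log(# shards)) lookup from the latency bullet, written in Go (the notes contain no code, so the language choice is arbitrary). The shard start addresses are hypothetical; a real shard map would come from cluster metadata.

```go
package main

import (
	"fmt"
	"sort"
)

// shardFor maps an address to the index of the shard that owns it by
// binary-searching a sorted list of shard start addresses: O(log S)
// for S shards, matching the log(# shards) cost noted above.
func shardFor(addr uint64, starts []uint64) int {
	// sort.Search returns the smallest i with starts[i] > addr;
	// the owning shard is the one immediately before it.
	i := sort.Search(len(starts), func(i int) bool { return starts[i] > addr })
	return i - 1
}

func main() {
	// Hypothetical variable-width shards, identified by start address.
	starts := []uint64{0, 1 << 20, 1 << 24, 1 << 30}
	for _, addr := range []uint64{42, 5 << 20, 1 << 28, 1 << 31} {
		fmt.Printf("addr %#x -> shard %d\n", addr, shardFor(addr, starts))
	}
}
```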
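And a sketch of the radix-table mitigation: one array index per level instead of a binary search, paying with a table of 2^stride entries per level. Each level is one dependent memory load, which is where the cache-miss question above comes in. The 8-bit stride, 32-bit address width, and boundaries are all assumptions; this single-level version also requires shard boundaries aligned to the stride, otherwise more levels or a fallback search are needed.

```go
package main

import "fmt"

// One level of an 8-bit-stride radix table over a 32-bit address space
// (both widths are assumptions for the sketch).
const radixBits = 8

// buildRadix precomputes table[prefix] = shard owning every address whose
// top radixBits equal prefix. This only works when shard boundaries are
// aligned to the radix stride; unaligned boundaries need deeper levels or
// a fallback search at the leaves.
func buildRadix(starts []uint32) [1 << radixBits]int {
	var table [1 << radixBits]int
	shard := 0
	for p := 0; p < 1<<radixBits; p++ {
		base := uint32(p) << (32 - radixBits)
		for shard+1 < len(starts) && starts[shard+1] <= base {
			shard++
		}
		table[p] = shard
	}
	return table
}

func main() {
	// Stride-aligned shard start addresses (hypothetical).
	starts := []uint32{0, 1 << 24, 1 << 28, 3 << 28}
	table := buildRadix(starts)
	addr := uint32(1 << 28)
	// Lookup is a shift plus one array read: O(1) per level,
	// independent of the number of shards.
	fmt.Printf("addr %#x -> shard %d\n", addr, table[addr>>(32-radixBits)])
}
```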

...