...

  • + Simple
  • + Natural replication
    • Partition the address space (needed for distribution anyway); multiple servers cover each partition
  • - Latency
    • Address-to-shard mapping takes O(log #shards) time in general if shards are not fixed width (see the first sketch after this list)
    • Can be mitigated with an index-space tradeoff using a radix tree or trie (second sketch below)
    • How many levels is too deep?
    • Even 2-3 in the face of cache misses? (what does a cache miss cost?)
      • Definitely: a miss to main memory is on the order of 100 ns, which at 3.0 GHz is ~300 cycles, so 2-3 dependent misses dominate the lookup
  • + Load sharing
  • - More difficult to co-locate related data on a single machine
    • Probably the case that we actually want to intentionally distribute related data (more network overhead, but lower latency because lookups happen in parallel on independent machines)

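A minimal sketch of the variable-width mapping and replication discussed above, assuming shards are described by their sorted start addresses; server names, shard sizes, and the replica layout are all hypothetical:

```python
import bisect

# Sorted start addresses of variable-width shards; shard i covers
# [starts[i], starts[i+1]). Sizes here are illustrative only.
starts = [0, 1 << 20, 5 << 20, 6 << 20]

# Natural replication: several (hypothetical) servers cover each partition.
replicas = [["a0", "a1"], ["b0", "b1"], ["c0", "c1"], ["d0", "d1"]]

def shard_for(address: int) -> int:
    """Map an address to its shard index in O(log #shards) via binary search."""
    return bisect.bisect_right(starts, address) - 1

def servers_for(address: int) -> list[str]:
    """Return the replica set covering the shard that owns `address`."""
    return replicas[shard_for(address)]

assert shard_for((5 << 20) - 1) == 1
assert servers_for(5 << 20) == ["c0", "c1"]
```
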
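And the index-space tradeoff from the latency bullet, here as a one-level radix table (the degenerate case of a radix tree); ADDR_BITS and RADIX_BITS are assumptions for the sketch, not measured values:

```python
# Radix-table alternative: trade index space for O(1) lookup by
# precomputing the owning shard of every fixed-width bucket of the
# top address bits. This is exact only when shard boundaries align
# to the bucket width; otherwise each bucket needs a small residual
# search (or more levels, which is where the miss-cost question bites).
ADDR_BITS = 24                   # hypothetical address width
RADIX_BITS = 12                  # 4096-entry table
SHIFT = ADDR_BITS - RADIX_BITS

radix = [shard_for(b << SHIFT) for b in range(1 << RADIX_BITS)]

def shard_for_fast(address: int) -> int:
    """One dependent memory access instead of log(#shards) probes."""
    return radix[address >> SHIFT]

assert all(shard_for_fast(a) == shard_for(a)
           for a in (0, (5 << 20) - 1, 5 << 20, (1 << ADDR_BITS) - 1))
```

This collapses the lookup to a single dependent memory access, which is the point of the tradeoff: every extra level of a deeper structure risks another ~300-cycle miss.
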
...