This page records the steps we have taken over time to improve RAMCloud performance, along with measurements of the resulting gains. Add new entries at the beginning of the page, so that entries appear in reverse chronological order.

Enqueue replication Rpcs from the service thread to dispatch thread instead of taking the Dispatch lock (December 2014, Henry Qin)

When servicing a Write Rpc, the service thread used to take the dispatch lock to block the dispatch thread from executing, and then proceed to use the transport to send the replication Rpc.

We have introduced the DispatchExec mechanism, a new Poller for the dispatch thread. The service thread now hands work to DispatchExec instead of taking the dispatch lock.

This optimization reduces the median write latency from 14.4 us to 13.4 us.
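The sketch below shows the general shape of this handoff; it is not the actual RAMCloud code, and the class layout and names are illustrative. Service threads publish closures (e.g. "send this replication Rpc"), and the dispatch thread's polling loop drains and runs them, so neither side ever needs the dispatch lock.

    #include <functional>
    #include <mutex>
    #include <vector>

    // Hedged sketch of a DispatchExec-style Poller owned by the dispatch thread.
    class DispatchExec {
      public:
        // Service-thread side: hand a closure to the dispatch thread.
        void addRequest(std::function<void()> work) {
            std::lock_guard<std::mutex> guard(mutex);
            pending.push_back(std::move(work));
        }

        // Dispatch-thread side: invoked from the dispatch polling loop.
        // Returns the number of requests executed on this pass.
        int poll() {
            std::vector<std::function<void()>> batch;
            {
                std::lock_guard<std::mutex> guard(mutex);
                batch.swap(pending);
            }
            for (auto& work : batch)
                work();                 // e.g. use the transport to send an Rpc
            return static_cast<int>(batch.size());
        }

      private:
        std::mutex mutex;                            // protects only `pending`
        std::vector<std::function<void()>> pending;  // work published by service threads
    };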

Fetch multiple completions at once in InfRcTransport (August 2014, John Ousterhout)

InfRcTransport used to retrieve completions (from both serverRxCq and clientRxCq) one at a time. This optimization changed the code to retrieve many at once when several are available. This improved "clusterperf readThroughput" from 875 kreads/sec to 948 kreads/sec.
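The verbs API supports this directly: ibv_poll_cq takes an array and a count, so one call can drain several completions. A hedged sketch of the pattern (MAX_COMPLETIONS and handleCompletion are illustrative names, not the transport's actual members):

    #include <infiniband/verbs.h>

    static const int MAX_COMPLETIONS = 16;

    // Stand-in for whatever per-completion work the transport does
    // (matching wr_id to an outstanding Rpc, etc.).
    void handleCompletion(struct ibv_wc* wc);

    void pollCompletions(struct ibv_cq* cq)
    {
        struct ibv_wc wc[MAX_COMPLETIONS];
        // One call can return up to MAX_COMPLETIONS completions at once.
        int count = ibv_poll_cq(cq, MAX_COMPLETIONS, wc);
        if (count < 0) {
            // Real code would log and handle the verbs error here.
            return;
        }
        for (int i = 0; i < count; i++)
            handleCompletion(&wc[i]);
    }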

Reclaim multiple transmit buffers at once in InfRcTransport (August 2014, John Ousterhout)

InfRcTransport used to call reapTxBuffers automatically at the end of the poll method. As a result, it typically reclaimed only one buffer at a time. This optimization changed the code so that it calls reapTxBuffers only when transmit buffers are running low, so it can generally reclaim several at a time. This improved "clusterperf readThroughput" from 812 kreads/sec to 875 kreads/sec (in this benchmark, the dispatch thread is the bottleneck).
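A rough sketch of the idea (BufferDescriptor, freeTxBuffers, and the thresholds below are illustrative, not the actual InfRcTransport members): reap lazily, only when the pool of free transmit buffers runs low, so that a single completion-queue poll reclaims a batch of buffers.

    #include <infiniband/verbs.h>
    #include <vector>

    struct BufferDescriptor { /* registered transmit buffer, etc. */ };

    class TxBufferPool {
      public:
        BufferDescriptor* getTransmitBuffer() {
            // Only touch the completion queue when we are nearly out of
            // buffers; real code also has to cope with waiting here.
            while (freeTxBuffers.size() < MIN_FREE_BUFFERS)
                reapTxBuffers();
            BufferDescriptor* bd = freeTxBuffers.back();
            freeTxBuffers.pop_back();
            return bd;
        }

      private:
        void reapTxBuffers() {
            ibv_wc wc[MAX_REAP];
            // By the time we get here several sends have usually completed,
            // so one poll reclaims several buffers.
            int n = ibv_poll_cq(txCq, MAX_REAP, wc);
            for (int i = 0; i < n; i++) {
                // wr_id held the descriptor's address when the send was posted.
                freeTxBuffers.push_back(
                    reinterpret_cast<BufferDescriptor*>(wc[i].wr_id));
            }
        }

        static const size_t MIN_FREE_BUFFERS = 3;
        static const int MAX_REAP = 8;
        ibv_cq* txCq;
        std::vector<BufferDescriptor*> freeTxBuffers;
    };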

Optimizing class Object to treat contiguous Buffers as normal byte buffers (July 2014, Henry Qin)

Before this optimization, we used (relatively) slow Buffer methods like getRange() to access data inside an Object even if the memory was contiguous.

This optimization causes Object to treat contiguous Buffers as plain void* data, avoiding the overhead of the Buffer machinery.

We measured a 25 ns latency improvement in the read Rpc.
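A minimal sketch of the fast path (ContiguousView is a made-up name, not a RAMCloud class): when a value's bytes live in a single chunk, keep a raw pointer and length and serve accesses with plain pointer arithmetic, instead of going through a general chunk-walking method like getRange() on every access.

    #include <cstdint>

    class ContiguousView {
      public:
        ContiguousView(const void* data, uint32_t length)
            : data(static_cast<const uint8_t*>(data)), length(length) {}

        // Cheap accessor: a bounds check and pointer arithmetic, no chunk walk.
        const void* get(uint32_t offset, uint32_t len) const {
            if (offset + len > length)
                return nullptr;
            return data + offset;
        }

      private:
        const uint8_t* data;   // start of the contiguous value
        uint32_t length;       // total length in bytes
    };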

Merging the check for tablet existence with incrementing the read count on the tablet (July 2014, Henry Qin)

...