...
RAMCloud currently supports the following transports:
basic
Example: basic+udp:host=rc01,port=11000
Basic is RAMCloud's workhorse transport; it provides the best combination of speed and robustness, so we recommend that you use this transport by default. It uses an unreliable datagram mechanism to deliver packets, and provides reliable, flow-controlled, in-order delivery on top of that mechanism. The basic transport is distinct from the datagram mechanism, and it can work with any datagram mechanism that implements a simple API. As of 2016, RAMCloud contains drivers for the following datagram mechanisms:
udp: Sends UDP packets through the kernel. This driver is not very fast, since data must flow through the kernel, but it will run almost anywhere.
infud: Uses Infiniband RDMA to send datagrams. The RDMA mechanism allows direct communication between applications and the Infiniband NIC, which provides low latency and high throughput.
dpdk: Uses the Intel DPDK library to send and receive UDP packets. DPDK also uses kernel bypass, so this driver is quite fast.
solarflare: Uses the SolarFlare OpenOnload package to send and receive UDP packets. This driver requires SolarFlare NICs, but it uses kernel bypass, so it is also fast.
Service locators for the basic transport must specify both the basic transport and the particular driver, e.g. basic+udp, plus additional parameters as required by the driver. For details and examples, check the file scripts/cluster.py in the RAMCloud distribution. In addition, you may need to check the in-code documentation for particular drivers in order to see what parameters they support.
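The locator grammar described above (a transport name, an optional driver name joined with "+", a colon, then comma-separated key=value parameters) can be sketched with a small parser. This is an illustrative sketch only, not part of the RAMCloud API; the function name parse_locator is made up here.

```python
def parse_locator(locator):
    """Split a service locator such as "basic+udp:host=rc01,port=11000"
    into (transport, driver, params).  The driver part is optional:
    "tcp:..." has no driver, while "basic+udp:..." names the udp driver."""
    protocol, _, rest = locator.partition(":")
    transport, _, driver = protocol.partition("+")
    params = {}
    if rest:
        for pair in rest.split(","):
            key, _, value = pair.partition("=")
            params[key] = value
    return transport, driver or None, params

print(parse_locator("basic+udp:host=rc01,port=11000"))
# ('basic', 'udp', {'host': 'rc01', 'port': '11000'})
```

The same sketch handles driverless locators such as tcp:host=rc01,port=11000, where the driver component comes back as None.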
tcp
Example: tcp:host=rc01,port=11000
...
The infrc transport uses Infiniband reliable connected queue pairs. It is built on the Mellanox RDMA library package, which allows the Infiniband NIC to be accessed directly from user-space applications (kernel bypass). Because of this, infrc provides the lowest latency of all RAMCloud transports. In order to use the infrc transport, you will need to install Infiniband NICs and switches, such as those from Mellanox. The infrc transport uses UDP to exchange information during connection setup; the host and port parameters in the service locator identify the server's node and the port on which it is listening for UDP packets to open new connections. Once a connection is open, UDP is no longer used; all communication happens directly over Infiniband.
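To make the host/port semantics concrete: for infrc they name only the UDP endpoint used during connection setup, not the Infiniband queue pair used afterwards. The sketch below (illustrative only; the locator string format and helper name are assumptions, not RAMCloud code) extracts that setup endpoint from a locator:

```python
def setup_endpoint(locator):
    """Return the (host, port) pair naming the UDP address a client
    contacts to open a connection; after setup, traffic flows over
    Infiniband and this endpoint is no longer used."""
    _, _, rest = locator.partition(":")
    params = dict(pair.split("=", 1) for pair in rest.split(","))
    return params["host"], int(params["port"])

print(setup_endpoint("infrc:host=rc01,port=11000"))
# ('rc01', 11000)
```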
fast+udp
Example: fast+udp:host=rc01,port=11000
Fast is a transport that uses an unreliable datagram mechanism to deliver packets; the fast transport provides reliable, flow-controlled, in-order delivery on top of the underlying datagram mechanism. The fast transport is distinct from the datagram mechanism, and it can work with any datagram mechanism that implements a simple API. Fast+udp uses kernel UDP sockets as the datagram mechanism; the host and port parameters have the same meaning as for the tcp transport. As with tcp, fast+udp is relatively slow since all packets must pass through the kernel.
fast+infud
Example: fast+infud:
Fast+infud uses the fast transport along with Infiniband unreliable datagrams. Like infrc, it is built on the Mellanox RDMA library package with kernel bypass, so it is about as fast as infrc. Details on how to use this transport have not yet been written.

Note: the infrc transport was our workhorse transport for many years, but we now recommend the basic transport instead. Basic is about as fast as infrc, but it is more robust and stable.