What is RAMCloud?

RAMCloud is a new class of storage for large-scale datacenter applications. It is a key-value store that keeps all data in DRAM at all times (it is not a cache like memcached). Furthermore, it takes advantage of high-speed networking such as Infiniband or 10Gb Ethernet to provide very high performance. Applications running in the same datacenter as a RAMCloud cluster can access small objects in about 5μs, which is 1000x faster than disk-based storage systems. Small writes take about 15μs. At the same time, RAMCloud storage is durable: data is automatically replicated on nonvolatile secondary storage such as disk or flash, so it is not lost when servers crash. One of RAMCloud's unique features is that it recovers very quickly from server crashes (in only 1-2 seconds), so the availability gaps after crashes are almost unnoticeable. Finally, RAMCloud is designed to scale: it can support clusters containing thousands of storage servers, with total capacities up to a few petabytes.
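The storage model described above can be sketched in a few lines. This is an illustrative toy in Python, not RAMCloud's actual code or client API: every write is appended to the logs of several backup servers before it is acknowledged, and every read is served directly from DRAM.

```python
# Toy sketch (not RAMCloud's implementation) of the storage model above:
# all objects live in DRAM, and every write is replicated to several
# backup logs before it is acknowledged, so data survives a server crash.

class Backup:
    """Stands in for a backup server's nonvolatile log (disk or flash)."""
    def __init__(self):
        self.log = []                     # append-only replica of writes

    def append(self, entry):
        self.log.append(entry)

class Master:
    """In-memory key-value store that replicates writes before acking."""
    def __init__(self, backups):
        self.dram = {}                    # all data lives here
        self.backups = backups

    def write(self, key, value):
        for b in self.backups:            # replicate to nonvolatile storage
            b.append((key, value))
        self.dram[key] = value            # then apply in DRAM and ack

    def read(self, key):
        return self.dram[key]             # reads never touch disk

backups = [Backup() for _ in range(3)]    # 3-way replication for durability
master = Master(backups)
master.write("user:42", "alice")
print(master.read("user:42"))             # -> alice
```

In the real system the backups buffer writes in memory and flush them to disk in batches, which is how writes stay at ~15μs despite being durable.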

From a practical standpoint, RAMCloud enables a new class of applications that manipulate large data sets very intensively. Using RAMCloud, an application can combine tens of thousands of items of data in real time to provide instantaneous responses to user requests.  Unlike traditional databases, RAMCloud scales to support very large applications, while still providing a high level of consistency. We believe that RAMCloud, or something like it, will become the primary storage system for structured data in cloud computing environments such as Amazon's AWS or Microsoft's Azure. We have built the system not as a research prototype, but as a production-quality software system, suitable for use by real applications.

RAMCloud is also interesting from a research standpoint. Its two most important attributes are latency and scale. The first goal is to provide the lowest possible end-to-end latency for applications accessing the system from within the same datacenter. We currently achieve latencies of around 5μs for reads and 15μs for writes, but hope to improve these in the future. In addition, the system must scale, since no single machine can store enough DRAM to meet the needs of large-scale applications. We have designed RAMCloud to support at least 10,000 storage servers; the system must automatically manage all the information across the servers, so that clients do not need to deal with any distributed systems issues. The combination of latency and scale creates a large number of interesting research issues. To date we have addressed several of these, such as how to ensure data durability without sacrificing the latency of reads and writes, how to take advantage of the scale of the system to recover very quickly after crashes, and how to manage storage in DRAM. Many more issues remain, such as whether we can provide higher-level features such as secondary indexes and multiple-object transactions without sacrificing the latency or scalability of the system. We are currently exploring several of these issues.

The RAMCloud project is based in the Department of Computer Science at Stanford University, but the system is being used at numerous sites around the world.

Learning about RAMCloud

The links below provide general information about RAMCloud, such as talks and papers.

How to deploy and use RAMCloud

System is already usable

RAMCloud Performance

Information for RAMCloud developers

The RAMCloud test cluster at Stanford

Design notes

Project history and status

Related information

Miscellaneous topics

The old home page is below... it will soon be deleted

What is RAMCloud?

The RAMCloud project is creating a new class of storage, based entirely in DRAM, that is 2-3 orders of magnitude faster than existing storage systems. If successful, it will enable new applications that manipulate large-scale datasets much more intensively than has ever been possible before. In addition, we think RAMCloud, or something like it, will become the primary storage system for cloud computing environments such as Amazon's AWS and Microsoft's Azure.

The role of DRAM in storage systems has been increasing rapidly in recent years, driven by the needs of large-scale Web applications. These applications manipulate very large datasets with an intensity that cannot be satisfied by disks alone. As a result, applications are keeping more and more of their data in DRAM. For example, large-scale caching systems such as memcached are being widely used (in 2009 Facebook used a total of 150 TB of DRAM in memcached and other caches for a database containing 200 TB of disk storage), and the major Web search engines now keep their search indexes entirely in DRAM.

Although DRAM's role is increasing, it still tends to be used in limited or specialized ways. In most cases DRAM is just a cache for some other storage system such as a database; in other cases (such as search indexes) DRAM is managed in an application-specific fashion. It is difficult for developers to use DRAM effectively in their applications; for example, the application must manage consistency between caches and the backing storage. In addition, cache misses and backing store overheads make it difficult to capture DRAM's full performance potential.
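The look-aside caching pattern the paragraph describes can be sketched as follows (an illustrative Python toy, with a dict standing in for memcached and another for the backing database). The burden it illustrates is exactly the one RAMCloud removes: the application itself must keep the cache consistent with the backing store on every write.

```python
# Sketch of the look-aside caching pattern described above (as with
# memcached): the application, not the storage system, is responsible
# for keeping the cache consistent with the backing database.

cache = {}                                # stands in for memcached (DRAM)
database = {"user:42": "alice"}           # stands in for the disk-based store

def read(key):
    if key in cache:
        return cache[key]                 # cache hit: fast DRAM access
    value = database[key]                 # cache miss: slow disk access
    cache[key] = value
    return value

def write(key, value):
    database[key] = value
    # The application must remember to invalidate (or update) the cache
    # here; forgetting this step silently serves stale data.
    cache.pop(key, None)

read("user:42")                           # warms the cache
write("user:42", "bob")                   # updates the DB, invalidates cache
print(read("user:42"))                    # -> bob (re-fetched from database)
```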

Our goal for RAMCloud is to create a general-purpose storage system that makes it easy for developers to harness the full performance potential of large-scale DRAM storage. It keeps all data in DRAM all the time, so there are no cache misses. RAMCloud storage is durable and available, so developers need not manage a separate backing store. RAMCloud is designed to scale to thousands of servers and hundreds of terabytes of data while providing uniform low-latency access to all machines within a large datacenter.

As of Fall 2011, we have initial implementations of many of the components of RAMCloud, and the system runs well enough to use for simple tests. On our 60-node test cluster we can perform remote reads of 100-byte objects in about 5 microseconds, and an individual server can process more than 800,000 small read requests per second. The basic crash recovery mechanism is running, and RAMCloud can recover 35 GB of memory from a failed server in about 1.6 seconds.
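A back-of-envelope calculation shows why recovery at that speed must involve the whole cluster rather than a single replacement machine (the 100 MB/s per-disk figure is an assumed illustrative value, not a measurement):

```python
# Back-of-envelope arithmetic for the recovery numbers quoted above:
# restoring 35 GB in ~1.6 s requires aggregate bandwidth far beyond any
# single disk or NIC, which is why RAMCloud scatters each server's backup
# data across many machines and replays it in parallel.

data_gb = 35.0        # DRAM recovered from the failed server
recovery_s = 1.6      # measured recovery time

aggregate_gb_per_s = data_gb / recovery_s    # ~21.9 GB/s of total bandwidth

# Assumed per-device figure for illustration (not a measured value):
disk_mb_per_s = 100.0                        # one disk's sequential read rate
disks_needed = aggregate_gb_per_s * 1024 / disk_mb_per_s

print(f"{aggregate_gb_per_s:.1f} GB/s from {disks_needed:.0f} disks in parallel")
# -> 21.9 GB/s from 224 disks in parallel
```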

The RAMCloud project is still young, so there are many interesting research issues still to explore, such as the following:

  • Data model: RAMCloud currently supports a very simple data model (a key-value store); we would like to see whether we can provide higher-level features such as secondary indexes and multi-object transactions without sacrificing the scalability or performance of the system.
  • Consistency: we believe that RAMCloud can provide strong consistency (linearizability) without sacrificing performance, but there are several interesting problems to solve in order to achieve that.
  • Cluster management: what are the right mechanisms and policies for reorganizing RAMCloud data in response to changes in the amount of data and the access patterns?
  • Network protocols: we don't think TCP is the right protocol for achieving the highest performance within a datacenter, so there is an interesting research project in determining what the ideal protocol looks like.
  • Multi-tenancy: how do we support multiple independent, possibly hostile, applications sharing the same RAMCloud storage system within a large datacenter? This introduces issues of access control and potentially of performance isolation.
  • Multiple datacenters: our current design for RAMCloud focuses on a single datacenter, but some applications will require redundancy across datacenters in order to protect against datacenter failures. An interesting question is whether we can provide that level of redundancy without dramatically impacting the performance of the system.

Introduction to RAMCloud

Resources

Development

RAMCloud Cluster

Informational

Current work

Old Topics

Miscellaneous Topics

Personal Wikis
