
Infolunch Presentation

The main takeaway from the talk seemed to be that RAMCloud should be split into two separate layers:

  1. The storage layer, offering perhaps a simple blob key-value model (or block access?)
  2. The library layer, which can be used to implement many different data models, as well as other functionality such as trust and indexing
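
The two-layer split above can be sketched in code. This is a hypothetical illustration, not an actual RAMCloud interface: the storage layer stores opaque blobs under opaque keys, and a library layer (here, a toy secondary index) builds richer functionality on top without the storage layer knowing about it.

```python
# Hypothetical sketch of the proposed two-layer split. All class and
# method names are illustrative assumptions, not RAMCloud's real API.

class StorageLayer:
    """Storage layer: opaque keys mapped to opaque blob values."""
    def __init__(self):
        self._blobs = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._blobs[key] = value

    def get(self, key: bytes):
        return self._blobs.get(key)


class IndexingLibrary:
    """Library layer: adds a secondary index on top of the blob store.
    Other libraries could add different data models, trust, etc."""
    def __init__(self, storage: StorageLayer):
        self._storage = storage
        self._index = {}  # tag -> set of keys carrying that tag

    def put(self, key: bytes, value: bytes, tag: str) -> None:
        self._storage.put(key, value)
        self._index.setdefault(tag, set()).add(key)

    def lookup(self, tag: str):
        """Return all values whose keys were stored under this tag."""
        return [self._storage.get(k) for k in self._index.get(tag, set())]


store = StorageLayer()
lib = IndexingLibrary(store)
lib.put(b"user:1", b"alice", tag="admins")
lib.put(b"user:2", b"bob", tag="admins")
print(sorted(lib.lookup("admins")))  # -> [b'alice', b'bob']
```

The point of the split is that `StorageLayer` never learns about tags or indexes; an RDBMS could equally sit on top of it, using it in place of a disk.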

It was even suggested that RAMCloud could expose only the storage layer to a traditional RDBMS application, which would use it in place of a disk.

A number of interesting points were mentioned:

  • It was pointed out that the type of logging we propose is already being done on a smaller scale by in-memory databases.
  • Also, how we implement durability depends on how we handle single-object vs. multi-object transactions (a la Google Megastore).
  • We could use RAID at the level of racks, where a rack would play the role of one disk in a RAID system.
  • Facebook computes joins from data in their database and then caches the result in memory. They are looking at using Flash instead of disk.
  • In a large RDBMS system like OracleDB, more and more memory is being set aside for communication buffers, which leaves less space for storing the actual data. Low-latency transactions would drastically reduce the amount of space required for these buffers.
  • A question was raised about the frequency of access to some of the data in the RAMCloud. If it is only being accessed once in a while, why should it live in memory?
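
The rack-level RAID idea in the list above can be made concrete with a small sketch. The layout below is an assumption for illustration: three "data racks" hold stripes and a fourth holds XOR parity (as in RAID-4), so the data of any single failed rack can be reconstructed from the survivors.

```python
# Illustrative sketch of "RAID at the level of racks": each rack plays
# the role of one disk, and one rack stores XOR parity (RAID-4 style).
# The rack count and block contents are assumptions for the example.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Data striped across three data racks; a fourth rack stores parity.
data_racks = [b"rack0data", b"rack1data", b"rack2data"]
parity_rack = xor_blocks(data_racks)

# If rack 1 fails, rebuild its contents from the surviving racks + parity.
recovered = xor_blocks([data_racks[0], data_racks[2], parity_rack])
assert recovered == data_racks[1]
```

The same XOR property that lets a RAID controller rebuild a disk would let the cluster rebuild a lost rack, at the cost of reading the surviving racks.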

Comments from Paul Heymann

E-mail follow-up from Paul Heymann (heymann@stanford.edu):

I liked your talk in InfoLunch, but had a few uninformed comments that might be useful:

(1) Is "RAMCloud" what Google and other companies that are really on top of engineering already do? For example, Jeff Dean just gave an excellent talk at WSDM2009 which detailed how their architecture works: http://videolectures.net/wsdm09_dean_cblirs/ with details like the fact that they moved their entire index into main memory a few years ago (requiring a search to hit thousands of machines, but shrinking latency massively). Similarly, I recall hearing a few years ago that SLAC was working on machines with a terabyte of main memory (the best link I can find for that is http://www.violin-memory.com/listing_detail.php?listing=62&id=Press_Releases).

(2) Have you looked at H-Store? There are more details here: http://db.cs.yale.edu/hstore/ . While there is almost certainly older work in main memory databases that explored the space, the H-Store project is/was looking at the results of almost all of the assumptions you were proposing within the last several years. A quote: "For
example, all but the very largest OLTP applications can fit in main memory of a modern shared-nothing cluster of server machines." They also drop certain locking assumptions and other things, and generally are rebuilding everything based on modern assumptions. (H-Store also seems like a big project---MIT, Yale, Brown...) The only major difference seems to be that they are defaulting to the relational model whereas you haven't chosen yours yet.

(3) I was unclear on what you thought the difference was between data that would go in the RAMCloud and the data that wouldn't. For example, you claimed that Facebook could be completely in the RAMCloud, but said that certain data like photos and video wouldn't go in.

(4) Related to (3), I was unclear on how the RAMCloud would differ from a massive cache in what seemed to be your primary use case---web applications. In other words, if you had enough RAM to put all of Facebook's MySQL data into the RAMCloud, would it be any different performance wise than using all of the RAM in a cache in front of the MySQL databases? (i.e., what's the difference if there are no cache misses?)

(5) It seemed pretty unclear to me how RAMCloud relates to Solid State Disks and other forms of flash and flash-like memories. I'm far from an expert in that area, but I would have liked to have seen an argument like: "Although SSDs will eliminate seeks, be accessible through PCI, and have storage capacity on par with hard disks in 5-10 years, we think DRAM will be better for large data sets because _________."

(6) I didn't buy your statements about the size of a $4M amount of RAM increasing to a size that was reasonable for data storage within 5-10 years. Or, at the very least, I'm not completely convinced. Every several years, I get a new laptop which has maybe 4 times the capacity of my previous laptop, yet my laptop hard disk always remains full.
Similarly, companies seem to increase how much data they are gathering on their customers to the maximum extent that they can (e.g., Amazon monitoring every aspect of customer behavior on their website). If you're not convinced that RAMCloud would be useful now, I would need much more convincing that the ratio of "data produced by a company" to "data that can fit in DRAM economically" will be any different in 5-10 years (e.g., is there a fundamental reason why companies' ability to gather data about people, their logistics, and so on will hit a wall within the next 5-10 years?).
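
The capacity question in (6) can be framed as a back-of-the-envelope calculation. The figures below are illustrative assumptions (a DRAM cost of $10/GB today, cost per GB halving every ~2 years), not numbers from the talk; the calculation only shows how the budget's capacity scales, and Paul's point is that data gathered may scale just as fast.

```python
# Back-of-the-envelope sketch for the "$4M of DRAM in 5-10 years"
# question. ALL figures are illustrative assumptions: $10/GB today
# and a halving of cost per GB every ~2 years.

BUDGET_DOLLARS = 4_000_000
COST_PER_GB_TODAY = 10.0      # assumed current DRAM price
HALVING_PERIOD_YEARS = 2.0    # assumed cost-per-GB halving time

def dram_capacity_tb(years_from_now: float) -> float:
    """TB of DRAM the budget buys, under the assumed cost curve."""
    cost_per_gb = COST_PER_GB_TODAY / (2 ** (years_from_now / HALVING_PERIOD_YEARS))
    return BUDGET_DOLLARS / cost_per_gb / 1024

for years in (0, 5, 10):
    print(f"{years:2d} years out: {dram_capacity_tb(years):10.0f} TB")
```

Under these assumptions the budget buys 32x more DRAM in 10 years, so the open question is whether "data produced by a company" grows slower than that.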

(7) It seemed like having a (clearer) motivating application would resolve many of the uncertainties you had when you were presenting. For example, one thing that (I think) makes the relational data model somewhat rigid (one of your complaints) is that it is difficult to change the schema. This isn't really a big issue if you're a bank, or probably even if you've got a mature web product, but if you're a web company doing rapid prototyping, you might need something where experimentation is easier. For example, FriendFeed uses this (somewhat bizarre) half relational, half sort-of-semi-structured data model to make rapid prototyping easier: http://bret.appspot.com/entry/how-friendfeed-uses-mysql . Is RAMCloud for startups? For Facebook? For Google? For IBM? For Bank of America?

Anyway, I would be interested to see a second presentation to the InfoLunch, given how far we got in the first one...

Cheers,
Paul
