Contents)

  • Cluster Outline
  • User's Guide
    1. User setup
    2. Compile RAMCloud on the host 'mmatom'
    3. Run clusterperf.py from 'mmatom'
  • Performance result
    • clusterperf.py
    • recovery.py (TBD)
  • System administrator's guide
    1. System configuration
      1. System photograph
      2. Server module specification
      3. Network connection
    2. System administration guide
      1. Security solution

Cluster Outline)

The procedure is basically the same as for rccluster, except that:

  1. NOTE) Please wait a while before using the ATOM cluster.
    mmres has been ported from rcres. We are now adding the resource management features needed for the Intel DPDK kernel-bypass driver.
  2. Use the local git repository located on 'mmatom'.
    1. The source code and scripts are pre-tested for the ATOM-based 'RAMCloud in a Box'.
  3. The cluster has 132 ATOM servers (total: 1,056 cores, 4.1 TB DRAM, 16.5 TB SSD) connected to the dedicated management server 'mmatom.scs.stanford.edu', which is directly connected to the Internet. See System configuration below for details.
  4. Broken or unstable servers are disconnected with a management tool, so the IP addresses and hostnames of the remaining servers stay contiguous, and each IP address always corresponds to its hostname. The physical slot and chassis number of each server can be looked up in a management file on the management server.

User's Guide)

User setup)

  1. ssh to the management server 'mmatom.scs.stanford.edu' with public-key authentication.
  2. Install your key pair for cluster login in ~/.ssh . For existing RAMCloud users, the keys have already been copied from your home directory on rcnfs.
    • Add the cluster-login public key to ~/.ssh/authorized_keys .
      Note)
      - Your home directory is shared with all the ATOM servers over NFS, so you can log in to every atom server with public-key authentication.
      - Do not reuse the private key you use for mmatom login. Create a different key pair for ssh from mmatom into the ATOM cluster (see the sketch after this list).

  3. Initialize known_hosts:
    1. You can use /usr/local/scripts/knownhost_init.sh
      1. Usage)  /usr/local/scripts/knownhost_init.sh <ServerNumberFrom> <ServerNumberTo>
        If a host is already initialized, the result of 'hostname' on the remote machine is displayed; otherwise you are asked whether to add the host to the known_hosts database, where you should type 'yes'.
      2. Example)
        $ knownhost_init.sh 1 20
            atom001
            atom002
                : 
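
As a companion to step 2 above, here is a minimal sketch of creating a separate key pair for logins into the ATOM cluster. The file name 'id_rsa_atom' and the Host pattern are illustrative assumptions, not a convention required on mmatom; because your home directory is NFS-shared, appending the public key to ~/.ssh/authorized_keys on mmatom makes it valid on every atom server.

      $ ssh-keygen -t rsa -f ~/.ssh/id_rsa_atom        # new key pair used only inside the cluster
      $ cat ~/.ssh/id_rsa_atom.pub >> ~/.ssh/authorized_keys
      $ chmod 600 ~/.ssh/authorized_keys
      # Optionally add an entry to ~/.ssh/config so ssh picks this key automatically:
      #   Host atom*
      #       IdentityFile ~/.ssh/id_rsa_atom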

Compile RAMCloud on the host 'mmatom')

  1. Git clone from the local RAMCloud repository, which is pre-tested for the ATOM cluster. The directory structure is the same as Stanford RAMCloud.
    • $ git clone /var/git/ramcloud-dpdk.git
    • $ cd ramcloud-dpdk
    • $ git submodule update --init --recursive
  2. Compile
    1. Modify your PATH so that '/usr/bin/gcc (4.4.7)' is used instead of '/usr/local/bin/gcc (4.9.1)' to compile RAMCloud (see the check after this list).
      • $ export PATH=/usr/bin:$PATH
    2. Compile RAMCloud: ARCH=atom is now the default in the GNUmakefile
      • $ make DEBUG=no
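
To confirm that the PATH change in step 2.1 took effect before running make, a quick sanity check (standard commands, not scripts specific to mmatom):

      $ which gcc         # should print /usr/bin/gcc
      $ gcc --version     # should report 4.4.7, not 4.9.1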

Run clusterperf.py from 'mmatom')  - assumes you have the RAMCloud source compiled.

  1. Settings are defined in localconfig.py and config.py, and are imported into clusterperf.py through cluster.py.
  2. Basic settings for running RAMCloud applications on the ATOM servers are provided in config.py . So far we have tested the following, and we are going to test more commands:
    • clusterperf.py (the default parameters are equivalent to rccluster: replica=3, server=4, backup=1)
    • TBD: clusterperf.py (running all of the standard performance tests)
    • TBD: recovery.py
  3. Reserve (lease) ATOM servers with /usr/local/bin/mmres
    1. mmres has been ported from rcres. We are now adding the resource management needed for DPDK (please wait a while before using the cluster).
    2. Check usage with: $ mmres --help
  4. Edit config.py to list the servers you reserved.
  5. Run clusterperf.py (a combined sketch of steps 3-5 follows this list)
      $ scripts/clusterperf.py
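
Putting steps 3-5 together, a typical session might look like the sketch below. The reservation arguments and host names are placeholders (check 'mmres --help' for the actual syntax), the path scripts/config.py is assumed from the repository layout, and passing a single test name such as 'basic' to clusterperf.py is an assumption based on stock RAMCloud behavior.

      $ mmres --help                   # check the current reservation syntax first
      $ mmres <reservation arguments>  # reserve servers, e.g. atom001-atom004
      $ vi scripts/config.py           # list the reserved atomNNN hosts for cluster.py
      $ scripts/clusterperf.py         # default parameters, as in step 2
      $ scripts/clusterperf.py basic   # assumption: a test name restricts the run to that test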

Performance result)

clusterperf.py

Note) We are working on moving the noisy DPDK log output from standard out to a log file.

Measured on Wed 25 Feb 2015 12:31:42 PM PST:

basic.read100         13.456 us     read single 100B object (30B key) median
basic.read100.min     12.423 us     read single 100B object (30B key) minimum
basic.read100.9       13.837 us     read single 100B object (30B key) 90%
basic.read100.99      17.698 us     read single 100B object (30B key) 99%
basic.read100.999     24.356 us     read single 100B object (30B key) 99.9%
basic.readBw100        4.7 MB/s   bandwidth reading 100B object (30B key)
basic.read1K          20.425 us     read single 1KB object (30B key) median
basic.read1K.min      19.593 us     read single 1KB object (30B key) minimum
basic.read1K.9        20.786 us     read single 1KB object (30B key) 90%
basic.read1K.99       24.306 us     read single 1KB object (30B key) 99%
basic.read1K.999      36.569 us     read single 1KB object (30B key) 99.9%
basic.readBw1K        33.4 MB/s   bandwidth reading 1KB object (30B key)
basic.read10K         52.461 us     read single 10KB object (30B key) median
basic.read10K.min     51.519 us     read single 10KB object (30B key) minimum
basic.read10K.9       53.294 us     read single 10KB object (30B key) 90%
basic.read10K.99      55.881 us     read single 10KB object (30B key) 99%
basic.read10K.999     86.142 us     read single 10KB object (30B key) 99.9%
basic.readBw10K      125.6 MB/s   bandwidth reading 10KB object (30B key)
basic.read100K       358.567 us     read single 100KB object (30B key) median
basic.read100K.min   356.611 us     read single 100KB object (30B key) minimum
basic.read100K.9     359.489 us     read single 100KB object (30B key) 90%
basic.read100K.99    381.799 us     read single 100KB object (30B key) 99%
basic.read100K.999    38.298 ms     read single 100KB object (30B key) 99.9%
basic.readBw100K     187.6 MB/s   bandwidth reading 100KB object (30B key)

recovery.py (TBD)

System administrator's guide)

System configuration)
1. System photograph)

2. ATOM server module specification)

  1. Three chassis are installed (the rack in the picture above holds 16 chassis).
  2. The installed ATOM modules use the Intel Atom C2730 (1.7 GHz, 8 cores/8 threads, 12 W).

3. Network connection)

  1. For historical reasons, and to allow for future experiments, the VLAN configuration differs from chassis to chassis.
  2. The management server is directly connected to the Internet; the cluster itself is isolated from other Stanford servers.
    1. The management server acts as the firewall, login server, NIS server, DHCP server, and PXE server for reconfiguring the ATOM servers.

Security solution)

 
