
Contents)

  • Cluster Outline
  • User's Guide
    1. User setup
    2. Running RAMCloud
    3. Development and enhancement
  • Performance result:
    • clusterperf.py
    • TBD) recovery.py
  • System administrator's guide
    1. System configuration
      1. System photograph
      2. Server module specification
      3. Network connection
    2. System administration guide
      1. Security solution

Cluster Outline)

Basically the procedure is similar to rccluster, except that:

  1. Use the local git repository located on 'mmatom'.
    1. The source code and scripts are pre-tested for the ATOM-based 'RAMCloud in a Box'.
  2. The cluster has 132 ATOM servers connected to the dedicated management server 'mmatom.scs.stanford.edu', which is directly connected to the Internet. See System configuration below for details.

User's Guide)

User setup)

  1. ssh login to the management server 'mmatom.scs.stanford.edu' with public key authentication.
  2. Install your key pair for cluster login into ~/.ssh (a minimal example is shown after this list). For existing RAMCloud users the keys have already been copied from your home directory on rcnfs.
    1. Add the cluster-login public key to ~/.ssh/authorized_keys .
      Note) Your home directory is shared with all the ATOM servers over NFS, so you can log in to every ATOM server with public key authentication.
  3. Initialize known_hosts:
    1. You can use /usr/local/scripts/knownhost_init.sh
      1. Usage)  /usr/local/scripts/knownhost_init.sh <ServerNumberFrom> <ServerNumberTo>
        If a host has already been initialized, the result of 'hostname' on the remote machine is displayed; otherwise you are asked whether to add the host to the known_hosts database, where you should type 'yes'.
      2. Example)
        $ knownhost_init.sh 1 20
            atom001
            atom002
                : 
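
A minimal sketch of step 2, assuming the cluster-login key pair is named 'id_rsa_atom' / 'id_rsa_atom.pub' (the actual file names and location may differ):

        $ cp <your-key-directory>/id_rsa_atom <your-key-directory>/id_rsa_atom.pub ~/.ssh/
        $ chmod 600 ~/.ssh/id_rsa_atom
        $ cat ~/.ssh/id_rsa_atom.pub >> ~/.ssh/authorized_keys
        $ chmod 600 ~/.ssh/authorized_keys
        $ ssh atom001 hostname    # should print 'atom001' without a password prompt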

Compile RAMCloud on the host 'mmatom')

  1. Git clone RAMCloud, which has been pre-tested for the ATOM cluster (see the sketch below).
    1. Directory structure:
  2. Compile
    • cd ramcloud; make DEBUG=no
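
A hedged end-to-end sketch of the clone-and-build steps; the repository path below is only a placeholder for the pre-tested local repository on 'mmatom':

        $ git clone <path-to-local-pretested-repository> ramcloud
        $ cd ramcloud
        $ make DEBUG=no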

Run clusterperf.py from 'mmatom')  - Prerequisite: the RAMCloud source has been compiled.

  • Settings are defined in localconfig.py and config.py, and are imported into clusterperf.py through cluster.py.
  • Basic settings to run RAMCloud applications on the ATOM servers are provided in config.py. So far we have tested the following; more commands will be tested:
    1. clusterperf.py basic  (default parameters equivalent to rccluster: replica=3, server=4, backup=1)
    2. TBD: clusterperf.py  (running all of the standard performance tests)
    3. TBD: recovery.py
  1. Reserve or lease ATOM servers:
    1. rcres will be ported to 'mmatom' too.
  2. Edit config.py for the servers you have reserved (a hedged sketch follows the run command below).
  3. Run clusterperf.py:
      $ scripts/clusterperf.py
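
A hedged illustration of step 2 (describing the servers you reserved). In RAMCloud's cluster scripts the host list is typically a list of (hostname, IP address, id) tuples read from config.py / localconfig.py; the file path, host names, and addresses below are placeholders only, not the real cluster values:

        $ cat scripts/localconfig.py
        # hypothetical excerpt -- replace with the ATOM servers you actually reserved
        hosts = []
        for i in range(1, 5):                  # e.g. atom001 .. atom004
            hosts.append(('atom%03d' % i, '10.0.0.%d' % i, i))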

Performance result)

clusterperf.py

recovery.py

System administrator's guide)

System configuration)
1. System photograph)

2. ATOM server module specification)

  1. 3 chassis are installed (the rack in the picture above contains 16 chassis).
  2. The installed ATOM modules use the C2730 (1.7 GHz, 8 cores/8 threads, 12 W).

3. Network connection)

  1. For historical reasons, and to allow for future experiments, the VLAN configuration differs between chassis.
  2. The management server is directly connected to the Internet; the cluster is isolated from other Stanford servers.
    1. The management server acts as the firewall, login server, NIS server, DHCP server, and PXE server for reconfiguring the ATOM servers.

Security solution)

 
