We have published the initial RAMCloud sources to GitHub and are planning to merge them into PlatformLab/RAMCloud/master. We are still running trials, so please forgive the occasional glitch.
Contents)
- Cluster Outline
- User's Guide
- User Setup
- Running RAMCloud
- Compile RAMCloud
- RAMCloud for the ATOM server is cloned from GitHub (updated on Oct. 23, 2015)
- Run benchmark
- Compile and run applications
- Development and enhancement
- Performance result:
- clusterperf.py
- (TBD) recovery.py
- System administrator's guide
- Adding user with dedicated script
- Hardware maintenance
- NEC tools for the ATOM server are cloned from GitHub
- Security solution
- For management server setup
- Setup guide
- Config files
- Stanford's ATOM Cluster - configuration, etc
- System photograph
- Server module specification
- Network connection
Cluster Outline)
Basically the procedure is similar to rccluster, except that:
- NOTE) I have created a test repository in /home/satoshi/ramcloud-dpdk.git and tested it. You can clone the latest RAMCloud on ATOM for testing. Please try it; the instructions are written below.
- The ATOM server was moved from Stanford to NEC America in April 2016.
- The cluster has 132 ATOM servers (total: 1,056 cores, 4.1 TB DRAM, 16.5 TB SSD) connected to a dedicated management server, 'mmatom.necam.com' (formerly 'mmatom.scs.stanford.edu'), which is directly connected to the Internet.
Please take a look at the System configuration below for details.
- We are constructing an ATOM server home page at http://mmatom.necam.com/
- An unstable server can be disconnected with the management tool, and its IP address/hostname remains assigned; the IP address and hostname are always associated. You will find the details in System administrator's guide: Hardware maintenance.
User's Guide)
User setup)
- ssh login to the management server 'mmatom.scs.stanford.edu' with public key authentication and the special SSH port.
- Install your key pair for cluster login in ~/.ssh . For existing RAMCloud users, it has already been copied from your home directory.
- Add the cluster-login public key to ~/.ssh/authorized_keys .
Note)
- Your home directory is shared with all the ATOM servers via NFS, so you can log in to all atom servers with public key authentication.
- Do not copy your private key for mmatom login. Please create a different key pair for ssh from mmatom to the ATOM cluster.
- A key pair can be generated with a command such as: $ ssh-keygen -t rsa -b 2048
- To avoid ssh errors like 'Permission denied (publickey)':
  - Do not add a passphrase to the key pair for ATOM cluster login. Just type 'return' at the passphrase prompt of 'ssh-keygen'.
  - No group/other access permissions for ~/.ssh and ~/.ssh/authorized_keys: 0700 for the ~/.ssh directory, 0600 for ~/.ssh/authorized_keys.
  - Each public key in ~/.ssh/authorized_keys needs to be a single line without line breaks.
- Initialize known_host:
- You can use /usr/local/scripts/knownhost_init.sh
- Usage) /usr/local/scripts/knownhost_init.sh <ServerNumberFrom> <ServerNumberTo>
If the host is already initialized, the result of 'hostname' on the remote machine is displayed; otherwise you are prompted whether to add the host to the known_hosts database, where you should type 'yes'.
- Example)
$ knownhost_init.sh 1 20
atom001
atom002
:
- Note) You may see the following error:
  - 'Permission denied (publickey).'
  - If you followed 2.a and 2.b, it may be because the remote user information has not been created on atomXXX. We do not use NIS or LDAP so far. You need to ask your administrator to set up the remote user on atomXXX. See: 'System administrator's guide' below.
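The key-generation and permission rules above can be consolidated into one small script. This is only a sketch: the key file name 'id_rsa_atom' is made up for illustration, and standard OpenSSH path conventions are assumed.

```shell
# Sketch of the user-setup steps above. The key file name
# 'id_rsa_atom' is illustrative, not a required name.
setup_cluster_key() {
    local ssh_dir="$1"                       # normally "$HOME/.ssh"
    mkdir -p "$ssh_dir"
    # Generate the cluster key pair with an empty passphrase, as the
    # notes above recommend (a passphrase breaks scripted logins).
    [ -f "$ssh_dir/id_rsa_atom" ] || \
        ssh-keygen -t rsa -b 2048 -N "" -f "$ssh_dir/id_rsa_atom" -q
    # Append the public key (one single line) to authorized_keys.
    cat "$ssh_dir/id_rsa_atom.pub" >> "$ssh_dir/authorized_keys"
    # sshd rejects keys when group/other can access these paths.
    chmod 0700 "$ssh_dir"
    chmod 0600 "$ssh_dir/authorized_keys"
}
```

Run it as `setup_cluster_key "$HOME/.ssh"` on mmatom; after that, `ssh atom001` should work once your account exists on the node.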
Compile RAMCloud on the host 'mmatom')
- Git clone from the RAMCloud GitHub fork, which is pre-tested for the ATOM cluster. The directory structure is similar to the original Stanford RAMCloud. (We are using the DPDK source released in standard RAMCloud.)
$ git clone https://github.com/SMatsushi/RAMCloud.git
$ cd RAMCloud
$ git submodule update --init --recursive
- $ git checkout huawei-dpdk
  Note) This branch in the fork branches from new-dpdk at:
  commit 66fb471c31fd633e666b0a9a018ac7f1a0f758a3
  Author: Collin Lee
  Date: Wed Oct 7 02:32:33 2015 -0700
- Compile RAMCloud by typing make:
  $ make -j8 DEBUG=no
  (Previously: $ make -j8 ARCH=atom DEBUG=no DPDK=yes. You no longer need to specify the ARCH and DPDK flags, since ./private/MakefragPrivateTop sets the flags for ATOM.)
  Note)
  - The following server-specific make options are now defaults specified in GNUmakefile (to be modified to match original RAMCloud):
    CC=gcc44 CXX=g++44
    ARCH=atom
  - You need to have java and javac installed. We have tested java/javac version 1.7.0_91.
  - CC and CXX are specified directly in order to select between /usr/bin/gcc (4.8.3) and /usr/local/bin/gcc44 (4.4.7).
The tools below are located in ./scripts/ManagementTools so far. Please set your path or copy them into your scripts/bin directory.
- mmres/mmres.py, knownhost_init.sh, ipmiaw2, mmfilter
Run benchmark or application samples)
- Preparation:
- You need to have RAMCloud compiled.
- Reserve ( Lease ) ATOM servers with /usr/local/bin/mmres
- mmres has been ported from rcres. It manages resources for RAMCloud, e.g. backup files, DPDK resource files, etc. The backup replica is preserved until the mmres lease expires, so you can reuse the backup in a different program while the lease continues.
- Check usage with: $ mmres --help
Example)
$ mmres ls -l
$ mmres ls -l atom10-35 or $ mmres ls -l 10-35 // print range. You can use '..' instead of '-'.
$ mmres lease 14:00 atom10-35 -m 'Comment here!!'
Note)
- Added the 'dx2k' cluster. dx2k stands for DX-2000, which is a new XeonD-based micro modular server. Try typing: mmres ls -l dx2k
- Use local time on the server. Please check the local time with the 'date' command before trying a lease. 'mmres' does not take care of the user's timezone so far (a python library limitation).
- No need to edit scripts/config.py. The reserved servers are acquired from mmres in scripts/config.py.
- Now you can spawn benchmark or application tasks on the ATOM cluster from mmatom (the management server) with the python scripts.
- The scripts are customized for the ATOM Cluster so that we can run RAMCloud tests with the default options.
Limitations) to be fixed....
- You will see WARNINGs '(server not responding)'. They are seen even on normal job completion.
- A lot of DPDK debug messages starting with "EAL" or "PMD" are shown in stdout. Now fixing.
  Quick Hack: Run with the /usr/local/bin/mmfilter wrapper:
  Usage: mmfilter command arguments...
  $ mmfilter scripts/clusterperf.py basic
- Note: A shell script 'Run.sh' at the top-level directory takes care of compiling RAMCloud and running the benchmarks. You only need to run 'Run.sh' after 'mmres'. Please take a look into 'Run.sh' for details.
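The actual mmfilter lives in /usr/local/bin. As a rough sketch of the behavior described above (assuming it simply drops the lines beginning with 'EAL' or 'PMD' from stdout), an equivalent wrapper would be:

```shell
# Hypothetical stand-in for mmfilter: run the given command and drop
# DPDK debug lines ("EAL: ...", "PMD: ...") from its stdout.
# Note: the pipeline replaces the command's exit status with grep's.
mmfilter_sketch() {
    "$@" | grep -v -E '^(EAL|PMD)'
}
```

Usage would mirror the real tool: `mmfilter_sketch scripts/clusterperf.py basic`.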
- Run clusterperf.py) Its default parameters are replica=3, server=4, backup=1.
$ mmfilter scripts/clusterperf.py
Limitation)
- Fails after the fifth test, "readDistRandom", out of 9 tests, following "multiWrite_oneMaster". Debugging now.
- $ mmfilter scripts/clusterperf.py --transport=basic+dpdk [Tests]
  [Tests]: Names of tests. Please take a look into clusterperf.py for the available tests.
  Limitation)
  - Problem running the tests 'indexBasic' and 'indexMultiple': they are too slow and end with a timeout. All other tests run OK.
- Run recovery.py) Now debugging. The FastTransport bug fix should be in, but it still fails. As we had some bugs in scripts/*.py, we put the fix in the dpdk-recovHang branch of the fork SMatsushi/RAMCloud.
  - $ mmfilter scripts/recovery.py -v --timeout=1000 --transport=basic+dpdk
Run clientSample) The simple client code.
- $ cd clientSample
- $ make
- $ make run
- or $ mmfilter make run
- Note) Now it successfully completes the original 1M iterations.
Analysis or Debugging)
- Subcommands:
- scripts/clusterperf.py uses the run function defined in scripts/cluster.py.
- cluster.py refers to common.py for the sandbox. Through common.py, the default settings in config.py are referred to.
- Useful options in most of the python scripts:
  - -v for verbose; --dry (note the two dashes) for a dry run (just printing the commands and creating the log directory)
  - There are four standard transport settings for the ATOM server, which can be specified with the -T or --transport option. Please take a look at scripts/config.py for the options.
- Results and logfiles:
  - Output printed by the server or client is forwarded to standard out (the screen).
  - Log messages printed by RAMCLOUD_LOG(loglevel, format, args..) are stored in the logs directory.
  - There are four log levels (ERROR, WARNING, NOTICE, DEBUG). DEBUG logs are stored only when the log level is debug; higher-level log messages are printed even in lower log modes.
  - The log level is specified with the -l or --logLevel option of any python script.
  - Log files are stored in directories named "%date%time" under the ./logs directory where the command was executed, normally the top level where the obj.* and scripts directories exist.
- Useful commands are located in /usr/local/bin:
  - logCleanup.sh : Cleans up log directories. Without any arguments, it looks for the log directory under the current directory and asks before deleting. See its help with the -h option.
  - logSummary.pl : Prints a summary of the logs in the log directories. You can run it in the log directory or specify the directory as an argument. Check its options with -h. cluster.py or other scripts must be executed with the NOTICE or DEBUG option for logSummary.pl to analyze the coordinator log for server ids.
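Since the log directories are named by date and time, a small helper can pull the ERROR/WARNING lines out of the most recent run. This is a sketch, not one of the installed tools; it assumes the "%date%time" names sort lexicographically so that the newest directory sorts last.

```shell
# Print ERROR/WARNING lines from the most recent run under ./logs.
# Assumes the "%date%time" directory names sort lexicographically,
# so the last one is the newest.
latest_log_errors() {
    local logs="${1:-logs}"
    local newest
    newest=$(ls -1 "$logs" | sort | tail -n 1)
    [ -n "$newest" ] || { echo "no log directories in $logs" >&2; return 1; }
    grep -r -E 'ERROR|WARNING' "$logs/$newest" || true
}
```

Run it from the top-level directory (where ./logs lives) right after a failing clusterperf.py or recovery.py run.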
Performance result)
clusterperf.py
Note)
- basic.read100 takes 13 us on CentOS 7.1 and DPDK 2.0 with the latest release on GitHub. We are now debugging it. Please take a look at /usr/include/rte_version.h for the DPDK version. See also: http://dpdk.org/doc/api/rte__version_8h.html
- We are now trying to move the disturbing DPDK log output from standard out to a logfile; mmfilter is a filter that removes these messages from stdout. Messages starting with 'EAL' or 'PMD' are seen in stdout; please use the mmfilter wrapper to invoke a test, e.g. 'mmfilter clusterperf.py -v --transport=...', to remove them. We are going to resolve this by patching the DPDK source code.
Measured on Oct 23, 2015 (still debugging for stability and performance):
basic.read100 15.8 us read random 100B object (30B key)
basic.read100.min 15.29 us read random 100B object (30B key) minimum
basic.readBw100 5.8 MB/s bandwidth reading 100B objects (30B key)
:
basic.write100 46.7 us write random 100B object (30B key) median
basic.write100.min 44.8 us write random 100B object (30B key) minimum
basic.writeBw100 1.8 MB/s bandwidth writing 100B objects (30B key)
Measured on Jan. 15, 2016 :
basic.read100 13.6 us read random 100B object (30B key) median
basic.read100.min 12.29 us read random 100B object (30B key) minimum
basic.read100.9 13.9 us read random 100B object (30B key) 90%
basic.read100.99 18.7 us read random 100B object (30B key) 99%
basic.read100.999 35.5 us read random 100B object (30B key) 99.9%
basic.readBw100 6.7 MB/s bandwidth reading 100B objects (30B key)
basic.read1K 20.9 us read random 1KB object (30B key) median
basic.read1K.min 20.0 us read random 1KB object (30B key) minimum
basic.read1K.9 21.5 us read random 1KB object (30B key) 90%
basic.read1K.99 28.3 us read random 1KB object (30B key) 99%
basic.read1K.999 47.9 us read random 1KB object (30B key) 99.9%
basic.readBw1K 44.9 MB/s bandwidth reading 1KB objects (30B key)
basic.read10K 53.1 us read random 10KB object (30B key) median
basic.read10K.min 51.9 us read random 10KB object (30B key) minimum
basic.read10K.9 55.0 us read random 10KB object (30B key) 90%
basic.read10K.99 64.1 us read random 10KB object (30B key) 99%
basic.read10K.999 83.6 us read random 10KB object (30B key) 99.9%
basic.readBw10K 177.2 MB/s bandwidth reading 10KB objects (30B key)
basic.read100K 360.7 us read random 100KB object (30B key) median
basic.read100K.min 358.5 us read random 100KB object (30B key) minimum
basic.read100K.9 365.2 us read random 100KB object (30B key) 90%
basic.read100K.99 424.1 us read random 100KB object (30B key) 99%
basic.read100K.999 452.2 us read random 100KB object (30B key) 99.9%
basic.readBw100K 262.0 MB/s bandwidth reading 100KB objects (30B key)
basic.read1M 3.5 ms read random 1MB object (30B key) median
basic.read1M.min 3.4 ms read random 1MB object (30B key) minimum
basic.read1M.9 3.5 ms read random 1MB object (30B key) 90%
basic.read1M.99 3.5 ms read random 1MB object (30B key) 99%
basic.read1M.999 3.6 ms read random 1MB object (30B key) 99.9%
basic.readBw1M 273.3 MB/s bandwidth reading 1MB objects (30B key)
basic.write100 43.4 us write random 100B object (30B key) median
basic.write100.min 41.5 us write random 100B object (30B key) minimum
basic.write100.9 47.5 us write random 100B object (30B key) 90%
basic.write100.99 117.6 us write random 100B object (30B key) 99%
basic.write100.999 203.0 us write random 100B object (30B key) 99.9%
basic.writeBw100 1.9 MB/s bandwidth writing 100B objects (30B key)
basic.write1K 66.8 us write random 1KB object (30B key) median
basic.write1K.min 64.4 us write random 1KB object (30B key) minimum
basic.write1K.9 71.5 us write random 1KB object (30B key) 90%
basic.write1K.99 181.7 us write random 1KB object (30B key) 99%
basic.write1K.999 309.7 us write random 1KB object (30B key) 99.9%
basic.writeBw1K 12.2 MB/s bandwidth writing 1KB objects (30B key)
basic.write10K 210.3 us write random 10KB object (30B key) median
basic.write10K.min 205.5 us write random 10KB object (30B key) minimum
basic.write10K.9 217.5 us write random 10KB object (30B key) 90%
basic.write10K.99 514.1 us write random 10KB object (30B key) 99%
basic.write10K.999 837.4 us write random 10KB object (30B key) 99.9%
basic.writeBw10K 42.2 MB/s bandwidth writing 10KB objects (30B key)
basic.write100K 1.5 ms write random 100KB object (30B key) median
basic.write100K.min 1.5 ms write random 100KB object (30B key) minimum
basic.write100K.9 1.6 ms write random 100KB object (30B key) 90%
basic.write100K.99 2.3 ms write random 100KB object (30B key) 99%
basic.write100K.999 11.6 ms write random 100KB object (30B key) 99.9%
basic.writeBw100K 59.2 MB/s bandwidth writing 100KB objects (30B key)
basic.write1M 15.3 ms write random 1MB object (30B key) median
basic.write1M.min 15.2 ms write random 1MB object (30B key) minimum
basic.write1M.9 15.9 ms write random 1MB object (30B key) 90%
basic.write1M.99 27.9 ms write random 1MB object (30B key) 99%
basic.writeBw1M 60.2 MB/s bandwidth writing 1MB objects (30B key)
Measured on Wed 25 Feb 2015 12:31:42 PM PST :
basic.read100 13.456 us read single 100B object (30B key) median
basic.read100.min 12.423 us read single 100B object (30B key) minimum
basic.read100.9 13.837 us read single 100B object (30B key) 90%
basic.read100.99 17.698 us read single 100B object (30B key) 99%
basic.read100.999 24.356 us read single 100B object (30B key) 99.9%
basic.readBw100 4.7 MB/s bandwidth reading 100B object (30B key)
basic.read1K 20.425 us read single 1KB object (30B key) median
basic.read1K.min 19.593 us read single 1KB object (30B key) minimum
basic.read1K.9 20.786 us read single 1KB object (30B key) 90%
basic.read1K.99 24.306 us read single 1KB object (30B key) 99%
basic.read1K.999 36.569 us read single 1KB object (30B key) 99.9%
basic.readBw1K 33.4 MB/s bandwidth reading 1KB object (30B key)
basic.read10K 52.461 us read single 10KB object (30B key) median
basic.readBw10K 125.6 MB/s bandwidth reading 10KB object (30B key)
basic.read100K 358.567 us read single 100KB object (30B key) median
basic.readBw100K 187.6 MB/s bandwidth reading 100KB object (30B key)
basic.read1M 3.449 ms read single 1MB object (30B key) median
basic.readBw1M 212.1 MB/s bandwidth reading 1MB object (30B key)
basic.write100 43.307 us write single 100B object (30B key) median
basic.write100.min 41.031 us write single 100B object (30B key) minimum
basic.write100.9 47.528 us write single 100B object (30B key) 90%
basic.write100.99 86.543 us write single 100B object (30B key) 99%
basic.write100.999 38.363 ms write single 100B object (30B key) 99.9%
basic.writeBw100 542.6 KB/s bandwidth writing 100B object (30B key)
basic.write1K 63.481 us write single 1KB object (30B key) median
basic.write1K.min 60.453 us write single 1KB object (30B key) minimum
basic.write1K.9 66.720 us write single 1KB object (30B key) 90%
basic.write1K.99 126.391 us write single 1KB object (30B key) 99%
basic.write1K.999 41.500 ms write single 1KB object (30B key) 99.9%
basic.writeBw1K 4.9 MB/s bandwidth writing 1KB object (30B key)
basic.write10K 199.648 us write single 10KB object (30B key) median
basic.writeBw10K 12.0 MB/s bandwidth writing 10KB object (30B key)
basic.write100K 1.508 ms write single 100KB object (30B key) median
basic.writeBw100K 17.8 MB/s bandwidth writing 100KB object (30B key)
basic.write1M 41.949 ms write single 1MB object (30B key) median
basic.writeBw1M 21.8 MB/s bandwidth writing 1MB object (30B key)
# RAMCloud multiRead performance for 100 B objects with 30 byte keys
# located on a single master.
# Generated by 'clusterperf.py multiRead_oneMaster'
#
# Num Objs Num Masters Objs/Master Latency (us) Latency/Obj (us)
#----------------------------------------------------------------------------
1 1 1 23.0 22.99
2 1 2 28.3 14.15
3 1 3 33.2 11.07
9 1 9 49.5 5.50
50 1 50 168.7 3.37
60 1 60 235.5 3.93
70 1 70 209.1 2.99
Recovery.py
System administrator's guide)
Adding User)
So far we do not use NIS/LDAP for account management. Please use a script to setup new users.
NOTE)
User setup command for the ATOM cluster)
- Limitation: the mmuser command uses some helper routines located in the directory and places the data to be transferred to the servers on the NFS share.
- We need to copy /usr/local/mmutils/mmres to your home directory, which is NFS-shared to the ATOM cluster, and run mmuser.sh in that directory.
- You need to have 'sudo su' permission on the cluster servers, or become root with 'sudo su' on the management server.
- NOTE)
  - To run the following commands, you must already be a privileged user on the management server and all the atom servers. Please ask us to create or delete privileged users for the cluster.
  - <User*> below are account names on mmatom. <User*> must be non-privileged users.
- Add user(s) )
- $ cd /usr/local/mmutils/mmuser
- $ ./mmuser.sh <User1> [ <User2> .... ] // create users both on the management host and the cluster nodes.
- $ ./mmuser.sh -c <User1> [ <User2> .... ] // create users only on the cluster nodes.
- $ ./mmuser.sh -m <User1> [ <User2> .... ] // create users only on the management host.
- Delete user(s) )
- $ ./mmuser.sh -d <User1> [ <User2> ....]
- Deliver password-related files to cluster nodes) ... Since the Ansible scripts do not work, use another script.
- login or su to user 'admin'.
- cd ~admin
- $ remoteYum.sh <prefix> <from> <to> passwd
<prefix> : either 'atom' or 'dx'
<from> : starting host number
<to> : ending host number
e.g.: remoteYum.sh dx 1 10 passwd // delivers passwd, shadow, group, and sudoers to dx001 .. dx010
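The host-range arguments consumed by remoteYum.sh and knownhost_init.sh follow the zero-padded naming seen throughout this page (atom001, atom002, dx001, ...). A helper expanding "<prefix> <from> <to>" into hostnames could look like this; it is a sketch of the convention, not the scripts' actual internals:

```shell
# Expand a host prefix and numeric range into zero-padded hostnames,
# matching the atom001/dx001 naming convention used on this cluster.
expand_hosts() {
    local prefix="$1" from="$2" to="$3" n
    for n in $(seq "$from" "$to"); do
        printf '%s%03d\n' "$prefix" "$n"
    done
}
```

For example, `expand_hosts atom 1 20` lists the same hosts that `knownhost_init.sh 1 20` touches.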
Hardware maintenance)
- GitHub repository for NEC Tools for the ATOM Server
- The product name of the server is NEC Micro Server DX1000. Please refer to the repository for the product.
- Procedures:
- $ git clone https://github.com/SMatsushi/NECTools.git
- You will find scripts in NECTools/DX1000/scripts
- ipmiaw2 : a wrapper for ipmitool
- Save the IPMI username in ~/.atom/ipmi_user.txt and the IPMI password in ~/.atom/ipmi_password.txt
- ipmiaw2 adds the ipmitool options '-I lanplus -U <username> -P <password>', so you do not need to type them.
- You can specify the server and management port name with the range format. Just enter 'ipmiaw2' to see the usage.
- You will find the information in mmatom: /etc/hosts :
IPaddress hostname Mac Address Installed slot and host port connected.
- For atomXXX(Y), XXX always corresponds to the final octet of the IP address:
  e.g. atom100 == 192.168.3.100, atom110a == 192.168.4.110
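The credential-injection behavior of ipmiaw2 described above can be sketched as follows. The function body is an assumption (the real tool lives in NECTools/DX1000/scripts); the IPMITOOL and ATOM_CFG variables are hypothetical override hooks added here for illustration and testing.

```shell
# Minimal sketch of the ipmiaw2 behaviour: read the IPMI user and
# password from ~/.atom and prepend the common ipmitool options
# (-I lanplus -U ... -P ...) so you do not have to type them.
ipmi_wrap() {
    local cfg="${ATOM_CFG:-$HOME/.atom}"
    local user pass
    user=$(cat "$cfg/ipmi_user.txt")
    pass=$(cat "$cfg/ipmi_password.txt")
    "${IPMITOOL:-ipmitool}" -I lanplus -U "$user" -P "$pass" "$@"
}
```

With the credential files in place, something like `ipmi_wrap -H <bmc-host> power status` would query a module's BMC without retyping the login.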
Security solution)
- SSH into management server
- Job spawning to cluster servers from management server
- Cluster management
- Server console
- IPMI to CMM, BMC
- USB connection to ONS from front panel
return to Table of Contents.
For management server setup)
- Setup guide
- Config files referred in the setup guide
Stanford's ATOM Cluster - configuration, etc)
1. System photograph)
2. ATOM server module specification)
- 3 chassis are installed (the rack in the picture above contains 16 chassis).
- The installed ATOM modules use the C2730 (1.7 GHz, 8 cores/8 threads, 12 W).
3. Network connection) updated on Nov. 17, 2015
- Due to historical reasons and considering future experiments, the VLAN configuration differs between chassis. We would like to change every node to the default VLAN configuration, as in Chassis 1.
- The management server is directly connected to the Internet; the cluster is isolated from the other Stanford servers.
- The management server works as the firewall, login server, NIS server, DHCP server, and PXE server for reconfiguring the ATOM servers.