2015-08-21 Meeting notes

Date

Attendees

Goals

  • Discuss the latest simulation results compiled by the MetricsDashboard.py script
  • Plan the next steps in the simulation. Possible choices:
    • Adding more statistics
    • Generating graphs from the vector result file
    • Implementing priorities
    • Implementing unscheduled traffic
    • Implementing retransmission
    • Implementing the high-level RPC request/response behavior

Dashboard stats for different ByteBucket rates:

  • ByteBucket rate at 98% (9.8 Gb/s)
  • ByteBucket rate at 97% (9.7 Gb/s)
  • ByteBucket rate at 96% (9.6 Gb/s)
  • ByteBucket rate at 95% (9.5 Gb/s)

Discussion items

Time | Item              | Who | Notes
All  | The metrics above | All | Decided to implement the action items below



Action items

  • Behnam Montazeri

    To Do List

    1. Change the ByteBucket to send grants that account for the header bytes and all other overheads that are actually sent on the wire (a sketch of this accounting appears at the end of these notes). DONE

    2. We can't have an open-loop grant mechanism. The problem is similar to the one we want to solve for the core of the network. TODO

    3. Keep a list of issues that we want to return to later. ONGOING

    4. Account for think time in the IP or MAC layer for the packets: we do not want to react to request and grant packets right away. Add 1.7 us of think time in each direction. DONE

    5. Turn the propagation delay on the links back on, using 250 ns per link. DONE

    6. Look at the link parameter for sending at the start of the frame, and check whether this parameter can be used to implement a cut-through router. TODO

    7. Trace detailed packet transmission times in different parts of the network to see how a packet goes through and where the time is spent. TODO

    8. Run the simulation at a very light load (say 0.5% of the workload) and check that the minimum end-to-end message stretch reaches zero (see the stretch sketch at the end of these notes). DONE

    9. Use a message size distribution like this: DONE

       Message size   Fraction of messages
       100 B          50%
       1472 B         15%
       10 KB          10%
       100 KB         24.9%
       1 MB           0.1%

       Run three different cases for the total load seen from the receiver's perspective: 1%, 50%, and 80%. (A sampling sketch for this workload appears at the end of these notes.)

    10. Also add the case where all the senders are on different racks from the receiver. DONE

    11. In the end-to-end delay table, add percentage and total rows for the count and mean columns. DONE

    12. Add a new table that shows the difference between real latency and ideal latency (a stretch/latency sketch appears at the end of these notes). DONE

    13. Redo the signal and statistics collection mechanism for end-to-end delays.
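
Sketch for action item 1: counting wire overheads when the ByteBucket charges grants. This is only an illustration of the accounting, not the simulator code; it assumes the transport runs over UDP/IP, the HOMA_HEADER_BYTES value is a placeholder, and the Ethernet/IP/UDP constants are the standard values.

    # Illustrative only: charge the ByteBucket for what a grant actually
    # puts on the wire, not just for the message data bytes.

    ETH_PREAMBLE_BYTES = 8      # preamble + start-of-frame delimiter
    ETH_HEADER_BYTES   = 14     # dst MAC, src MAC, EtherType
    ETH_FCS_BYTES      = 4      # frame check sequence
    ETH_IFG_BYTES      = 12     # inter-frame gap also consumes link time
    ETH_MIN_PAYLOAD    = 46     # short frames are padded up to this size
    IP_HEADER_BYTES    = 20
    UDP_HEADER_BYTES   = 8
    HOMA_HEADER_BYTES  = 30     # placeholder transport header size (assumption)

    def bytes_on_wire(data_bytes):
        """Bytes a packet carrying data_bytes of message data occupies on the wire."""
        payload = data_bytes + HOMA_HEADER_BYTES + UDP_HEADER_BYTES + IP_HEADER_BYTES
        payload = max(payload, ETH_MIN_PAYLOAD)   # Ethernet minimum-frame padding
        return (ETH_PREAMBLE_BYTES + ETH_HEADER_BYTES + payload
                + ETH_FCS_BYTES + ETH_IFG_BYTES)

    # A grant for data_bytes of message data should drain the ByteBucket by
    # bytes_on_wire(data_bytes) rather than by data_bytes.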
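
Sketch for action item 9: sampling message sizes from the listed distribution. The sizes and fractions come straight from the item above; the function names, the RNG seeding, the interpretation of KB/MB as powers of 10, and the 10 Gb/s link rate used in the load estimate are illustrative assumptions.

    import random

    # Message size distribution from action item 9 (sizes in bytes).
    SIZES_BYTES = [100, 1472, 10_000, 100_000, 1_000_000]
    WEIGHTS     = [0.50, 0.15, 0.10, 0.249, 0.001]

    def sample_message_sizes(n, seed=0):
        """Draw n message sizes according to the distribution above."""
        rng = random.Random(seed)
        return rng.choices(SIZES_BYTES, weights=WEIGHTS, k=n)

    def offered_load_fraction(messages_per_sec, link_bps=10e9, n_samples=100_000):
        """Rough estimate of receiver load as a fraction of link capacity."""
        mean_bytes = sum(sample_message_sizes(n_samples)) / n_samples
        return messages_per_sec * mean_bytes * 8 / link_bps

    # Pick messages_per_sec so that offered_load_fraction(...) is roughly
    # 0.01, 0.50, or 0.80 to produce the 1%, 50%, and 80% load cases.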
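
Sketch for action items 8 and 12: comparing real latency with ideal latency. It assumes stretch is measured as (observed - ideal) / ideal, so a message delivered in the ideal time has stretch zero (matching the "minimum stretch reaches zero" check in item 8); the ideal-latency model, the 10 Gb/s default, and the table layout are assumptions, not the exact formulas used by MetricsDashboard.py.

    def ideal_latency_sec(message_bytes, link_bps=10e9, base_delay_sec=0.0):
        """Best-case one-way latency: serialization at full link rate plus a
        fixed base delay (propagation, switching); per-packet overheads ignored."""
        return message_bytes * 8 / link_bps + base_delay_sec

    def stretch(observed_sec, message_bytes):
        """Relative inflation over the ideal latency; 0.0 means the message
        finished in the ideal time."""
        ideal = ideal_latency_sec(message_bytes)
        return (observed_sec - ideal) / ideal

    def latency_gap_table(samples):
        """samples: iterable of (message_bytes, observed_sec) pairs. Returns rows
        (message_bytes, observed_sec, ideal_sec, observed - ideal, stretch),
        i.e. one possible form of the real-vs-ideal latency table in item 12."""
        rows = []
        for message_bytes, observed_sec in samples:
            ideal = ideal_latency_sec(message_bytes)
            rows.append((message_bytes, observed_sec, ideal,
                         observed_sec - ideal, stretch(observed_sec, message_bytes)))
        return rows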