0. Making more resource information available at higher levels of software, and letting those higher levels leverage it for better performance.

  • This point is a little too vague, but maybe we can make it more concrete once our implementation is working.

1. Having many more threads than we could afford in the kernel

      ==> Avoids deadlocks caused by thread limits
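The page does not name a runtime, but the idea can be illustrated with Python's asyncio, where each task is a small user-level object rather than a kernel thread, so counts that would exhaust a kernel-thread budget are routine:

```python
import asyncio

async def worker(i: int) -> int:
    # Each task is a user-level "thread": a small heap object,
    # not a kernel thread, so tens of thousands are affordable.
    await asyncio.sleep(0)  # yield to the scheduler once
    return i

async def main() -> int:
    # Spawning 50,000 kernel threads would exhaust most systems;
    # 50,000 user-level tasks is routine.
    tasks = [asyncio.create_task(worker(i)) for i in range(50_000)]
    results = await asyncio.gather(*tasks)
    return sum(results)

total = asyncio.run(main())
print(total)  # 1249975000 (sum of 0..49999)
```

Because threads are cheap, a design can dedicate one per blocked request instead of rationing a small kernel-thread pool, which is where the deadlock risk comes from.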

2. Ability to run tasks to completion (no preemption at awkward times)
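Run-to-completion scheduling can be sketched with a minimal cooperative scheduler over Python generators (an illustration, not the implementation): each task runs uninterrupted until it yields voluntarily, so there are no preemption points inside a step.

```python
from collections import deque

def scheduler(tasks):
    # Minimal run-to-completion scheduler: each generator runs
    # until it yields voluntarily; it is never preempted mid-step.
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            step = next(task)   # one full step, uninterrupted
            trace.append(step)
            ready.append(task)  # round-robin: back of the queue
        except StopIteration:
            pass                # task ran to completion
    return trace

def task(name, steps):
    for i in range(steps):
        # Everything between two yields executes atomically with
        # respect to other tasks -- no awkward preemption points.
        yield f"{name}{i}"

order = scheduler([task("a", 2), task("b", 2)])
print(order)  # ['a0', 'b0', 'a1', 'b1']
```

Since a task is only switched out at points it chooses, it never holds a lock or a half-updated structure across a context switch.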

3. Fast context switches

      ==> Reuse of idle time without paying for a kernel context switch

      ==> Allows us to get high throughput without giving up low latency

4. Policies for relating the number of cores to the logical concurrency (number of user threads)

    - Enable us to fill idle time without adding extra latency

    - Allow the application to offer concurrency that matches the available cores (avoiding kernel-thread multiplexing)
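One such policy, sketched in Python (the pool size and workload are illustrative assumptions): size the worker pool to the core count reported by the OS, so the kernel never has to multiplex our workers onto fewer cores.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Policy sketch: offer exactly as much parallelism as there are
# cores, so the kernel never multiplexes our workers.
cores = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=cores) as pool:
    squares = list(pool.map(lambda x: x * x, range(8)))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```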

5. Effective load-balancing by using very short tasks.
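The load-balancing effect of short tasks can be shown with a shared work queue (again an illustration, using kernel threads only because Python has them handy): a worker that finishes early simply pulls the next short task, so imbalance stays bounded by one task's cost.

```python
import queue
import threading

# Sketch: many short tasks pulled from one shared queue. A worker
# that finishes early just pulls the next task, so load balances
# itself without any explicit migration.
work = queue.Queue()
for i in range(1000):
    work.put(i)

counts = {}
lock = threading.Lock()

def worker(name):
    done = 0
    while True:
        try:
            work.get_nowait()
        except queue.Empty:
            break
        done += 1  # a real task would do a tiny unit of work here
    with lock:
        counts[name] = done

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(counts.values())
print(total)  # 1000: every task was processed exactly once
```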

6. Reduced tail latency
