WHAT WE MEAN WHEN WE SAY THAT WE ARE CORE AWARE

 

 

MINIMUM TIME TO SCHEDULE A THREAD ONTO ANOTHER CORE

 

HANDLING BLOCKING SYSCALLS

 

DATA STRUCTURE FOR TASKS THAT DO NOT YET HAVE A CORE

 

 

HOW TO INTEGRATE PRIORITIES INTO THE CREATE-THREAD MECHANISM

 

 

LOGICAL CONCURRENCY VS DESIRED PARALLELISM (CORE COUNT)

 

 

TIMING GUARANTEES UNDER COOPERATIVE THREADING

 

PRIORITIES

 

 

HOW TO MAKE THE SCHEDULER VERY LOW LATENCY

 

BENEFITS / SELLING POINTS / CONTRIBUTIONS

 0. Making more resource information available at higher levels of software,

    and letting those higher levels leverage it to achieve better performance.

 1. Having many more threads than we could afford in the kernel

        ==> avoid deadlocks caused by running out of threads.

 2. Ability to run to completion (no preemption at weird times)

 3. Fast context switches

      ==> Reuse of idle time without paying for a kernel context switch

      ==> Allows us to get high throughput without giving up low latency

 4. Policies for relating the number of cores to logical concurrency (number of user threads)

    - Enable us to fill in the idle time without adding extra latency

 

 

PRIORITIES OF EXISTING THREADS VS THREADS WITHOUT A CORE

 

HOW TO PREVENT STARVATION

 

 

LOAD BALANCING BETWEEN CORES

 

 

USER API

KERNEL API