
WHAT WE MEAN WHEN WE SAY THAT WE ARE CORE AWARE

 - Originally we thought this meant that applications would schedule to the number of cores.

 - Now we believe that this means intelligently scheduling user threads onto

   cores, and intelligently determining when more cores are necessary to

   maintain low latency.

 - Knowledge of how many cores are available moves up the stack.

    - Originally that knowledge was buried deep down in the kernel.

    - It has been moving higher and higher up the stack.

    - How high up the stack does it go?

         - It goes at least up to user-level threading - Arachne.

         - Each layer gains knowledge of which resources are available to it.

         - That knowledge becomes visible higher and higher up the stack.

    - Old model - Application creates threads and the kernel schedules them.

    - New model - Application queries the number of available cores and schedules its own threads based on that.

 

 

MINIMUM TIME TO SCHEDULE A THREAD ONTO ANOTHER CORE

 - What is the correct architecture for the fast path?
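One candidate fast path, sketched below purely as a strawman (all names are hypothetical): each core exposes a single cache-line-aligned "mailbox" slot. The creating core publishes a task with one compare-and-swap, and the idle target core discovers it by polling, so the wakeup path involves no locks and no syscalls.

```cpp
#include <atomic>

// Hypothetical fast path: one cache-line-sized mailbox per core.
// Dispatch is a single CAS; an idle core polls its own mailbox.
struct alignas(64) CoreMailbox {
    std::atomic<void*> task{nullptr};
};

// Creator side: try to hand a task directly to an idle core.
// Fails (returns false) if the target core already has a task.
bool tryDispatch(CoreMailbox& box, void* task) {
    void* expected = nullptr;
    return box.task.compare_exchange_strong(expected, task,
                                            std::memory_order_release);
}

// Target side: called from the idle loop; claims the task, if any.
void* pollForTask(CoreMailbox& box) {
    return box.task.exchange(nullptr, std::memory_order_acquire);
}
```

The release/acquire pairing ensures the task's contents, written before dispatch, are visible to the core that claims it.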

 

HANDLING BLOCKING SYSCALLS

 - How much or how little kernel assistance do we need?

 - Current idea is to migrate all the kernel threads doing syscalls onto one

   core. Then how do we ensure there is at least one kernel thread on every

   other idle core?

 - Requires experimentation for performance.
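One way the single-syscall-core idea could look, sketched with standard threads (all names illustrative; a real design would pin the worker to the syscall core and let the calling user thread yield instead of blocking on the future):

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

// Sketch: blocking syscalls are shipped to one dedicated kernel thread
// (the "syscall core"), so the kernel threads running the user-level
// scheduler never block. SyscallCore is a hypothetical name.
class SyscallCore {
    std::queue<std::function<void()>> work;
    std::mutex lock;
    std::condition_variable cv;
    bool done = false;
    std::thread worker;

public:
    SyscallCore() : worker([this] { run(); }) {}
    ~SyscallCore() {
        { std::lock_guard<std::mutex> g(lock); done = true; }
        cv.notify_one();
        worker.join();
    }

    // Queue a potentially blocking call; the caller gets a future and
    // (in a real system) would yield its user thread until it is ready.
    template <typename F>
    auto submit(F f) -> std::future<decltype(f())> {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(f);
        auto fut = task->get_future();
        { std::lock_guard<std::mutex> g(lock); work.push([task] { (*task)(); }); }
        cv.notify_one();
        return fut;
    }

private:
    void run() {
        std::unique_lock<std::mutex> g(lock);
        while (!done || !work.empty()) {
            cv.wait(g, [this] { return done || !work.empty(); });
            while (!work.empty()) {
                auto f = std::move(work.front());
                work.pop();
                g.unlock();
                f();  // the blocking syscall executes here
                g.lock();
            }
        }
    }
};
```

This only covers the "migrate syscalls to one core" half; keeping at least one runnable kernel thread on every other core remains the open question above.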

 

DATA STRUCTURE FOR TASKS THAT DO NOT YET HAVE A CORE

 - Single Global Queue vs Per-Core Queues.
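The trade-off in miniature (names are illustrative, not from an implementation): a single global queue is simple and never strands work, but every core contends on one lock; per-core queues keep the common case contention-free, at the cost of a steal path so runnable tasks cannot strand behind a busy core.

```cpp
#include <deque>
#include <mutex>
#include <vector>

// Sketch of the per-core option with stealing as a fallback.
// Plain ints stand in for runnable threads.
struct TaskQueue {
    std::mutex lock;
    std::deque<int> tasks;
};

struct Scheduler {
    std::vector<TaskQueue> queues;  // one queue per core
    explicit Scheduler(size_t cores) : queues(cores) {}

    void push(size_t core, int task) {
        std::lock_guard<std::mutex> g(queues[core].lock);
        queues[core].tasks.push_back(task);
    }

    // Pop from the local queue first; otherwise steal from another core.
    bool pop(size_t core, int& task) {
        {
            std::lock_guard<std::mutex> g(queues[core].lock);
            if (!queues[core].tasks.empty()) {
                task = queues[core].tasks.front();
                queues[core].tasks.pop_front();
                return true;
            }
        }
        for (size_t victim = 0; victim < queues.size(); victim++) {
            if (victim == core) continue;
            std::lock_guard<std::mutex> g(queues[victim].lock);
            if (!queues[victim].tasks.empty()) {
                task = queues[victim].tasks.back();  // steal from the tail
                queues[victim].tasks.pop_back();
                return true;
            }
        }
        return false;
    }
};
```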

 

 

HOW TO INTEGRATE PRIORITIES INTO THE THREAD CREATION MECHANISM

LOGICAL CONCURRENCY VS DESIRED PARALLELISM (CORE COUNT)

TIMING GUARANTEES UNDER COOPERATIVE THREADING

PRIORITIES

HOW TO MAKE SCHEDULER VERY LOW LATENCY

STACK MANAGEMENT & RECYCLING

BENEFITS / SELLING POINTS / CONTRIBUTIONS

PRIORITIES OF EXISTING THREADS VS THREADS WITHOUT A CORE

HOW TO PREVENT STARVATION

LOAD BALANCING BETWEEN CORES

USER API

KERNEL API
