WHAT WE MEAN WHEN WE SAY THAT WE ARE CORE AWARE -
- Originally we thought this meant that applications would schedule to the number of cores.
- Now we believe that this means intelligently scheduling user threads onto cores, and intelligently determining when more cores are necessary to maintain low latency.
- Knowledge of how many cores are available moves up the stack.
- Originally that knowledge was deep down in the kernel.
- It has been moving higher and higher in the stack.
- How high up the stack does it need to go?
- It should go at least to the user-level threading library (Arachne).
- Knowledge of which resources are available thus becomes visible higher and higher up the stack.
- Old model: the application creates threads and the kernel schedules them.
- New model: the application queries the number of cores and schedules based on that.
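The new model can be sketched in a few lines; this is a minimal illustration, not the project's actual interface. `std::thread::hardware_concurrency()` stands in for whatever query the runtime (e.g. a core arbiter) would really expose, and `chooseWorkerCount` is an invented name:

```cpp
// Minimal sketch of the "new model": the application queries how many
// cores it can use and sizes its own scheduling accordingly, instead of
// creating kernel threads blindly and letting the kernel place them.
#include <thread>

unsigned chooseWorkerCount() {
    unsigned cores = std::thread::hardware_concurrency();
    return cores == 0 ? 1u : cores;  // the query is allowed to fail; fall back to 1
}
```

The application would then spawn exactly `chooseWorkerCount()` kernel threads, pin one per core, and multiplex its user-level threads onto them.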
MINIMUM TIME TO SCHEDULE A THREAD ONTO ANOTHER CORE -
- What is the correct architecture for the fast path, and what is the lowest possible time to schedule to another core?
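One candidate fast-path architecture (an assumption for illustration, not a settled design) is a single cache-line-aligned slot per core: the creating core publishes a task with one release-store, and the target core polls its slot. The handoff cost is then roughly one cross-core cache-line transfer in each direction, which suggests a lower bound on the scheduling latency. All names here (`TaskSlot`, `tryDispatch`, `pollOnce`) are invented:

```cpp
// Sketch: per-core mailbox for thread creation. The producer's store and
// the consumer's poll share one padded cache line, avoiding false sharing.
#include <atomic>
#include <thread>

using Task = void (*)();

struct alignas(64) TaskSlot {
    std::atomic<Task> task{nullptr};
};

// Fast path on the creating core: a single CAS if the slot is free.
bool tryDispatch(TaskSlot& slot, Task t) {
    Task expected = nullptr;
    return slot.task.compare_exchange_strong(expected, t,
                                             std::memory_order_release);
}

// Polling loop body on the target core: claim the task and run it.
void pollOnce(TaskSlot& slot) {
    Task t = slot.task.exchange(nullptr, std::memory_order_acquire);
    if (t) t();
}

std::atomic<int> g_ran{0};
void hello() { g_ran.fetch_add(1, std::memory_order_relaxed); }

// Demo: one "creating core" dispatches, one "target core" polls.
int runHandoffDemo() {
    TaskSlot slot;
    std::thread target([&slot] {
        while (g_ran.load(std::memory_order_relaxed) == 0) pollOnce(slot);
    });
    while (!tryDispatch(slot, hello)) {}
    target.join();
    return g_ran.load();
}
```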
HANDLING BLOCKING SYSCALLS -
- How much or how little kernel assistance do we need?
- Current idea is to migrate all the kernel threads doing syscalls onto one core. Then how do we ensure there is at least one kernel thread on every other idle core?
- Requires experimentation to evaluate performance.
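The idea of funneling blocking syscalls through one kernel thread can be sketched as a proxy: user threads hand their blocking operations to a dedicated thread and wait on a future, so the cores running the user-level scheduler never block in the kernel. This is one possible shape for the experiment, not a known design; `SyscallProxy` and `submit` are invented names:

```cpp
// Sketch: a dedicated kernel thread that executes blocking operations
// on behalf of user threads running elsewhere.
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

class SyscallProxy {
  public:
    SyscallProxy() : worker_([this] { run(); }) {}
    ~SyscallProxy() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Package a (potentially blocking) operation, run it on the dedicated
    // thread, and let the caller wait on the future instead of blocking a core.
    template <typename F>
    auto submit(F f) -> std::future<decltype(f())> {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
        auto fut = task->get_future();
        { std::lock_guard<std::mutex> lk(m_); q_.push([task] { (*task)(); }); }
        cv_.notify_one();
        return fut;
    }

  private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (true) {
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            if (q_.empty() && done_) return;
            auto job = std::move(q_.front());
            q_.pop();
            lk.unlock();
            job();  // the only place a kernel-level block can happen
            lk.lock();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_ = false;
    std::thread worker_;
};
```

Usage: `proxy.submit([] { return read(fd, buf, n); })` would return a future the user thread can yield on while the proxy thread blocks.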
DATA STRUCTURE FOR TASKS THAT DO NOT YET HAVE A CORE -
- Single Global Queue vs Per-Core Queues.
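The per-core-queue side of that tradeoff can be sketched with work stealing as the escape hatch: each core pushes and pops its own queue without contention, and touches another core's queue only when it runs dry. (A single global queue is the simpler dual: one queue, one lock, shared by all cores.) This is an illustrative sketch, not the project's data structure; names are invented:

```cpp
// Sketch: per-core task queues with stealing on local-queue exhaustion.
#include <deque>
#include <mutex>
#include <optional>
#include <vector>

using Task = int;  // stand-in for a real task descriptor

struct CoreQueue {
    std::mutex m;
    std::deque<Task> q;
};

class PerCoreQueues {
  public:
    explicit PerCoreQueues(int nCores) : queues_(nCores) {}

    void push(int core, Task t) {
        std::lock_guard<std::mutex> lk(queues_[core].m);
        queues_[core].q.push_back(t);
    }

    // Pop locally (usually uncontended); if empty, steal from another core.
    std::optional<Task> pop(int core) {
        if (auto t = popFrom(core)) return t;
        for (int i = 0; i < (int)queues_.size(); i++) {
            if (i == core) continue;
            if (auto t = popFrom(i)) return t;  // work stealing
        }
        return std::nullopt;
    }

  private:
    std::optional<Task> popFrom(int core) {
        std::lock_guard<std::mutex> lk(queues_[core].m);
        if (queues_[core].q.empty()) return std::nullopt;
        Task t = queues_[core].q.front();
        queues_[core].q.pop_front();
        return t;
    }
    std::vector<CoreQueue> queues_;
};
```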
HOW TO INTEGRATE PRIORITIES INTO THE THREAD CREATION MECHANISM
...
LOGICAL CONCURRENCY VS DESIRED PARALLELISM (CORE COUNT)
- They are far from identical, because of threading for convenience and deadlock avoidance.
- When should we ask the kernel for more cores, and when should we return cores to the kernel?
- Tradeoff: core efficiency vs latency vs throughput.
- Greedy core allocation is not necessarily best even for latency?
- How many cores would we be wasting if we did that?
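One way to frame the ask/return question is as a policy with hysteresis: request a core only after runnable user threads have exceeded the allocation for several intervals, and return one only after a sustained surplus, so the decision does not react to every transient spike. This is a toy sketch with invented names and thresholds; the notes say the real answer needs experimentation:

```cpp
// Toy core-allocation policy: hysteresis between requesting and
// returning cores, balancing latency against wasted (idle) cores.
enum class CoreDecision { Keep, RequestOne, ReturnOne };

struct CorePolicy {
    int overloadedIntervals = 0;
    int idleIntervals = 0;
    static constexpr int kRequestAfter = 3;  // intervals of overload before asking
    static constexpr int kReturnAfter = 10;  // intervals of surplus before giving back

    CoreDecision update(int runnableThreads, int allocatedCores) {
        if (runnableThreads > allocatedCores) {
            overloadedIntervals++;
            idleIntervals = 0;
            if (overloadedIntervals >= kRequestAfter) {
                overloadedIntervals = 0;
                return CoreDecision::RequestOne;  // latency is suffering
            }
        } else if (runnableThreads < allocatedCores && allocatedCores > 1) {
            idleIntervals++;
            overloadedIntervals = 0;
            if (idleIntervals >= kReturnAfter) {
                idleIntervals = 0;
                return CoreDecision::ReturnOne;   // stop wasting the core
            }
        } else {
            overloadedIntervals = 0;
            idleIntervals = 0;
        }
        return CoreDecision::Keep;
    }
};
```

The asymmetric thresholds encode the tradeoff from the notes: asking quickly protects latency, returning slowly avoids thrashing, and both waste some cores some of the time.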
TIMING GUARANTEES UNDER COOPERATIVE THREADING
...