
60 Cards in this Set

  • Front
  • Back

What does an OS scheduler do?

Schedule work

What are some different ways the OS scheduler can schedule work?

1. Assign tasks immediately (FCFS)


2. Assign simple tasks first (SJF)


3. Assign complex tasks first (to maximize utilization of CPU, devices, memory)

What does the CPU scheduler do?

Decides how and when processes (and their threads) access shared CPUs

When does the CPU scheduler run?

1. When the CPU becomes idle


2. New task becomes ready


3. A timeslice expires (timeout)

What happens once a thread is scheduled?

It is dispatched on the CPU

What happens when a thread is dispatched on the CPU?

1. Perform a context switch to enter the context of the new thread


2. Enter User Mode


3. Set program counter

How does the OS know which thread to schedule when several are ready?

The OS scheduler selects a task using the appropriate scheduling policy/algorithm.

What is Run-To-Completion scheduling?

Scheduling in which, once a thread starts, it runs until completion.

What is the First-Come First-Serve (FCFS) algorithm?

Schedules tasks in the order they arrived

How do you calculate throughput?

Number of tasks / Total Completion Time (including context switches)
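As a sketch of this formula (the task and context-switch times below are assumed for illustration, not from the deck):

```python
def throughput(num_tasks, total_time):
    """Tasks completed per unit of total time, including context-switch overhead."""
    return num_tasks / total_time

# Hypothetical: two 1 ms tasks separated by one 0.1 ms context switch.
throughput(2, 1 + 0.1 + 1)  # ≈ 0.952 tasks per ms
```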

How do you calculate average wait time?

Sum of the start times of each task / number of tasks.




For example, in a round robin with a 1 ms timeslice, two tasks, and a 0.1 ms context switch, it would be (0 + (1 + 0.1)) / 2 = 0.55 ms




For 2 tasks that run to completion and each take 4 seconds, it would be (0 + 4) / 2 = 2 seconds
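Both examples on this card can be checked with a small Python helper (a sketch; the function name is made up):

```python
def avg_wait_time(start_times):
    """Average wait time = sum of each task's start time / number of tasks."""
    return sum(start_times) / len(start_times)

# Round robin, 1 ms timeslice, 0.1 ms context switch:
# T1 starts at 0; T2 starts after T1's timeslice plus one context switch.
avg_wait_time([0, 1 + 0.1])  # 0.55 ms

# Run-to-completion, two 4-second tasks: T2 starts when T1 finishes.
avg_wait_time([0, 4])        # 2.0 seconds
```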

Assume there are 3 threads that are executing on a single CPU, scheduled using the FCFS algorithm. T1 takes 1 second to complete, T2 takes 10 seconds, and T3 one second. What is the average completion time to complete all threads, where they are executed sequentially? What about the average wait time?

Avg. Completion Time: (1 + 11 + 12) / 3 = 8sec.


Note: Remember T1 takes one second to complete, so it's actually going to take 11 seconds before T2 completes. Likewise, T3 completes at 12 seconds, since T1 and T2 together take 11 seconds.




Avg. Wait Time: (0 + 1 + 11) /3 = 4sec


Note: T2 had to wait one second, while T3 had to wait 11 seconds for both T1 and T2 to complete.
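The numbers in this card can be reproduced with a short simulation (a sketch; `fcfs_metrics` is a made-up helper name):

```python
def fcfs_metrics(exec_times):
    """Run tasks in arrival order on one CPU (run-to-completion).
    Returns (average completion time, average wait time)."""
    clock = 0
    completions, waits = [], []
    for t in exec_times:
        waits.append(clock)       # a task waits until all earlier tasks finish
        clock += t
        completions.append(clock)
    n = len(exec_times)
    return sum(completions) / n, sum(waits) / n

fcfs_metrics([1, 10, 1])  # → (8.0, 4.0)
```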

What is an advantage to the FCFS algorithm?

Advantage(s)


1. Simple implementation




Disadvantage(s)


1. Wait time is poor if there are shorter tasks waiting on a long task.

What is the Shortest Job First (SJF) algorithm?

Schedules tasks in order of their execution time

What is Preemptive Scheduling?

Scheduling that can be interrupted

What is a windowed average?

The average of the times it took a task to complete on its previous runs. For example, thread 1 ran 3 times: the first time it completed in 3 seconds, the second time in 2.5 seconds, and the last time in only 2 seconds. The windowed average would be (3 + 2.5 + 2) / 3 = 2.5 seconds
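A minimal sketch of that calculation in Python:

```python
def windowed_average(run_times, window=3):
    """Estimate the next execution time from the last `window` observed runs."""
    recent = run_times[-window:]
    return sum(recent) / len(recent)

windowed_average([3.0, 2.5, 2.0])  # → 2.5
```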

What is priority scheduling?

Running tasks in order of priority.

What are some ways a priority-based scheduler can assess both the available running threads and their priority?

1. Multiple, priority-based runqueues


2. Tree-like structure based on priority

What is starvation relating to multiple, priority-based runqueues? How can it be overcome?

Definition: A situation in which high-priority tasks keep arriving in a runqueue and are always scheduled ahead of the tasks in lower-priority runqueues, thereby starving (preventing the scheduling of) those lower-priority tasks.




Solution: Use priority aging, implemented as a function of (actual priority, time spent in runqueue). Tasks that have stayed in a runqueue longer assume a higher effective priority, so they eventually get scheduled.
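A sketch of such an aging function (the linear form and `aging_rate` are assumptions for illustration; a lower numeric value means a higher priority here, as in the Linux convention):

```python
def effective_priority(actual_priority, time_in_queue, aging_rate=0.1):
    """Priority aging: the longer a task has waited in its runqueue,
    the higher (numerically lower) its effective priority becomes."""
    return actual_priority - aging_rate * time_in_queue
```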

An OS scheduler uses a priority-based algorithm with preemption to schedule tasks. Determine the finishing times of each task.

T1: Finishes at 8


T2: Finishes at 10


T3: Finishes at 11

What is Priority Inversion?

A situation that can occur in which a higher-priority thread ends up finishing last because it was blocked earlier while trying to lock a shared resource that was already locked by a lower-priority thread.

How can priority inversion be solved?

It can be solved by temporarily boosting the priority of the mutex owner

What is Round Robin Scheduling?

Scheduling in which tasks are executed one after another in a circular order.

What is a difference between Round Robin Scheduling and FCFS scheduling?

Round Robin scheduling allows a task to yield the CPU to wait on I/O.

What is Round Robin with Priorities?

Whenever a higher-priority task arrives while the current task is still running, the current task will be preempted.

What is Round Robin with Interleaving?

Tasks will have a set amount of time to execute, and then will be interrupted, allowing another task to run.

What is a timeslice (time quantum)?

Maximum amount of uninterrupted time given to a task.

What do timeslices allow?

They allow for tasks to be interleaved (i.e. timeshared).

What happens when a task's timeslice expires?

The task is preempted.

What are some advantages and disadvantages to timeslice scheduling?

Advantage(s)


1. Shorter tasks finish sooner


2. More responsive


3. Lengthy I/O operations are initiated sooner




Disadvantage(s)


1. Overheads (interrupt running task, run the CPU scheduler, context switching, etc)

How should the length of a timeslice be determined?

By balancing benefits with overhead

True or False: A CPU-bound task performs better with a smaller timeslice

False; a CPU-bound task prefers a larger timeslice

True or False: An I/O-bound task performs better with a smaller timeslice

True

If we want I/O and CPU bound tasks to have different timeslice values, what are some options?

1. Have the same runqueue, but check the task type

2. Two different runqueue structures

3. A Multi-Level Feedback Queue (MLFQ) scheduler, in which tasks coming into the queue are assumed to be I/O intensive. If a task turns out to be more CPU intensive, it is pushed to a lower level. On the other hand, if a lower-priority task starts to experience I/O waits, its priority can be boosted.
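A toy sketch of the MLFQ adjustment rule described in option 3 (level 0 is the highest priority; the three-level layout and function name are assumptions):

```python
def adjust_level(level, used_full_timeslice, num_levels=3):
    """MLFQ feedback rule: level 0 = highest priority (most I/O-intensive).
    Burning a full timeslice demotes a task; blocking on I/O boosts it."""
    if used_full_timeslice:
        return min(level + 1, num_levels - 1)  # demote CPU-intensive task
    return max(level - 1, 0)                   # boost task that waited on I/O
```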

What is the O(1) Linux Scheduler?

A Linux CPU scheduler that is able to select/add a task to the runqueue in constant time.

How many priority levels are in the O(1) Linux scheduler?

140


0 - highest priority


139 - lowest

In the O(1) Linux scheduler, what levels do real-time tasks fall under? What about timesharing tasks?

0-99 - Real-time tasks


100-139 - Timesharing

In the O(1) Linux Scheduler, what is the default level for a user process?

120

How is the priority of a user process adjusted in the O(1) Linux scheduler?

Using a nice value

Regarding the O(1) Linux scheduler, what is the valid range of values for a "nice value"?

-20 to 19. This allows the default level of 120 to be shifted anywhere from 100 to 139, covering the entire priority range for timesharing tasks.
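The mapping can be sketched as (the helper name is hypothetical):

```python
def timesharing_priority(nice):
    """Map a nice value onto the O(1) scheduler's timesharing range:
    default level 120 shifted by nice (-20..19) gives 100..139."""
    if not -20 <= nice <= 19:
        raise ValueError("nice must be in [-20, 19]")
    return 120 + nice

timesharing_priority(-20)  # → 100
timesharing_priority(19)   # → 139
```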

How is the O(1) scheduler similar to the MLFQ scheduler? How is it different?

Similar:


1. They both assign different timeslice values for different priority levels.


2. They both use feedback from how the tasks behaved in the past to determine the priority levels in the future.




Different:


1. How it assigns timeslice values to priorities: it assigns the smallest timeslices to low-priority, CPU-bound tasks and the largest timeslices to interactive, higher-priority tasks.


2. How it uses the feedback: it determines how interactive a task is based on how long it sleeps (e.g. waiting for user input) and assigns a higher priority to tasks that sleep longer. Shorter sleep times mean less interactive, so those tasks receive a lower priority.

How is the runqueue organized in the O(1) Linux task scheduler?

Two arrays of task queues, named active and expired. Each array element points to the first runnable task at the corresponding priority level. The active array is used by the scheduler to pick the next task to run. The expired array is where a task goes when it has consumed its entire timeslice.

Regarding the runqueue for the Linux O(1) task scheduler, how do tasks in the expired array become active again (i.e. moved to the active array)?

When all of the tasks in the active array have consumed their timeslices, the pointers for the two arrays are swapped, and all the expired tasks become active.
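A toy Python sketch of the active/expired swap (linear scans stand in for the kernel's priority bitmap, which is what makes the real lookup constant time; the class and method names are made up):

```python
class O1Runqueue:
    """Two arrays of per-priority task queues, swapped when active drains."""
    def __init__(self, levels=140):
        self.active = [[] for _ in range(levels)]
        self.expired = [[] for _ in range(levels)]

    def enqueue_expired(self, task, priority):
        # A task that used up its whole timeslice goes to the expired array.
        self.expired[priority].append(task)

    def pick_next(self):
        for queue in self.active:  # kernel uses a bitmap to find this level
            if queue:
                return queue.pop(0)
        # Active array drained: swap the two array pointers and retry.
        self.active, self.expired = self.expired, self.active
        for queue in self.active:
            if queue:
                return queue.pop(0)
        return None
```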

When was the O(1) Linux scheduler introduced?

In the Linux kernel 2.5

What was the O(1) scheduler replaced with? When was it replaced?

It was replaced with the Completely Fair Scheduler (CFS) starting with Linux kernel 2.6.23. CFS is currently the default scheduler for non-real-time tasks.

What are some problems with the O(1) Scheduler?

1. The performance of interactive tasks was not ideal.


2. It doesn't make any fairness guarantees

What is the runqueue called for the Linux Completely Fair Scheduler (CFS)?

A Red-Black Tree

How does the Linux Completely Fair Scheduler (CFS) work?

1. CFS always picks the leftmost node


2. CFS will periodically increase the vruntime of the task that's currently executing


3. It will then compare the vruntime of the current running task with the left-most task in the tree. If the vruntime of the currently running task is smaller, it will continue running; if larger, preempt and place appropriately in the tree.
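The three steps can be sketched with a min-heap standing in for the red-black tree (an approximation; real CFS also weights vruntime by task priority, which this ignores):

```python
import heapq

def cfs_step(tree, current, tick=1):
    """One scheduling check. `tree` holds (vruntime, task) pairs as a min-heap,
    so tree[0] is the leftmost node; `current` is the running (vruntime, task)."""
    vruntime, task = current
    vruntime += tick                    # periodically increase current's vruntime
    if tree and vruntime > tree[0][0]:  # leftmost node now has smaller vruntime
        heapq.heappush(tree, (vruntime, task))
        return heapq.heappop(tree)      # preempt: run the leftmost task instead
    return (vruntime, task)             # otherwise keep running the current task
```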

What does a Shared Memory Multiprocessor architecture look like?

Multiple CPUs have their own private level caches (L1, L2...), there are last level caches (LLCs) that may or may not be shared among the CPUs, and system memory (DRAM) may or may not be shared among the CPUs.

What does a Multicore architecture look like?

Each CPU can have multiple, internal cores (CPUs), with each core having its own private L1, L2 caches. Overall the entire CPU will have some shared last-level cache (LLC) along with shared system memory.

Why is it important that a CPU scheduler schedule a thread on the same CPU where it has executed before? How can this be achieved?

Because it is more likely that data used by the thread will still be in that CPU's cache. This can be achieved using a hierarchical scheduler architecture in which a top-level load-balancing component divides tasks among CPUs. Then, a per-CPU scheduler with a per-CPU runqueue repeatedly schedules those tasks on the same CPU as much as possible. To balance the workload, the top-level component can look at the queues and perform load balancing based on queue length and CPU idle time.

What is a Non-Uniform Memory Access (NUMA) platform?

A system in which interconnected components (including memory and CPUs) allow access to remote memory on another CPU through a special interconnect.

In a Non-Uniform Memory Access (NUMA) platform, is it faster to access local or remote memory?

Local is faster.

What is NUMA-aware scheduling?

Scheduling in which tasks are scheduled on a CPU that is closer to the memory containing the state of those tasks.

Why does a CPU have to context switch amongst threads?

Because the CPU only has one set of registers to describe a given context

What is Hyperthreading? What are other names for hyperthreading?

-A mechanism in which there are multiple hardware-supported execution contexts (threads).


-These contexts are still on a single CPU but have a very fast context switch.


-Hardware multithreading, hyperthreading, chip multithreading (CMT), simultaneous multithreading (SMT).

What questions does hyperthreading support raise in regards to CPU scheduling?

What kind of tasks should we schedule on hardware threads?

For optimal hyperthreading performance using two hardware threads, how should the threads be scheduled?

One CPU-bound thread and one memory-bound thread.

In regards to scheduling for hyperthreaded platforms, how can it be determined which tasks are CPU-bound and which are memory-bound?

By using hardware counters. For instance, a scheduler can look at a counter like last-level cache (LLC) misses. Using this information, a scheduler can tell whether a thread is more memory-bound (its footprint doesn't fit into the cache) or more CPU-bound.

Generally speaking, what do cycles-per-instruction (CPI) look like for CPU-bound and memory-bound threads?

Memory-bound - High CPI


CPU-bound - a CPI of 1, or otherwise low
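A rule-of-thumb classifier based on this card (the threshold is an assumed value, not from the source):

```python
def classify_by_cpi(cpi, threshold=2.0):
    """Heuristic: a high CPI suggests a memory-bound thread (stalls on memory),
    while a CPI near 1 suggests a CPU-bound thread."""
    return "memory-bound" if cpi >= threshold else "CPU-bound"
```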