107 Cards in this Set
- Front
- Back
atomicity/atomic
|
an operation is atomic if it cannot be interleaved with other operations; it executes as one indivisible step
|
|
concurrency
|
simultaneous execution of multiple threads of control
|
|
multiprogramming
|
management of multiple processes on a uniprocessor architecture
|
|
multiprocessing
|
management of multiple processes on a multiprocessor architecture
|
|
distributed computing
|
management of multiple processes on multiple independent computers
|
|
race condition
|
when the outcome of executing multiple processes depends on the relative order in which the processes' instructions execute
|
|
synchronization
|
restriction of possible concurrent executions to a subset of desirable ones
|
|
mutual exclusion
|
elementary synchronization problem in which a set of threads indefinitely alternates between executing the critical section and the non-critical section
|
|
safety
|
at most one thread executes the critical section
|
|
liveness
|
when a thread requesting the critical section is eventually given the chance to enter it
|
|
liveness violations
|
starvation
deadlock
livelock |
|
starvation
|
a requesting thread is never allowed to enter the critical section while other threads keep making progress
|
|
deadlock
|
all threads are blocked and cannot proceed
|
|
livelock
|
threads continue to operate but none can enter the critical section
|
|
peterson's algorithm
|
enforces both safety and liveness with no deadlock. implements MX for two threads
|
|
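The two cards above can be made concrete with a minimal sketch of Peterson's algorithm in Python (the names `worker` and `counter`, and the iteration counts, are my own; on real hardware the algorithm also needs memory barriers, which CPython's GIL happens to provide):

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so the spinning demo finishes quickly

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # which thread must yield when both want in
counter = 0             # shared data protected by the algorithm

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                        # entry protocol: announce intent
        turn = other                          # politely let the other thread go first
        while flag[other] and turn == other:
            pass                              # spin until it is safe to enter
        counter += 1                          # critical section
        flag[i] = False                       # exit protocol

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 2000: no increment is lost
```

Because mutual exclusion holds, the two threads' 1000 increments each never overlap, so the final count is exactly 2000.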
bakery algorithm
|
generalization of peterson's algorithm for any given number of processes
|
|
semaphore
|
general locking mechanism; generally initialized to 1
|
|
two atomic operations of semaphores
|
P/wait
V/signal |
|
when is P(semaphore) (wait(semaphore)) called?
|
before entering critical section
|
|
when is V(semaphore) (signal(semaphore)) called?
|
after leaving critical section
|
|
what is the code for the wait semaphore?
|
s = s - 1
if (s < 0)
    block the thread that called wait
else
    let the thread that called wait continue into the critical section |
|
what is the code for the signal semaphore?
|
s = s + 1
if (s <= 0)
    wake one thread that called wait, and run it so it can continue into the critical section |
|
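The wait/signal pseudocode above can be sketched as a small Python class (the `wakeups` bookkeeping and the class layout are my own; a Condition stands in for the atomicity the real kernel would guarantee):

```python
import threading

class Semaphore:
    def __init__(self, value=1):
        self.s = value          # when negative, |s| = number of blocked threads
        self.wakeups = 0        # pending signals for blocked threads
        self.cond = threading.Condition()

    def wait(self):             # P operation
        with self.cond:
            self.s -= 1
            if self.s < 0:      # no free slot: block until signaled
                while self.wakeups == 0:
                    self.cond.wait()
                self.wakeups -= 1

    def signal(self):           # V operation
        with self.cond:
            self.s += 1
            if self.s <= 0:     # someone is blocked: wake exactly one thread
                self.wakeups += 1
                self.cond.notify()

# used as a mutex (initial value 1) around a shared counter
sem = Semaphore(1)
count = 0

def bump(n):
    global count
    for _ in range(n):
        sem.wait()
        count += 1              # critical section
        sem.signal()

ts = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(count)   # 40000
```

The separate `wakeups` count ensures each signal wakes exactly one blocked waiter, matching the pseudocode's "wake one thread" step.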
bounded wait condition
|
if signal is continuously executed, each individual blocked process is eventually woken up
|
|
locality of reference principle
|
if data is accessed once, it will likely be accessed again in the future
|
|
what does a positive semaphore represent?
|
the number of additional threads that can be allowed into the critical section
|
|
what does a negative semaphore represent?
|
the absolute value is the number of threads that are blocked waiting (note that one more thread is inside the critical section)
|
|
binary semaphore has what initial value?
|
one
|
|
counting semaphore has an initial value greater than...?
|
one
|
|
describe the producer/consumer problem
|
bounded buffer holds items added by producer and removed by consumer. producer blocks when buffer is full. consumer blocks when buffer is empty.
|
|
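The card above can be sketched with the standard three-semaphore solution (buffer size and item values are made up for illustration): `empty` counts free slots, `full` counts filled slots, and a mutex guards the buffer itself.

```python
import threading
from collections import deque

N = 4                            # bounded buffer capacity
buffer = deque()
empty = threading.Semaphore(N)   # free slots: producer blocks when 0
full = threading.Semaphore(0)    # filled slots: consumer blocks when 0
mutex = threading.Lock()         # guards the buffer data structure
consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot (blocks if buffer full)
        with mutex:
            buffer.append(item)
        full.release()           # announce a filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait for an item (blocks if buffer empty)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # announce a freed slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # items arrive in production order: [0, 1, ..., 19]
```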
describe the readers/writers problem
|
readers and writers perform operations concurrently on a certain item. writers cannot concurrently access items, but readers can.
|
|
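A minimal sketch of the readers-preference solution described on the next cards (names `wrt`, `readcount` follow the classic textbook presentation; the helper functions are my own framing): the first reader locks writers out, the last reader lets them back in, and readers only contend on the `readcount` mutex.

```python
import threading

wrt = threading.Semaphore(1)   # held by a writer, or by the reader group as a whole
mutex = threading.Lock()       # protects readcount
readcount = 0

def start_read():
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()      # first reader blocks writers

def end_read():
    global readcount
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()      # last reader lets writers in

def start_write():
    wrt.acquire()              # exclusive access

def end_write():
    wrt.release()

# two readers share access; a writer then gets exclusive access
start_read(); start_read()
end_read(); end_read()
start_write(); end_write()
```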
describe the reader's preference of the readers/writers problem
|
if reader wants to get to a certain item, the writers wait
|
|
describe the writer's preference of the readers/writers problem
|
if writer wants to get to a certain item, the readers wait
|
|
describe the dining philosophers problem
|
5 philosophers sit at a table and alternate between thinking and eating. they have 5 forks and a philosopher needs 2 forks to eat. picking up and setting down a fork is an atomic operation. philosophers can only talk (share variables) with neighbors
|
|
what's the objective of the dining philosophers problem?
|
to ensure that any "hungry" philosopher eventually eats
|
|
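One standard deadlock-free solution (not the only one) can be sketched by imposing a global order on the forks: each philosopher picks up the lower-numbered fork first, which breaks the circular wait. The meal counts and names below are made up for the demo.

```python
import threading

N, MEALS = 5, 50
forks = [threading.Lock() for _ in range(N)]   # fork i sits between philosophers i-1 and i
eaten = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # resource ordering: lower fork first
    for _ in range(MEALS):
        with forks[first]:
            with forks[second]:
                eaten[i] += 1      # eat (holding both forks)
        # think (holding no forks)

ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(eaten)   # [50, 50, 50, 50, 50]: every hungry philosopher eventually eats
```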
barrier
|
a designated point in a concurrent program that a thread may pass only after the required other threads have also reached it
|
|
two-thread barrier synchronization
|
a thread can leave the barrier only when the other thread arrives at it
|
|
what's the code for a two process barrier synchronization problem?
|
semaphore b1(0), b2(0)

process1() {
    // code
    signal(b1)
    wait(b2)
    // code
}

process2() {
    // code
    signal(b2)
    wait(b1)
    // code
} |
|
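The two-semaphore barrier above can be run directly with Python threads (the `log` list is my own instrumentation to show the ordering guarantee):

```python
import threading

b1, b2 = threading.Semaphore(0), threading.Semaphore(0)
log = []

def process1():
    log.append("p1 before")
    b1.release()    # signal(b1): tell the other thread we arrived
    b2.acquire()    # wait(b2): block until the other thread arrives
    log.append("p1 after")

def process2():
    log.append("p2 before")
    b2.release()    # signal(b2)
    b1.acquire()    # wait(b1)
    log.append("p2 after")

t1 = threading.Thread(target=process1)
t2 = threading.Thread(target=process2)
t1.start(); t2.start(); t1.join(); t2.join()
print(log)   # both "before" entries always precede both "after" entries
```

Whichever thread reaches the barrier first blocks on its `acquire` until the other has signaled, so no thread leaves the barrier alone.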
what is the idea behind spinlocks for implementing semaphores?
|
inside wait, continuously check the semaphore value (spins) until unblocked
|
|
what's wrong with implementing semaphores via spinlocks?
|
1.) wait and signal operations are not atomic
2.) wasteful resource-wise
3.) does not support the bounded wait condition |
|
what are read-modify-write instructions?
|
RMW instructions atomically read a value from memory, modify it, and write the new value to memory
|
|
exchange RMW instruction
|
swaps values between register and memory
intel x86 |
|
compare and swap RMW instruction
|
read value, if value matches value in register r1, exchange register r1 and value
motorola 68xxx |
|
compare, compare and swap RMW instruction
|
SPARC
|
|
RMW is not provided by what processors?
|
RISC
|
|
test and set RMW instruction
|
if lock is free: TS reads false, sets the value to true, returns false
    the loop test fails, so the caller proceeds (the lock is now busy)
if lock is busy: TS reads true, returns true
    the loop test succeeds, so the loop continues until someone releases the lock |
|
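A sketch of a test-and-set spinlock: real TAS is a single atomic CPU instruction, so here an internal lock stands in for the hardware atomicity of the read-modify-write (the class and names are my own illustration, not a real API).

```python
import threading

class SpinLock:
    def __init__(self):
        self.held = False
        self._tas_guard = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        with self._tas_guard:                # atomic read-modify-write
            old = self.held
            self.held = True
            return old                       # False means we just grabbed the lock

    def acquire(self):
        while self.test_and_set():           # spin while TAS keeps returning True (busy)
            pass

    def release(self):
        self.held = False

lock = SpinLock()
count = 0

def bump(n):
    global count
    for _ in range(n):
        lock.acquire()
        count += 1                           # critical section
        lock.release()

ts = [threading.Thread(target=bump, args=(5000,)) for _ in range(2)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(count)   # 10000
```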
lock
|
provides mutually exclusive access to shared data
can be locked or unlocked (busy or free) |
|
condition variables
|
provide conditional synchronization, coordinates events
|
|
what's the problem with programming using semaphores?
|
it's deadlock-prone
|
|
what are the three basic operations on condition variables?
|
wait: blocks the thread and releases the associated lock
signal: if threads are waiting on the lock, wake up one of those threads and put it on the ready list; otherwise do nothing
broadcast: if threads are waiting on the lock, wake up all of those threads and put them on the ready list; otherwise do nothing |
|
Hoare style
|
if a thread P wakes up another thread Q, they are then both in the protected area; in this style, the awakened thread Q is allowed to proceed. the problem: what to do with the signaling thread?
|
|
Hansen/Mesa Style
|
if a thread P wakes up another thread Q, they are then both in the protected area; in this style, the signaling thread P is allowed to proceed. seems logical, but the awakened thread may find the condition no longer holds and must re-check it.
|
|
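Mesa semantics in practice: the woken thread re-checks its condition in a `while` loop, because the signaler (or another thread) runs first and may invalidate it. Python's `threading.Condition` is Mesa-style, so it makes a faithful sketch (the `items`/`got` names are my own):

```python
import threading

cond = threading.Condition()
items = []
got = []

def consumer():
    with cond:
        while not items:        # while, not if: re-check the condition after waking
            cond.wait()         # releases the lock while blocked
        got.append(items.pop())

def producer():
    with cond:
        items.append(42)
        cond.notify()           # wake one waiter; the signaler keeps running (Mesa)

c = threading.Thread(target=consumer)
c.start()
producer()
c.join()
print(got)   # [42]
```

Under Hoare semantics an `if` would suffice, because the awakened thread runs immediately with the condition guaranteed true; under Mesa the `while` is mandatory.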
spinlock
|
a locked process does not release the CPU but rather "spins" constantly checking the lock until it opens
|
|
sleeplock
|
a locked process blocks and is put back on the ready queue only when the lock opens
|
|
advantages/disadvantages to blocking and spinning with locks and CVs
|
spinning: ties up the processor but is efficient for short waits
blocking: necessary for long waits but inefficient (due to context switching overhead) for short ones |
|
binding
|
assigning memory addresses to program objects (variables, pointers, etc)
|
|
physical address
|
the "hardware" address of a memory word (0xb0000)
|
|
logical address (virtual address, relative address)
|
used by the program
|
|
relocatable address
|
relative address (14 bytes from the beginning of this module)
|
|
absolute code
|
all addresses are physical
|
|
relocatable code
|
all addresses are relative
|
|
linking
|
preparing a compiled program to run
|
|
static linking
|
all library functions included in the code
|
|
dynamic linking
|
library functions can be connected at load-time
|
|
static loading
|
loading a program all at once in memory
|
|
dynamic loading
|
loading a program into memory on demand
|
|
memory-management unit
|
hardware device that translates logical memory addresses generated by programs running on the CPU into physical addresses
|
|
relocation register MMU
|
loads beginning (physical address) of the module to the register.
logical address is added to the register to produce physical address |
|
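The relocation-register translation on this card is simple arithmetic: physical = relocation register + logical address, after a limit check. A sketch with made-up register values:

```python
RELOCATION = 0x40000   # physical address where the module begins (made-up value)
LIMIT = 0x1000         # module size in bytes (made-up value)

def translate(logical):
    if not (0 <= logical < LIMIT):
        raise MemoryError("trap: address outside the module")
    return RELOCATION + logical   # add the relocation register to the logical address

print(hex(translate(0x2A)))   # 0x4002a
```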
what is a process's memory divided into?
|
segments
|
|
the compiler and assembler generate what from each source file?
|
an object file (containing code and data segments)
|
|
what does the linker do?
|
combines all the object files for a program into a single executable object file, which is complete and self-sufficient
|
|
what does the loader do?
|
the loader (part of the OS) loads an executable object file into memory at locations determined by the OS
|
|
what happens as the program runs?
|
the program uses new/delete to dynamically allocate memory on the heap, and gets/releases space on the stack during function calls
|
|
stack
|
good when allocation and freeing are somewhat predictable
keeps free space together |
|
heap
|
used when allocation and freeing are not predictable.
|
|
fragmentation
|
free memory is spread in multiple places of various sizes (fragments) which makes further allocation difficult/impossible and thus wastes memory
|
|
internal fragmentation
|
unused memory fragments are inside the allocation units
|
|
external fragmentation
|
unused memory fragments are outside the allocation units
|
|
static segment allocation
|
physical memory addresses are encoded in program
|
|
dynamic segment allocation
|
uses logical addresses
|
|
segment-table base register
|
points to the segment table's location in memory
|
|
segment-table length register
|
indicates number of segments used by a program
|
|
what does each segment table entry have?
|
base: contains the starting physical memory address of the segment
limit: specifies the length of the segment
protection bits: read/write/execute privileges
validation bit = 0 -> illegal segment |
|
what is the validity check?
|
if segment number < STLR (segment-table length register)
if offset < segment limit
if access type is compatible with the protection bits (no writing to a read-only segment) |
|
what happens if validity check fails?
|
memory access violation trap to kernel
|
|
compaction
|
relocating segments to free unused space
|
|
dynamic allocation of segments may lead to...?
|
fragmentation
|
|
segmentation
|
the process address space is divided into logical pieces called segments
|
|
what are some examples of types of segments?
|
code, BSS (statically allocated data), heap, stack
|
|
pages
|
each process is divided up into a number of small, fixed-size partitions called pages
|
|
frames
|
physical memory is divided into a large number of small, fixed-size partitions called frames
|
|
correlation between page size and frame size?
|
page size = frame size
usually 512 bytes to 16k bytes (typically 4k) |
|
what does a virtual (logical) address consist of?
|
a page number and offset from the beginning of the page
|
|
the pages of a process do not have to be...?
|
loaded into a contiguous set of frames
|
|
page table
|
per process data structure that provides mapping from page to frame
|
|
address space
|
the set of all possible addresses
|
|
logical address space
|
contiguous
|
|
physical address space
|
fragmented
|
|
in order to obtain a physical memory address, the page number part is translated into a...?
|
frame number
|
|
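The page-to-frame translation on this card can be sketched in a few lines (the page table contents here are made up for illustration): split the logical address into page number and offset, swap the page number for a frame number, and reattach the offset.

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 12}   # page number -> frame number (made-up mapping)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split into page number and offset
    if page not in page_table:
        raise MemoryError("trap: invalid page")
    return page_table[page] * PAGE_SIZE + offset

print(translate(PAGE_SIZE + 100))   # page 1 -> frame 3: 3*4096 + 100 = 12388
```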
what does the valid/invalid bit protect?
|
unused portions of the page table
|
|
true/false: a process can use the whole page table
|
false
|
|
what happens if you attempt to address pages marked as invalid?
|
trap sent to OS, "memory protection violation"
|
|
true/false: the page-table base register points to the page table, and the page-table length register indicates the size of the page table
|
true
|
|
in order to share the code within a page with multiple different processes, the code has to be...?
|
reentrant (not self-modifying) and write-protected - usually every page has a read-only bit
|
|
TLB
|
an associative cache that holds page and frame numbers. resolves the problem of page tables residing in memory, which would otherwise require more than one memory lookup for each data/code access
|
|
what happens during a page access?
|
the page number is first looked up in the TLB; if found (cache hit), the page can be accessed immediately
if not found (cache miss), the frame has to be looked up in the page table |
|
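The hit/miss flow on this card can be sketched as a small cache in front of the page table (all mappings and counters here are made up; a real TLB is associative hardware, not a dict):

```python
PAGE_SIZE = 4096
page_table = {0: 9, 1: 4, 2: 11}   # page number -> frame number (made-up mapping)
tlb = {}                           # the cache: starts empty
stats = {"hit": 0, "miss": 0}

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    if page in tlb:                # cache hit: frame available immediately
        stats["hit"] += 1
        frame = tlb[page]
    else:                          # cache miss: extra lookup in the page table
        stats["miss"] += 1
        frame = page_table[page]
        tlb[page] = frame          # refill the TLB for next time
    return frame * PAGE_SIZE + offset

translate(100)   # page 0: miss (TLB was empty)
translate(200)   # page 0 again: hit
```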
what happens during context switch?
|
old way: flush the TLB (destroying all the info it held)
new way: each TLB entry stores an address-space identifier that identifies the process; if the entry belongs to the wrong process, it's a cache miss |
|
multilevel page table
|
deals with issue of searching a single, very large page table. may have up to 3 levels
|