Process Synchronization
* Concurrent access to shared data may result in data inconsistency.
* Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
* Suppose that we wanted to provide a solution to the “producer-consumer” problem that fills all the buffers.
* We can do so by having an integer variable “count” that keeps track of the number of full buffers.
* Initially, count is set to 0.
* It is incremented by the producer after it produces a new buffer.
* It is decremented by the consumer after it consumes a buffer.
Producer
while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer
while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Critical Section Problem:-
A critical section is a section of code in which a process reads or writes shared data; the problem is to design a protocol so that no two processes execute in their critical sections at the same time.
Race Condition
• A race condition is a situation where two or more processes access and manipulate the same data concurrently and the outcome depends on the particular order in which the accesses take place.
• count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
• count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
• Consider this execution interleaving with “count = 5” initially:
* S0: producer executes register1 = count           {register1 = 5}
* S1: producer executes register1 = register1 + 1   {register1 = 6}
* S2: consumer executes register2 = count           {register2 = 5}
* S3: consumer executes register2 = register2 - 1   {register2 = 4}
* S4: producer executes count = register1           {count = 6}
* S5: consumer executes count = register2           {count = 4}
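The effect of this interleaving can be reproduced directly. Below is a minimal sketch, assuming a POSIX-threads environment (the thread names and iteration count are only illustrative): two threads update the shared count without any synchronization, and the final value is usually not the expected 0.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

volatile int count = 0;          /* shared data, unprotected; volatile only keeps
                                    the per-iteration loads/stores, it is NOT atomic */

void *producer_like(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count++;                 /* load, add, store -- three separate steps */
    return NULL;
}

void *consumer_like(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count--;                 /* interleaves with the increments */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer_like, NULL);
    pthread_create(&t2, NULL, consumer_like, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %d (expected 0)\n", count);   /* typically nonzero */
    return 0;
}

Compiling with -lpthread and running it a few times typically prints different wrong values, which is exactly the race condition described above.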
Requirements for the Solution to Critical-Section Problem
1. Mutual Exclusion: - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2. Progress: - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting: - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
• Two general approaches are used to handle critical sections in operating systems:
(1) Preemptive Kernel
(2) Non Preemptive Kernel
1) A preemptive kernel permits a process to be preempted while it is running in kernel mode.
2) A non-preemptive kernel does not allow a process running in kernel mode to be preempted. (Non-preemptive kernels are essentially free from race conditions on kernel data structures.)
Peterson’s Solution
• It is restricted to two processes that alternate execution between their critical sections and remainder sections.
• Suppose that the LOAD and STORE instructions are atomic; that is, cannot be interrupted.
• The two processes share two variables:
* int turn;
* Boolean flag[2]
• The variable turn indicates whose turn it is to enter the critical section.
• The flag array is used to indicate whether a process is ready to enter its critical section: flag[i] = true implies that process Pi is ready.
Note:- Peterson’s Solution is a software-based solution.
Algorithm for Process Pi
while (true) {
flag[i] = TRUE;
turn = j;
while ( flag[j] && turn == j);
CRITICAL SECTION
flag[i] = FALSE;
REMAINDER SECTION
}
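For reference, the algorithm above can be written as a compilable sketch using C11 atomics with the default sequentially consistent ordering (needed on modern hardware, where plain loads and stores may be reordered). The function names peterson_lock/peterson_unlock are illustrative, not part of the original algorithm.

#include <stdatomic.h>
#include <stdbool.h>

_Atomic bool flag[2];      /* flag[i] == true: process i wants to enter */
_Atomic int  turn;         /* whose turn it is when both want to enter  */

void peterson_lock(int i) {        /* i is 0 or 1 */
    int j = 1 - i;
    atomic_store(&flag[i], true);  /* I am ready */
    atomic_store(&turn, j);        /* give priority to the other process */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                          /* busy wait */
}

void peterson_unlock(int i) {
    atomic_store(&flag[i], false); /* leave the critical section */
}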
Solution to Critical Section Problem using Locks.
do{
acquire lock
Critical Section
release lock
Remainder Section
}while (True);
Note:- Race conditions are prevented by protecting the critical region with locks.
Synchronization Hardware
• In general, we can provide a solution to the critical section problem by using a simple tool called a lock, which prevents the race condition.
• Many systems provide hardware support (hardware instructions available on several systems) for critical section code.
• In a uniprocessor hardware environment we can solve the critical section problem by disabling interrupts, so that the currently running code executes without preemption.
• Disabling interrupts on a multiprocessor system is time consuming, so this approach is inefficient compared to the uniprocessor case.
• Modern machines therefore provide special atomic hardware instructions that allow us either to test a memory word and set its value, or to swap the contents of two memory words, atomically, i.e. as one uninterruptible unit.
Special Atomic hardware Instructions
• TestAndSet()
• Swap()
• The TestAndSet() instruction is one kind of special atomic hardware instruction that allows us to test a memory word and set its value. We can provide mutual exclusion by using the TestAndSet() instruction.
• Definition:
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
• Mutual Exclusion Implementation with TestAndSet()
• To implement mutual exclusion using TestAndSet() we declare a shared boolean variable ‘lock’, initialized to FALSE.
• Solution:
while (true) {
    while (TestAndSet(&lock))
        ;   // do nothing
    // critical section
    lock = FALSE;
    // remainder section
}
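C11 exposes a real test-and-set primitive as atomic_flag_test_and_set(). A minimal spinlock built on it, mirroring the solution above (the names spin_lock/spin_unlock are illustrative):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* spin until the old value was clear */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);          /* corresponds to lock = FALSE */
}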
• The Swap() instruction is another kind of special atomic hardware instruction, which allows us to swap the contents of two memory words.
• By using Swap() Instruction we can provide Mutual Exclusion.
• Definition:-
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
• To implement mutual exclusion with Swap() we also declare a shared boolean variable ‘lock’, initialized to FALSE.
Solution:
while (true) {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );
// critical section
lock = FALSE;
// remainder section
}
Note:- Each process has a local boolean variable called ‘key’.
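The Swap()-based solution corresponds to C11’s atomic_exchange(), which atomically stores a new value and returns the old one. A sketch under that assumption (the variable and function names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

_Atomic bool lock = false;                    /* shared; false == free */

void swap_lock(void) {
    bool key = true;                          /* local to each process */
    while (key == true)
        key = atomic_exchange(&lock, true);   /* atomically swap lock and key */
}

void swap_unlock(void) {
    atomic_store(&lock, false);               /* lock = FALSE */
}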
Semaphores
• Because it is difficult for application programmers to use these hardware instructions directly, we use a synchronization tool called a semaphore (which does not require busy waiting).
• A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
• Originally the wait() and signal() operations were termed P() and V(), from the Dutch words “proberen” (to test) and “verhogen” (to increment).
• The definition for wait() is as follows:
wait (S) {
while S <= 0
; // no-op
S--;
}
• The definition for signal() is as follows:
signal (S) {
S++;
}
• All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly, i.e. when one process changes the semaphore value, no other process may change the same semaphore value simultaneously.
• Usage of semaphore:- we have two types of semaphores
* Counting semaphore
* Binary Semaphore.
• The value of a counting semaphore can range over an unrestricted domain.
• The value of a binary semaphore can range only between 0 and 1.
• On some systems binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
• We can use binary semaphores to deal with the critical-section problem for multiple processes.
• Counting semaphores are used to control access to a given resource consisting of a finite number of instances. The counting semaphore is initialized to the number of resources available.
• A process that wishes to use a resource performs a wait() operation (the count is decremented).
• A process that releases a resource performs a signal() operation (the count is incremented).
• When the count for the semaphore is 0, all the resources are being used; otherwise resources are available for processes to use.
• A process that wishes to use a resource when the count is 0 blocks until the count becomes greater than 0.
• For example:
* Let us say that there are two processes P0 and P1 that consist of two statements S0 and S1 respectively.
* Also assume that these two processes run concurrently and that we require process P1 to execute statement S1 only after process P0 has executed statement S0.
* Let processes P0 and P1 share a common semaphore “synch”, initialized to 0, and insert the statements
S0;
signal (synch);
in process P0, and the statements
wait (synch);
S1;
in process P1.
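The same ordering constraint can be expressed with POSIX semaphores (sem_init/sem_wait/sem_post). A minimal sketch, assuming a POSIX-threads environment; the printf calls stand in for the statements S0 and S1:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t synch;                    /* initialized to 0 */

void *p0(void *arg) {
    printf("S0\n");             /* statement S0 */
    sem_post(&synch);           /* signal(synch) */
    return NULL;
}

void *p1(void *arg) {
    sem_wait(&synch);           /* wait(synch): blocks until P0 signals */
    printf("S1\n");             /* statement S1 runs only after S0 */
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&synch, 0, 0);     /* shared between threads, initial value 0 */
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    sem_destroy(&synch);
    return 0;
}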
Implementation:
• The main disadvantage of the semaphore definition given above is that it requires busy waiting.
• While one process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.
Implementation of semaphore with no busy waiting:
• To overcome the need for busy waiting we modify the definition of the wait() and signal() operations: when a process executes wait() and finds that the semaphore value is not positive, it must wait.
• Instead of busy waiting, the process blocks itself, so that the CPU can select another process for execution. This is done by the block() operation.
* Blocked processes are placed in a waiting queue.
• Later, a process that has blocked itself is restarted by a wakeup() operation, which moves the process from the waiting state to the ready state.
* The blocked process is moved from the waiting queue to the ready queue.
• To implement semaphores with no busy waiting, we define a semaphore as a C struct, as follows:
typedef struct {
int value;
struct process *list;
}semaphore;
* i.e. each semaphore has an integer value stored in the variable “value” and a list of processes.
* When a process performs a wait() operation on the semaphore and must wait, it adds itself to the list.
* When a process performs a signal() operation on the semaphore, one process is removed from the list.
Semaphore Implementation with no Busy waiting
Implementation of wait: (definition of wait with no busy waiting)
wait (S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
Implementation of signal: (definition of signal with no busy waiting)
signal (S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlock and Starvation
• The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes; this situation is called deadlock.
• To illustrate this, let us assume two processes P0 and P1 each accessing two semaphores S and Q
which are initialized to 1 :-
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);
• Suppose P0 executes wait(S) and then P1 executes wait(Q). When P0 next executes wait(Q), it must wait until P1 executes signal(Q); similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since neither signal() operation can be executed, P0 and P1 are deadlocked.
• Starvation or indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
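The P0/P1 deadlock above can be reproduced with POSIX semaphores. In this sketch the sleep() calls only make the unlucky interleaving likely; all names are illustrative, and the program intentionally never terminates:

#include <semaphore.h>
#include <pthread.h>
#include <unistd.h>

sem_t S, Q;                     /* both initialized to 1 */

void *P0(void *arg) {
    sem_wait(&S);               /* wait(S) */
    sleep(1);                   /* let P1 grab Q in the meantime */
    sem_wait(&Q);               /* wait(Q): blocks forever, P1 holds Q */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

void *P1(void *arg) {
    sem_wait(&Q);               /* wait(Q) */
    sleep(1);
    sem_wait(&S);               /* wait(S): blocks forever, P0 holds S */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_create(&t0, NULL, P0, NULL);
    pthread_create(&t1, NULL, P1, NULL);
    pthread_join(t0, NULL);     /* never returns: the two threads are deadlocked */
    pthread_join(t1, NULL);
    return 0;
}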
Classical Problems of Synchronization
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• Assume there are N buffers, each able to hold one item.
• Semaphore mutex, initialized to the value 1, is used to provide mutual exclusion.
• Semaphore full is initialized to the value 0.
• Semaphore empty is initialized to the value N.
• The semaphores full and empty are used to count the number of full and empty buffers.
• The structure of the producer process
while (true) {
// produce an item
wait (empty);
wait (mutex);
// add the item to the buffer
signal (mutex);
signal (full);
}
The structure of the consumer process
while (true) {
wait (full);
wait (mutex);
// remove an item from buffer
signal (mutex);
signal (empty);
// consume the removed item
}
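Putting the two structures together, here is a compilable sketch of the bounded-buffer solution using POSIX semaphores; BUFFER_SIZE, the int item type, and the helper names produce/consume/init are illustrative choices:

#include <semaphore.h>
#include <pthread.h>

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty;                    /* counts empty slots, initialized to BUFFER_SIZE    */
sem_t full;                     /* counts full slots,  initialized to 0              */
sem_t mutex;                    /* binary semaphore for the buffer, initialized to 1 */

void produce(int item) {
    sem_wait(&empty);           /* wait(empty) */
    sem_wait(&mutex);           /* wait(mutex) */
    buffer[in] = item;          /* add the item to the buffer */
    in = (in + 1) % BUFFER_SIZE;
    sem_post(&mutex);           /* signal(mutex) */
    sem_post(&full);            /* signal(full) */
}

int consume(void) {
    sem_wait(&full);            /* wait(full) */
    sem_wait(&mutex);           /* wait(mutex) */
    int item = buffer[out];     /* remove an item from the buffer */
    out = (out + 1) % BUFFER_SIZE;
    sem_post(&mutex);           /* signal(mutex) */
    sem_post(&empty);           /* signal(empty) */
    return item;
}

void init(void) {
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
}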
Readers-Writers Problem
• A data set is shared among a number of simultaneous processes
* Readers – only read the data set, do not perform any updates
* Writers – can both read and write the data set (perform the updates).
• If two readers access the shared data simultaneously, there is no problem. If a writer and some other process (reader or writer) access the shared data simultaneously, there will be a problem.
• In the solution of the readers-writers problem, the processes share the following data structures:
semaphore mutex, wrt;
int readcount;
• Where
* Semaphore mutex is initialized to 1.
* Semaphore wrt is initialized to 1.
* Integer readcount is initialized to 0.
The structure of a writer process
while (true) {
wait (wrt) ;
// writing is performed
signal (wrt) ;
}
The structure of a reader process
while (true) {
    wait (mutex);
    readcount++;
    if (readcount == 1)
        wait (wrt);
    signal (mutex);
    // reading is performed
    wait (mutex);
    readcount--;
    if (readcount == 0)
        signal (wrt);
    signal (mutex);
}
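POSIX also provides reader-writer locks that implement this policy directly. A sketch, assuming a system with pthread_rwlock_t (the reader/writer functions are illustrative):

#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

void reader(void) {
    pthread_rwlock_rdlock(&rw);   /* many readers may hold this at once */
    /* reading is performed */
    pthread_rwlock_unlock(&rw);
}

void writer(void) {
    pthread_rwlock_wrlock(&rw);   /* exclusive: no readers, no other writers */
    /* writing is performed */
    pthread_rwlock_unlock(&rw);
}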
Dining-Philosophers Problem
• Shared data
* Bowl of rice (data set)
* Semaphore chopstick [5] initialized to 1
• The structure of Philosopher i:
while (true) {
    wait (chopstick[i]);
    wait (chopstick[(i + 1) % 5]);
    // eat
    signal (chopstick[i]);
    signal (chopstick[(i + 1) % 5]);
    // think
}
Problems with Semaphores
• Incorrect use of semaphore operations:
* signal (mutex) .... wait (mutex)                      - Case 1
* wait (mutex) .... wait (mutex)                        - Case 2
* Omitting wait (mutex) or signal (mutex) (or both)     - Case 3
• Semaphores used incorrectly in these ways result in timing errors.
• Case 1 - Several processes may execute in their critical sections simultaneously, violating the mutual-exclusion requirement.
• Case 2 - A deadlock will occur.
• Case 3 - Either mutual exclusion is violated or a deadlock will occur.
• To deal with such errors, researchers have developed high-level language constructs.
• One fundamental high-level language construct used to deal with the above types of errors is the monitor type.
Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
• A procedure defined within a monitor can access only those variables declared locally within the monitor and its formal parameters.
• Only one process may be active within the monitor at a time
Syntax of the monitor :
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (...) { .... }
    ...
    procedure Pn (...) { .... }
    initialization code (...) { ... }
}
Schematic view of a Monitor
Condition Variables
• The basic monitor construct is not powerful enough for some synchronization schemes.
• A programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition:
• condition x, y;
• The only operations that can be invoked on a condition variable are wait() and signal().
• The operations are:
– x.wait() – the process invoking this operation is suspended until another process invokes x.signal().
– x.signal() – resumes exactly one suspended process (if any) that invoked x.wait().
• Now suppose that when the x.signal() operation is invoked by a process P, there is a suspended process Q associated with condition x. If Q is allowed to resume its execution, the signaling process P must wait; otherwise both P and Q would be active simultaneously within the monitor.
• There are two possibilities
1) Signal and wait:- P either waits until Q leaves the monitor or waits for another condition.
2) Signal and continue:- Q either waits until P leaves the monitor or waits for another condition.
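Pthreads condition variables follow the signal-and-continue discipline, with a mutex playing the role of the monitor lock. A sketch of the usual pattern (the ready flag and function names are illustrative):

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* plays the monitor lock */
pthread_cond_t  x = PTHREAD_COND_INITIALIZER;   /* condition variable x   */
bool ready = false;

void waiter(void) {
    pthread_mutex_lock(&m);
    while (!ready)                    /* re-check: signal-and-continue semantics */
        pthread_cond_wait(&x, &m);    /* x.wait(): releases m, suspends, reacquires m */
    /* proceed inside the "monitor" */
    pthread_mutex_unlock(&m);
}

void signaler(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&x);          /* x.signal(): resumes one suspended waiter */
    pthread_mutex_unlock(&m);
}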
Monitor with Condition Variables
Solution to Dining Philosophers using Monitors
monitor DP
{
enum { THINKING, HUNGRY, EATING} state [5] ;
condition self [5];
void pickup (int i) {
state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}
void putdown (int i) {
state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}
void test (int i) {
if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}
initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}
Each philosopher i calls the operations pickup() and putdown() in the following sequence:
dp.pickup (i)
EAT
dp.putdown (i)
Monitor Implementation Using Semaphores
Variables
semaphore mutex;    // (initially = 1)
semaphore next;     // (initially = 0)
int next_count = 0;
Each procedure F will be replaced by
    wait(mutex);
    ...
    body of F;
    ...
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
Mutual exclusion within a monitor is ensured.
Monitor Implementation
For each condition variable x, we have:
semaphore x_sem;    // (initially = 0)
int x_count = 0;
The operation x.wait can be implemented as:
    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;
The operation x.signal can be implemented as:
    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }
Synchronization Examples
• Windows XP
• Linux
Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on uniprocessor systems
• Uses spinlocks on multiprocessor systems
• Also provides dispatcher objects, which may act as either mutexes or semaphores
• Dispatcher objects may also provide events
– An event acts much like a condition variable
Linux Synchronization
• Linux:
– disables interrupts to implement short critical sections
• Linux provides:
– semaphores
– spin locks