Process System

Process Concept

 Process – a program, or a unit of job/task, at the time of its execution
is known as a process. It includes the current supporting activities such as
memory management, I/O management and CPU management.

 A fundamental function of an operating system is the execution of programs,
and an executing program is known as a process.

 Example: executing a C program.

A program is a set of instructions that the programmer has written and stored.
To make a program ready for execution, a sequence of jobs is required: compile,
link and load. In other words, a program in execution is called a process.
Process Concept
Ex. 1 Assume the arrival time of all processes is 0 ms.

Process    Burst Time / Execution Time
P1         24
P2         3
P3         3

Sol. All the processes arrive at 0 ms, so they are served in the order P1, P2, P3.

Gantt chart:
|     P1     | P2 | P3 |
0            24   27   30

Waiting Time (Wt.T):
Wt.T P1 = 0
Wt.T P2 = 24
Wt.T P3 = 27
Average Wt.T = (0 + 24 + 27)/3 = 17 ms

Turnaround Time (TAT):
TAT P1 = 24
TAT P2 = 27
TAT P3 = 30
Average TAT = (24 + 27 + 30)/3 = 27 ms
Process Concept
Ex. 2 The order of arrival of the processes is given below.

Process    Burst Time
P1         24
P2         3
P3         3

Note: The processes arrive in the order P2, P3, P1, all at 0 ms.

Sol. Gantt chart:
| P2 | P3 |     P1     |
0    3    6            30

Waiting Time (Wt.T):
Wt.T P1 = 6
Wt.T P2 = 0
Wt.T P3 = 3
Average Wt.T = (6 + 0 + 3)/3 = 3 ms

Turnaround Time (TAT):
TAT P1 = 30
TAT P2 = 3
TAT P3 = 6
Average TAT = (30 + 3 + 6)/3 = 13 ms
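
The FCFS arithmetic above can be checked with a short C program. This is a small sketch, not part of the original slides; the process names and burst times follow Ex. 2, and arrival time is assumed to be 0 ms for every process.

/* FCFS: waiting time of a process = completion time of the previous one,
 * turnaround time = waiting time + burst time (arrival time assumed 0). */
#include <stdio.h>

int main(void)
{
    const char *name[] = { "P2", "P3", "P1" };   /* order of arrival */
    int burst[]        = { 3, 3, 24 };
    int n = 3, wait = 0, total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];               /* turnaround time */
        printf("%s: waiting = %2d, turnaround = %2d\n", name[i], wait, tat);
        total_wt  += wait;
        total_tat += tat;
        wait      += burst[i];                   /* next process starts here */
    }
    printf("Average waiting time    = %.2f ms\n", (double)total_wt / n);
    printf("Average turnaround time = %.2f ms\n", (double)total_tat / n);
    return 0;
}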
Process State
 As a process executes, it changes its state. A
process can be in one of the following states:
 New: The process is being created.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to
occur, such as I/O completion or the receipt of a signal.
 Ready: The process is waiting to be assigned to a
processor.
 Terminated: The process has finished execution.
Process Transition Diagram
 A transition means a change of a process from one state to another.
A process state transition is a dynamic change in the state of a process.
The process transition diagram also consists of five states, as shown in
the figure below.
Process Transition Diagram
Characteristics of the Suspend State:

 A suspended process is not immediately available for execution.

 A process may not be removed from the suspended state until the agent that
suspended it orders the removal.

Reasons for Process Suspension:

 Swapping: The OS needs to release main memory in order to bring in a
process that is ready to execute.
 Timing: A process may be suspended while waiting for the next time
interval.
 Interactive user request: A process may be suspended by the user for
debugging purposes.
Process Control Block (PCB)
 Process control includes scheduling, maintaining data structures,
resource information, etc.
 If the OS is to manage processes and resources, it must have
information about the current status of each process and
resource.
 Each process is represented in OS by a process control block
as shown in Fig.

Fig. PCB
Process Control Block (PCB)
The process control block (PCB), a data structure maintained by the OS for
each process, contains the following Information associated with each
process:

 Process State : The process may be new, ready, running, waiting, exit and so on.
 Process Number : Each process is represented by the process ID.
 Program Counter, which indicates the next instruction to be executed
 CPU Registers information saved when an interrupt occurs
 CPU Scheduling information such as priority, pointers to scheduling queues, etc
 Memory Management information includes base and limit registers, page tables, etc.
 Accounting information: amount of CPU time used
 I/O status information: list of I/O devices allocated, list of open files, etc.
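
As an illustration of the fields listed above, a PCB might be declared in C roughly as follows. This is a hypothetical sketch, not from the slides; real kernels (for example Linux's task_struct) keep far more information, and all field names and sizes here are illustrative.

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;            /* process state                        */
    int             pid;              /* process number / ID                  */
    uintptr_t       program_counter;  /* next instruction to execute          */
    uintptr_t       registers[16];    /* CPU registers saved on an interrupt  */
    int             priority;         /* CPU-scheduling information           */
    uintptr_t       base, limit;      /* memory-management information        */
    void           *page_table;
    unsigned long   cpu_time_used;    /* accounting information               */
    int             open_files[32];   /* I/O status: open file descriptors    */
    struct pcb     *next;             /* link for scheduling queues           */
};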
Concurrent Process
Concurrent processing is a computing model in which multiple processors
execute instructions simultaneously for better performance.

One of the first uses of concurrent processing was in operating systems.

If the computer is to support a multiuser environment, the operating system
must use concurrent programming techniques to allow several users to access
the computer simultaneously.

The operating system should also permit several input and output devices to
be used simultaneously, again using concurrent processing.
Concurrent Process
Concurrency is the interleaving of processes in time to give the appearance of
simultaneous execution.
Concurrent Process
Example:
chin = getchar();    /* read a character into the shared variable chin */
chout = chin;        /* copy it into chout                             */
putchar(chout);      /* write the character to the output              */

If processes P1 and P2 both execute this code at the same time, then:

• P1 enters this code, but is interrupted after reading the character x into chin.
• P2 enters this code and runs it to completion, reading and displaying the character y.
• P1 is resumed, but chin now contains the character y, so P1 displays the wrong
character.

The general solution is to allow only one process at a time to enter the code that
accesses chin; such code is often called a Critical Section.

When one process is inside a critical section of code, other processes must be
prevented from entering this section. This requirement is known as Mutual Exclusion.
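
A minimal sketch of how mutual exclusion could be enforced on the code above, assuming POSIX threads stand in for P1 and P2 and a pthread mutex guards the shared variable chin (the mutex and the echo() helper are illustrative, not from the slides):

#include <pthread.h>
#include <stdio.h>

static char chin, chout;                        /* shared variables */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *echo(void *arg)
{
    pthread_mutex_lock(&lock);      /* enter critical section */
    chin = getchar();
    chout = chin;
    putchar(chout);
    pthread_mutex_unlock(&lock);    /* leave critical section */
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, echo, NULL);
    pthread_create(&p2, NULL, echo, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}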
Critical Section Problem
A critical section is a sequence or block of code run by a process that
references one or more variables in a read/update/write manner while any of
those variables may be changed by another process.

Criteria for valid solutions to the Critical Section Problem:

 Mutual exclusion: If one process is executing in its critical section, all
other processes must be excluded from theirs.

 Progress: Only processes that are in their entry section can be selected to
enter their critical section, and this selection cannot be postponed
indefinitely.

 Bounded wait: There is a bound on the number of times that other processes
are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.

 The Producer–Consumer Problem is a classical critical-section problem.
Critical Section Problem
Producer – Consumer Problem
A producer produces items and places them in a buffer, and a consumer removes
and consumes items from the buffer. Because the buffer has a fixed size, it is
also known as the Bounded-Buffer Problem, as shown in the figure.

For this type of synchronization the following two conditions must be satisfied:

1. The consumer must wait if the buffer is empty.

2. The producer must wait if the buffer is full.
Flow Chart for Producer – Consumer
Problem
Producer – Consumer Problem using
Semaphores
The semaphore concept was invented by the Dutch computer scientist Edsger Dijkstra in
1965 and has found widespread use in operating systems. The name comes from the
signalling system in which messages are sent by holding two flags in certain positions
according to an alphabetic code.

In operating systems, a semaphore is a variable or abstract data type that is used for
controlling access by multiple processes to a common resource in a parallel
programming environment.

In the solution below we use two semaphores, fillCount and emptyCount, to solve
the problem. fillCount is the number of items already in the buffer and available to be
read, while emptyCount is the number of available spaces in the buffer where items
could be written.

emptyCount is decremented and fillCount incremented when a new item is put into the
buffer. If the producer tries to decrement emptyCount when its value is zero, the
producer is put to sleep. The next time an item is consumed, emptyCount is
incremented and the producer wakes up. The consumer works analogously.
Producer – Consumer Problem using
Semaphores
semaphore fillCount = 0;               // items produced
semaphore emptyCount = BUFFER_SIZE;    // remaining space

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        putItemIntoBuffer(item);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        item = removeItemFromBuffer();
        up(emptyCount);
        consumeItem(item);
    }
}
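
The pseudocode above can be turned into a runnable program. The following is a minimal sketch using POSIX semaphores and pthreads; BUFFER_SIZE, NUM_ITEMS and the buffer-index variables are illustrative, and a mutex is added because down/up on the counting semaphores alone do not protect the buffer indices if several producers or consumers run at once.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 8
#define NUM_ITEMS   32

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;             /* next write / next read positions  */

static sem_t fillCount;                 /* items available to be consumed    */
static sem_t emptyCount;                /* free slots available to producer  */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg)
{
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&emptyCount);          /* down(emptyCount): wait for space  */
        pthread_mutex_lock(&lock);
        buffer[in] = i;                 /* putItemIntoBuffer(item)           */
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&fillCount);           /* up(fillCount): signal a new item  */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&fillCount);           /* down(fillCount): wait for an item */
        pthread_mutex_lock(&lock);
        int item = buffer[out];         /* removeItemFromBuffer()            */
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&emptyCount);          /* up(emptyCount): signal a free slot */
        printf("consumed %d\n", item);  /* consumeItem(item)                 */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&fillCount, 0, 0);
    sem_init(&emptyCount, 0, BUFFER_SIZE);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}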
Classical Problem in Concurrency or
Synchronization
The producer/consumer problem is a classic problem against which the design of
synchronization and concurrency mechanisms can be tested. The Readers–Writers,
Sleeping Barber and Dining Philosophers problems are other such problems.

1. Readers–Writers Problem

In computer science, the first and second readers–writers problems are examples of
a common computing problem in concurrency. The two problems deal with
situations in which many threads must access the same shared memory at one time,
some reading and some writing, with the natural constraint that no process may
access the shared data for reading or writing while another process is in the act of
writing to it. (In particular, two or more readers are allowed to access the shared
data at the same time.) A readers–writer lock is a data structure that solves one or
more of the readers–writers problems.
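
POSIX already provides such a readers–writer lock. The sketch below is not part of the original slides; it uses the pthread_rwlock API to let several readers proceed concurrently while a writer gets exclusive access, and shared_value is an illustrative shared variable.

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value = 0;

static void *reader(void *arg)
{
    pthread_rwlock_rdlock(&rw);         /* many readers may hold this at once */
    printf("read %d\n", shared_value);
    pthread_rwlock_unlock(&rw);
    return NULL;
}

static void *writer(void *arg)
{
    pthread_rwlock_wrlock(&rw);         /* writers get exclusive access       */
    shared_value++;
    pthread_rwlock_unlock(&rw);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    return 0;
}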
Classical Problem in Concurrency or
Synchronization
2. Sleeping Barber Problem
Consider a barber's shop where there is only one barber, one barber chair,
and a number of waiting chairs for the customers. When there are no
customers, the barber sits in the barber chair and sleeps. When a customer
arrives, he wakes the barber, or waits in one of the vacant chairs if the
barber is cutting someone else's hair. When all the chairs are full, the newly
arrived customer simply leaves.

3. Dining Philosophers Problem


The Dining Philosophers Problem was invented by E. W. Dijkstra. Imagine five
philosophers who spend their lives just thinking and eating. In the middle of the
dining room is a circular table with five chairs. The table has a big plate of spaghetti.
However, there are only five chopsticks available, as shown in the following figure.
Each philosopher thinks. When he gets hungry, he sits down and picks up the two
chopsticks that are closest to him. If a philosopher can pick up both chopsticks,
he eats for a while. After a philosopher finishes eating, he puts down the chopsticks
and starts to think.
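
A compact sketch of the dining philosophers using pthread mutexes as chopsticks is given below (not from the slides). Each philosopher locks the lower-numbered chopstick first, which breaks the circular wait and so avoids deadlock; N, the round count and the print statement are illustrative.

#include <pthread.h>
#include <stdio.h>

#define N 5
static pthread_mutex_t chopstick[N];

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;    /* lock lower-numbered first */
    int second = left < right ? right : left;

    for (int round = 0; round < 3; round++) {
        /* think for a while ... */
        pthread_mutex_lock(&chopstick[first]);
        pthread_mutex_lock(&chopstick[second]);
        printf("philosopher %d eats\n", i);      /* eat with both chopsticks */
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&chopstick[i], NULL);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}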
Dekker's and Peterson's Solution and Algorithm

 Dekker's algorithm is the first known correct solution to the mutual
exclusion problem in concurrent programming. The solution is attributed
to the Dutch mathematician Th. J. Dekker by Edsger W. Dijkstra in his manuscript
on cooperating sequential processes. It allows two threads to share a single-
use resource without conflict, using only shared memory for communication.

If two processes attempt to enter a critical section at the same time, the
algorithm will allow only one process in, based on whose turn it is. If one
process is already in the critical section, the other process will busy wait for the
first process to exit. This is done by the use of two flags, flag[0] and flag[1],
which indicate an intention to enter the critical section and a turn variable that
indicates who has priority between the two processes.
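
A minimal C sketch of Dekker's entry and exit protocol for two processes is given below (not from the slides). C11 atomics with the default sequentially consistent ordering stand in for the plain shared variables of the original formulation, because ordinary (or merely volatile) variables are not safe on modern hardware.

#include <stdatomic.h>
#include <stdbool.h>

/* Statically zero-initialized: both flags start false, turn starts at 0. */
static atomic_bool flag[2];    /* flag[i]: process i wants to enter */
static atomic_int  turn;       /* which process must yield          */

void dekker_enter(int i)
{
    int other = 1 - i;
    atomic_store(&flag[i], true);
    while (atomic_load(&flag[other])) {        /* the other also wants in          */
        if (atomic_load(&turn) == other) {     /* it is the other's turn: back off */
            atomic_store(&flag[i], false);
            while (atomic_load(&turn) == other)
                ;                              /* busy wait until it is our turn   */
            atomic_store(&flag[i], true);
        }
    }
    /* critical section follows */
}

void dekker_leave(int i)
{
    atomic_store(&turn, 1 - i);                /* hand priority to the other       */
    atomic_store(&flag[i], false);
}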
Dekker's and Peterson's Solution and Algorithm

 Peterson's algorithm is a concurrent programming algorithm for mutual
exclusion that allows two processes to share a single-use resource without
conflict, using only shared memory for communication. It was formulated
by Gary L. Peterson in 1981. While Peterson's original formulation worked with
only two processes, the algorithm can be generalized to more than two.

The algorithm uses two variables, flag and turn. A flag[n] value
of true indicates that process n wants to enter the critical section. Entrance
to the critical section is granted for process P0 if P1 does not want to enter its
critical section or if P1 has given priority to P0 by setting turn to 0.
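
For comparison, a minimal C sketch of Peterson's algorithm for two processes follows (not from the slides), again using C11 sequentially consistent atomics in place of plain shared variables. The small driver at the end exercises the protocol with two threads; the Dekker sketch above can be driven the same way.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

/* Statically zero-initialized: both flags start false, turn starts at 0. */
static atomic_bool flag[2];    /* flag[i]: process i wants to enter */
static atomic_int  turn;

void peterson_enter(int i)
{
    int other = 1 - i;
    atomic_store(&flag[i], true);              /* announce intention             */
    atomic_store(&turn, other);                /* give the other process priority */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                                      /* busy wait                      */
    /* critical section follows */
}

void peterson_leave(int i)
{
    atomic_store(&flag[i], false);
}

static long counter = 0;                       /* protected by the algorithm */

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        peterson_enter(i);
        counter++;                             /* critical section */
        peterson_leave(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}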
Threads and their management
Threads: A thread has a program counter that keeps track of which
instruction to execute next. Because threads have some of the properties of
processes, they are sometimes called lightweight processes (a minimal
thread-creation sketch follows the lists below).

Processes and Threads

Similarities:
1. Like processes, threads share the CPU, and only one thread is active at a time.
2. Like processes, threads within a process execute sequentially.
3. Like processes, threads can create children.
4. Like processes, if one thread is blocked, another thread can run.

Differences:
1. Unlike processes, threads are not independent of one another.
2. Unlike processes, all threads can access every address in the task.
3. Unlike processes, threads are designed to assist one another.
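
A minimal thread-creation sketch using the POSIX pthread library is given below (not part of the slides); thread_work is an illustrative function name.

#include <pthread.h>
#include <stdio.h>

static void *thread_work(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);   /* each thread has its own PC and stack */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, thread_work, &id);  /* create a new thread    */
    pthread_join(tid, NULL);                       /* wait for it to finish  */
    return 0;
}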
Difference between User-level thread and Kernel-
level thread
User-level threads:
1. User threads are normally created and scheduled by a threading library.
2. User threads are completely managed by the threading library.
3. User threads are cheaper to create than kernel threads.
4. They typically follow a many-to-one threading model (many user threads
   mapped onto one kernel thread).
5. User-level threads can run on any OS.

Kernel-level threads:
1. Kernel threads are created and scheduled by the kernel.
2. Kernel threads use the kernel scheduler; different kernel threads can run
   on different CPUs.
3. Kernel threads are more expensive to create than user threads.
4. They follow a one-to-one threading model.
5. Kernel-level threads are specific to the OS.
NITRA Technical Campus (802), Ghaziabad
B. Tech – CSE (6th Semester)
Operating System (KCS- 401)

2nd Assignment (Unit- 2), 23rd April 2022

What is a concurrent process? Explain the critical section problem with a suitable example.

Last date to submit the assignment: 3rd May, 2022

drbksharma@[Link]
drbknitra@[Link]
