OS Assignment
b. Intermediate-Level Scheduler:
Manages the swapping of processes in and out of memory.
Optimizes memory utilization.
Performs medium-term management.
c. Dispatcher:
Selects the next process to run on the CPU.
Performs context switching.
Performs short-term decision-making.
8.2 Which level of scheduler should make a decision on each of the
following questions? a. Which ready process should be assigned a
processor when one becomes available? b. Which of a series of waiting
batch processes that have been spooled to disk should next be initiated?
c. Which processes should be temporarily suspended to relieve a short-
term burden on the processor? d. Which temporarily suspended process
that is known to be I/O bound should be activated to balance the
multiprogramming mix?
Answer:
The responsibilities of the three levels of schedulers are:
Long-term scheduler decides which processes to bring into the
memory from the disk.
Medium-term scheduler temporarily suspends processes, moves
processes between the ready queue and the wait queue, and decides
which processes to swap out of memory.
Short-term scheduler selects a process from the ready queue and
allocates the CPU to it.
Here is a table summarizing the responsibilities of the three levels of
schedulers in a shorter form:
Scheduler      Responsibility
Long-term      Brings processes into memory
Medium-term    Suspends, resumes, and swaps processes
Short-term     Selects a ready process and allocates the CPU
Applying these levels to the questions:
a. Short-term (low-level) scheduler: it assigns the processor to a ready process when one becomes available.
b. Long-term (high-level) scheduler: it decides which spooled batch job should be initiated next.
c. Medium-term (intermediate-level) scheduler: it temporarily suspends processes to relieve a short-term burden on the processor.
d. Medium-term (intermediate-level) scheduler: it reactivates a suspended I/O-bound process to balance the multiprogramming mix.
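For illustration, here is a minimal Python sketch (the queue names and process names are assumptions, not part of the assignment) that models the three levels as moves between a spooled job pool, a ready queue, and a suspended set:

```python
# Minimal sketch: the three scheduler levels as moves between queues.
from collections import deque

job_pool = deque(["batch1", "batch2"])   # spooled jobs on disk (long-term input)
ready = deque()                          # processes in memory, ready to run
suspended = deque()                      # processes swapped out of memory

def long_term_admit():
    """Long-term: bring a spooled job from disk into memory (ready queue)."""
    if job_pool:
        ready.append(job_pool.popleft())

def medium_term_suspend():
    """Medium-term: swap a ready process out of memory to relieve load."""
    if ready:
        suspended.append(ready.popleft())

def medium_term_resume():
    """Medium-term: swap a suspended process back into memory."""
    if suspended:
        ready.append(suspended.popleft())

def short_term_dispatch():
    """Short-term: pick the next ready process and give it the CPU."""
    return ready.popleft() if ready else None

long_term_admit()             # question (b): long-term decides which spooled job starts
medium_term_suspend()         # questions (c)/(d): medium-term suspends and resumes
medium_term_resume()
print(short_term_dispatch())  # question (a): short-term assigns the processor
```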
For example, the FCFS policy states that the next process to run should be
the one that has been waiting in the queue the longest. The round robin
mechanism implements the FCFS policy by giving each process a time slice,
and then rotating the CPU to the next process when the time slice expires.
Policy: Defines the criteria for choosing the next process to run. Examples: FCFS, SJF, priority scheduling.
Mechanism: Defines the procedure for implementing the policy. Examples: round robin, priority queue, multilevel feedback queue.
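As a rough illustration of the policy/mechanism split, the following Python sketch (process names and burst times are assumed) implements the FCFS ordering policy with a round-robin time-slice mechanism:

```python
# Minimal sketch: round robin as a mechanism that implements the FCFS policy.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: remaining_cpu_time}; returns the completion order."""
    queue = deque(bursts.keys())          # FCFS policy: longest-waiting first
    remaining = dict(bursts)
    finished = []
    while queue:
        pid = queue.popleft()             # take the head of the queue
        run = min(quantum, remaining[pid])
        remaining[pid] -= run             # run for one time slice
        if remaining[pid] == 0:
            finished.append(pid)
        else:
            queue.append(pid)             # slice expired: back of the queue
    return finished

print(round_robin({"P1": 5, "P2": 2, "P3": 9}, quantum=3))  # ['P2', 'P1', 'P3']
```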
Answer:
The following are the scheduling objectives and their corresponding
descriptions:
Fairness : Comparable processes should be treated comparably; no process
should be arbitrarily favored or neglected.
Throughput : The number of processes that can be completed per
unit time.
Interactive response time : The time it takes for the system to respond
to an interactive user's request.
Predictability : The ability to predict how long a process will take to
run.
Overhead : The amount of time and resources that are used by the
scheduling algorithm.
Resource utilization : The extent to which the system's resources are
being used.
Response and utilization balance : The trade-off between giving
processes a good response time and ensuring that the system's
resources are being used efficiently.
Indefinite postponement : The situation where a process waits arbitrarily
long and is never given a chance to run; the scheduler should avoid this.
Obeying priorities : The scheduling algorithm should give higher
priority processes a better chance to run.
Giving preference to processes that hold key resources : The
scheduling algorithm should give higher priority to processes that are
holding key resources that are needed by other processes.
Giving a lower grade of service to high overhead processes : The
scheduling algorithm should give lower priority to processes that
have a high overhead.
Degrading gracefully under heavy loads : The system should continue
to function even when it is under heavy load.
Now, let's look at the specific scheduling objectives that are most directly
applicable to each of the given cases:
Case i : This case is about fairness, so the most directly applicable
objective is fairness.
Case ii : This case is about predictability, so the most directly
applicable objective is predictability.
Case iii : This case is about resource utilization, so the most directly
applicable objective is resource utilization.
Case iv : This case is about favoring important processes, so the most
directly applicable objective is obeying priorities.
Case v : This case is about avoiding indefinite postponement, so the
most directly applicable objective is avoiding indefinite
postponement.
Case vi : This case is about minimizing overhead, so the most directly
applicable objective is minimizing overhead.
Case vii : This case is about favoring I/O-bound processes, so the most
directly applicable objective is favoring I/O-bound processes.
Case viii : This case is about context switches, so the most directly
applicable objective is degrading gracefully under heavy loads.
8.6 State which of the following are true and which false. Justify your
answers. a. A process scheduling discipline is preemptive if the processor
cannot be forcibly removed from a process. b. Real-time systems generally
use preemptive processor scheduling. c. Timesharing systems generally
use nonpreemptive processor scheduling. d. Turnaround times are more
predictable in preemptive than in nonpreemptive systems. e. One
weakness of priority schemes is that the system will faithfully honor the
priorities, but the priorities themselves may not be meaningful.
Answer:
Here are the answers to the statements about scheduling disciplines:
a. False. A preemptive scheduling discipline is one in which the processor can
be forcibly removed from a running process; the statement describes the
opposite (nonpreemptive) case.
b. True. Real-time systems need to meet deadlines, so preempting a
running process is important if a higher priority process arrives.
c. False. Timesharing systems use preemptive scheduling to ensure that all
users get a fair share of the CPU.
d. False. Turnaround times are less predictable in preemptive systems because a
running process can be preempted and returned to the ready queue many times, so
its completion time depends on the arrival and behavior of the other processes.
e. True. Priority schemes are based on the assumption that the priorities
assigned to processes are meaningful. However, this is not always the case.
Answer:
Here is a brief explanation of each of the statements:
Static priorities are priorities that are assigned to processes when
they are created. They do not change over time, regardless of the
process's behavior.
Dynamic priorities are priorities that can change over time, based on
the process's behavior. For example, a process's priority might be
increased if it is waiting for a long time for a resource.
Based on these definitions, the following statements are true:
Static priorities are easier to implement because the operating
system does not need to track the process's behavior over time.
Static priorities require less runtime overhead because the operating
system does not need to update the process's priority as often.
Dynamic priorities are more responsive to changes in a process's
environment because the operating system can adjust the process's
priority as needed.
Static priorities require more careful deliberation over the initial
priority value chosen because the priority cannot be changed later.
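One way to picture the difference is the sketch below (the field names and the aging rule are assumptions for illustration, not any particular operating system's algorithm): a dynamic-priority scheduler boosts processes that have been waiting, which a static scheme cannot do.

```python
# Minimal sketch: dynamic priorities via aging of waiting processes.
def age_priorities(procs, boost=1):
    """procs: list of dicts with 'pid', 'priority', 'waiting'.
    Lower number = higher priority; waiting processes are boosted each tick."""
    for p in procs:
        if p["waiting"]:
            p["priority"] = max(0, p["priority"] - boost)  # dynamic adjustment
    return sorted(procs, key=lambda p: p["priority"])

procs = [{"pid": "A", "priority": 5, "waiting": True},
         {"pid": "B", "priority": 2, "waiting": False}]
for _ in range(4):                      # after enough ticks, A overtakes B
    procs = age_priorities(procs)
print([p["pid"] for p in procs])        # ['A', 'B']
```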
Answer:
Accurately predicting task execution time: It is difficult to predict the
execution time of tasks, especially for tasks that are I/O-bound or that
have unpredictable behavior.
Minimizing average waiting time: It is difficult to minimize the average
waiting time of all tasks, especially if there are many tasks with different
deadlines.
Avoiding starvation: It is difficult to avoid starvation, which is the situation
where a task is never given a chance to run because it is always
preempted by higher priority tasks.
Handling preemptions: It is difficult to handle preemptions gracefully, which
means that the algorithm must be able to quickly re-schedule tasks that
have been preempted, and it must also ensure that the tasks still meet
their deadlines.
Dealing with uncertainty: It is difficult to deal with uncertainty in the system,
such as uncertainty about the arrival time of tasks, the execution time of
tasks, and the availability of resources.
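For concreteness, here is a minimal sketch of one common deadline-driven approach, earliest deadline first (the task names, run-time estimates, and deadlines are assumed); it also shows why inaccurate execution-time estimates make deadline guarantees hard, since the feasibility check is only as good as the estimates fed into it:

```python
# Minimal sketch: earliest-deadline-first ordering with a feasibility check.
def edf_schedule(tasks):
    """tasks: list of (name, estimated_run_time, deadline). Returns run order,
    flagging any task whose deadline would be missed under the estimates."""
    order = sorted(tasks, key=lambda t: t[2])   # earliest deadline first
    now, result = 0, []
    for name, run, deadline in order:
        now += run
        result.append((name, "ok" if now <= deadline else "misses deadline"))
    return result

print(edf_schedule([("T1", 3, 5), ("T2", 4, 8), ("T3", 2, 6)]))
# [('T1', 'ok'), ('T3', 'ok'), ('T2', 'misses deadline')]
```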
Answer:
Here is an example showing why FIFO is not an appropriate processor
scheduling scheme for interactive users:
Suppose there are two processes, P1 and P2, running on a single processor
system. P1 is a long-running batch process, while P2 is an interactive process
that is waiting for user input.
Under FIFO scheduling, P1 will be given the CPU first, even though P2 is waiting
for user input. This means that P2 will have to wait until P1 finishes running
before it can get a chance to run. This can lead to a poor user experience, as the
user will have to wait a long time for their process to start running.
More generally, FIFO is nonpreemptive and provides no way to favor short,
interactive requests over long, processor-bound ones, so response times degrade
badly whenever a long process reaches the head of the queue.
8.11 Using the example from the previous problem, show why round-robin
is a better scheme for interactive users.
Answer:
Using the same example, under round robin P1 and P2 share the processor in
fixed time slices. When the interactive process P2 becomes ready, it waits at
most one quantum (plus a context switch) before receiving the CPU, rather than
waiting for the long batch process P1 to finish. The interactive user therefore
gets a prompt response while the batch work still makes progress, as the sketch
below illustrates.
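A small sketch comparing the two schemes (the burst times and quantum are assumed) shows how long the interactive process P2 waits before it first runs:

```python
# Minimal sketch: time until P2 first runs under FIFO versus round robin.
def first_dispatch_fifo(bursts, target):
    """Time at which `target` first runs when processes run to completion in order."""
    t = 0
    for pid, burst in bursts:
        if pid == target:
            return t
        t += burst
    return None

def first_dispatch_rr(bursts, target, quantum):
    """Time at which `target` first runs under round robin with the given quantum."""
    t = 0
    for pid, burst in bursts:            # first pass over the ready queue
        if pid == target:
            return t
        t += min(quantum, burst)
    return None

jobs = [("P1", 100), ("P2", 1)]          # long batch job, short interactive request
print(first_dispatch_fifo(jobs, "P2"))           # 100: waits for all of P1
print(first_dispatch_rr(jobs, "P2", quantum=5))  # 5: waits at most one quantum
```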
Answer:
A small quantum size will lead to frequent preemptions, which can reduce
performance.
A large quantum size will reduce preemptions, but it leads to poor response
times for interactive and I/O-bound processes, which must wait behind long CPU
bursts.
The ideal quantum size is slightly greater than the time it takes for an I/O-
bound process to generate an I/O request.
The ideal quantum size also varies with the specific system and workload; in
practice, the best approach is to experiment with different values and measure
which gives the best balance of responsiveness and throughput.
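A rough worked example (the per-switch cost and quantum values are assumed) of how the quantum size trades off against context-switch overhead:

```python
# Minimal sketch: fraction of CPU time lost to context switching for
# different quantum sizes, with a fixed per-switch cost.
def overhead_fraction(quantum_ms, switch_ms=0.1):
    """Each quantum of useful work is followed by one context switch."""
    return switch_ms / (quantum_ms + switch_ms)

for q in (1, 10, 100):
    print(f"quantum {q} ms -> {overhead_fraction(q):.1%} overhead")
# quantum 1 ms -> 9.1% overhead
# quantum 10 ms -> 1.0% overhead
# quantum 100 ms -> 0.1% overhead
```

A small quantum keeps the system responsive but wastes a larger share of the CPU on switching; a large quantum wastes little but makes interactive processes wait longer.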
8.14 State why each of the following is incorrect. a. SPF never has a higher
throughput than SRT. b. SPF is fair. c. The shorter the process, the better
the service it should receive. d. Because SPF gives preference to short
processes, it is useful in timesharing.
Answer:
a. SPF never has a higher throughput than SRT.
This statement is incorrect because SPF can have a higher throughput than SRT in
some cases. SRT preempts the running process whenever a shorter one arrives, and
every preemption costs a context switch. SPF runs each selected process to
completion and avoids that overhead, so when arrivals cause frequent preemptions
and context switches are expensive, SPF can complete more processes per unit
time than SRT (see the worked example below).
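A rough worked example (the total work, switch cost, and switch counts are assumed numbers, not derived from the question) of how preemption overhead can let SPF finish the same set of jobs sooner:

```python
# Minimal sketch: with a per-switch cost, SRT's extra context switches can
# make SPF finish the same work sooner, i.e. give higher throughput.
total_work = 8          # total CPU demand of three processes
switch_cost = 0.5       # cost of one context switch
spf_switches = 2        # nonpreemptive: one switch between consecutive jobs
srt_switches = 3        # preemptive: one extra switch caused by a preemption

spf_time = total_work + spf_switches * switch_cost   # 9.0
srt_time = total_work + srt_switches * switch_cost   # 9.5
print(3 / spf_time, 3 / srt_time)  # SPF completes the 3 jobs in less total time
```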
b. SPF is fair.
This statement is incorrect because SPF is not fair to all processes. It gives
preference to short processes, which means that long processes may have
to wait a long time to run. This can be unfair to processes that have
important deadlines.
c. The shorter the process, the better the service it should receive.
This statement is incorrect because it is not always true. For example, a long
process that is close to finishing has only a small amount of work left and may
be more important than a short process that is just starting. In that case it is
better to give the nearly finished process the CPU, even though its total length
is greater.
d. Because SPF gives preference to short processes, it is useful in timesharing.
This statement is incorrect because SPF is nonpreemptive and requires run-time
estimates in advance. Timesharing needs preemption to guarantee every
interactive user a quick response, and interactive run times cannot be estimated
reliably, so preemptive schemes such as round robin are used instead.