Multiple-Processor Scheduling in Operating System
Last Updated : 11 Sep, 2024

In multiple-processor scheduling, multiple CPUs are available and load sharing becomes possible. However, multiple-processor scheduling is more complex than single-processor scheduling. When the processors are identical (homogeneous) in terms of their functionality, any available processor can be used to run any process in the queue.

What is CPU Scheduling?
CPU scheduling is the mechanism through which an operating system chooses which task or process the CPU executes at any instant in time. The major goal is to keep the CPU busy by assigning CPU time among the various processes effectively so that overall system performance is optimized. Several algorithms, such as Round Robin or Priority Scheduling, are applied to govern the scheduling of tasks.

What is Multiple-Processor Scheduling?
In systems containing more than one processor, multiple-processor scheduling decides how tasks are allocated among the CPUs. This allows higher throughput, since several tasks can be processed concurrently on separate processors. It also involves determining which CPU handles a particular task and balancing the load between the available processors.

Approaches to Multiple-Processor Scheduling
One approach is to have all scheduling decisions and I/O processing handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing; the scenario is called asymmetric multiprocessing. A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler of each processor examine the ready queue and select a process to execute.

1. Processor Affinity
Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, there are certain effects on the cache memory: the data most recently accessed by the process populate the cache of that processor, so successive memory accesses by the process are often satisfied from the cache. If the process migrates to another processor, the contents of the cache must be invalidated on the first processor and the cache of the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes from one processor to another and instead try to keep a process running on the same processor. This is known as processor affinity. There are two types of processor affinity:
Soft Affinity: The operating system has a policy of attempting to keep a process running on the same processor, but it does not guarantee that it will do so.
Hard Affinity: The process can specify a subset of processors on which it may run. Some systems, such as Linux, implement soft affinity but also provide system calls such as sched_setaffinity() that support hard affinity.
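Where hard affinity is available, it is exercised through calls like the Linux sched_setaffinity() mentioned above. The following is a minimal Linux-specific sketch (not a complete treatment of the API) that pins the calling process to CPUs 0 and 1 and then reads the mask back; the chosen CPU numbers are arbitrary and error handling is kept to a minimum.

```c
/* Minimal sketch of hard affinity on Linux using sched_setaffinity().
 * The CPU numbers below (0 and 1) are arbitrary examples. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                 /* allow CPU 0 */
    CPU_SET(1, &mask);                 /* allow CPU 1 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Read the mask back to confirm where the process may now run. */
    cpu_set_t current;
    CPU_ZERO(&current);
    if (sched_getaffinity(0, sizeof(current), &current) == 0) {
        for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
            if (CPU_ISSET(cpu, &current))
                printf("may run on CPU %d\n", cpu);
    }
    return 0;
}
```

On a machine with at least two CPUs, the program reports that it may run only on CPUs 0 and 1, and the scheduler will no longer migrate it elsewhere.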
2. Load Balancing
Load balancing keeps the workload evenly distributed across all processors in an SMP system. It is necessary only on systems where each processor has its own private queue of processes eligible to execute; on systems with a common run queue, load balancing is unnecessary, because once a processor becomes idle it immediately extracts a runnable process from the common run queue. On SMP (symmetric multiprocessing) systems it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor; otherwise one or more processors sit idle while other processors carry high workloads and have lists of processes awaiting the CPU. There are two general approaches to load balancing (a toy sketch of both appears after the multicore discussion below):
Push Migration: A specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving processes from overloaded processors to idle or less busy ones.
Pull Migration: An idle processor pulls a waiting task from a busy processor and executes it.

3. Multicore Processors
In multicore processors, multiple processor cores are placed on the same physical chip. Each core has its own register set to maintain its architectural state and thus appears to the operating system as a separate physical processor. SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip. However, multicore processors may complicate the scheduling problem. When a processor accesses memory, it can spend a significant amount of time waiting for the data to become available; this situation is called a memory stall. It occurs for various reasons, such as a cache miss, which is an access to data that is not in the cache memory. In such cases the processor can spend up to fifty percent of its time waiting for data to become available from memory. To solve this problem, recent hardware designs implement multithreaded processor cores in which two or more hardware threads are assigned to each core, so that if one thread stalls while waiting for memory, the core can switch to another thread. There are two ways to multithread a processor:
Coarse-Grained Multithreading: A thread executes on a processor until a long-latency event, such as a memory stall, occurs; because of the delay caused by that event, the processor switches to another thread. The cost of switching between threads is high, as the instruction pipeline must be flushed before the other thread can begin execution on the processor core. Once the new thread begins execution, it starts filling the pipeline with its instructions.
Fine-Grained Multithreading: This form of multithreading switches between threads at a much finer granularity, typically at the boundary of an instruction cycle. The architectural design of fine-grained systems includes logic for thread switching, so the cost of switching between threads is small.
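Here is the toy sketch of push and pull migration promised in the load-balancing discussion above. It assumes nothing more than per-CPU counters of runnable tasks; all names (cpu_queue, push_migration, pull_migration, IMBALANCE) are invented for this illustration, and real schedulers use far more elaborate heuristics.

```c
/* Toy illustration of push and pull migration over per-CPU run queues.
 * Queues are modelled as plain counters of runnable tasks. */
#include <stdio.h>

#define NCPUS 4
#define IMBALANCE 2   /* tolerated difference in queue lengths */

static int cpu_queue[NCPUS] = { 6, 1, 0, 3 };  /* runnable tasks per CPU */

/* Push migration: a periodic task scans all queues and moves work from
 * the most loaded CPU found in the scan to the least loaded one. */
static void push_migration(void)
{
    int busiest = 0, idlest = 0;
    for (int cpu = 1; cpu < NCPUS; cpu++) {
        if (cpu_queue[cpu] > cpu_queue[busiest]) busiest = cpu;
        if (cpu_queue[cpu] < cpu_queue[idlest])  idlest  = cpu;
    }
    while (cpu_queue[busiest] - cpu_queue[idlest] > IMBALANCE) {
        cpu_queue[busiest]--;
        cpu_queue[idlest]++;
        printf("push: moved a task from CPU %d to CPU %d\n", busiest, idlest);
    }
}

/* Pull migration: an idle CPU takes one task from the busiest queue. */
static void pull_migration(int idle_cpu)
{
    int busiest = 0;
    for (int cpu = 1; cpu < NCPUS; cpu++)
        if (cpu_queue[cpu] > cpu_queue[busiest]) busiest = cpu;
    if (cpu_queue[busiest] > 0 && busiest != idle_cpu) {
        cpu_queue[busiest]--;
        cpu_queue[idle_cpu]++;
        printf("pull: CPU %d took a task from CPU %d\n", idle_cpu, busiest);
    }
}

int main(void)
{
    push_migration();
    pull_migration(2);
    for (int cpu = 0; cpu < NCPUS; cpu++)
        printf("CPU %d: %d runnable task(s)\n", cpu, cpu_queue[cpu]);
    return 0;
}
```

Push migration is driven by a periodic scan regardless of what the CPUs are doing, while pull migration is triggered by the idle CPU itself; many systems, Linux among them, combine both approaches.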
4. Virtualization and Threading
With virtualization, even a single-CPU system can act like a multiple-processor system. The virtualization layer presents one or more virtual CPUs to each of the virtual machines running on the system and then schedules the use of the physical CPUs among the virtual machines. Most virtualized environments have one host operating system and many guest operating systems. The host operating system creates and manages the virtual machines. Each virtual machine has a guest operating system installed, and applications run within that guest. Each guest operating system may be used for specific purposes, applications, or users, including time sharing or even real-time operation. Any guest operating-system scheduling algorithm that assumes a certain amount of progress in a given amount of time will be negatively impacted by virtualization. A time-sharing operating system that tries to allot 100 milliseconds to each time slice in order to give users a reasonable response time may find that a 100-millisecond time slice takes much more than 100 milliseconds of virtual CPU time. Depending on how busy the system is, the time slice may take a second or more, resulting in very poor response times for users logged into that virtual machine. The net effect of such scheduling layering is that individual virtualized operating systems receive only a portion of the available CPU cycles, even though they believe they are receiving all of the cycles and that they are scheduling all of those cycles. Commonly, the time-of-day clocks in virtual machines are incorrect because timers take longer to trigger than they would on dedicated CPUs. Virtualization can thus undo the good scheduling-algorithm efforts of the operating systems within virtual machines.

Conclusion
Scheduling is an indispensable part of any operating system, and CPU scheduling ensures that tasks are accomplished without leaving the CPU idle. The introduction of multiprocessor systems considerably complicates matters and requires strategies such as processor affinity and load balancing for effective distribution of work and optimization of execution. Understanding these concepts helps explain how an operating system keeps every processor productively busy.