1. Define Process. Explain the Major Components of a Process
The notion of a process is central to the understanding of operating systems. The term process is used somewhat interchangeably with task or job. Quite a few definitions appear in the literature, for instance: a program in execution; an asynchronous activity; the entity to which processors are assigned; the dispatchable unit; and many more. The definition "a program in execution" seems to be the most frequently used, and it is the concept we will use in the present study of operating systems.

Now that we have agreed upon the definition of a process, the question is: what is the relation between a process and a program? Are they the same thing under different names, or is a program simply a process that is not currently executing? To be precise, a process is not the same as a program. A process is more than the program code. A process is an active entity, as opposed to a program, which is considered a passive entity. A program is an algorithm expressed in some programming language; being passive, a program is only one part of a process. A process, on the other hand, includes:
- the current value of the program counter (PC);
- the contents of the processor's registers;
- the values of the variables;
- the process stack, which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables;
- a data section that contains global variables.

A process is the unit of work in a system. In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU; in reality, the CPU switches back and forth among the processes.
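The components listed above are exactly what an operating system must record when it switches a process off the CPU. A minimal, illustrative sketch in Python is given below; the `ProcessControlBlock` class and its field names are invented for this sketch and do not correspond to any real kernel's structures:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative snapshot of a process's state (not a real kernel structure)."""
    pid: int
    program_counter: int = 0                       # next instruction to execute
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    stack: list = field(default_factory=list)      # return addresses, parameters, locals
    data: dict = field(default_factory=dict)       # global variables (data section)

# Two processes can run the same program code yet remain distinct,
# because each carries its own PC, registers, stack, and data:
p1 = ProcessControlBlock(pid=1, program_counter=100, data={"count": 5})
p2 = ProcessControlBlock(pid=2, program_counter=42, data={"count": 9})
assert p1.data != p2.data   # same program, different processes
```

The point of the sketch is the distinction drawn in the text: the program (the code) is shared and passive, while each process is the active combination of that code with its own execution state.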
2. Describe the following: A.) Layered Approach B.) Micro Kernels C.) Virtual Machines
Layered Approach

With proper hardware support, operating systems can be broken into pieces that are smaller and more appropriate than those allowed by the original MS-DOS or UNIX systems. The operating system can then retain much greater control over the computer and over the applications that make use of that computer. Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems. Under the top-down approach, the overall functionality and features are determined and then separated into components. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit, provided that the external interface of the routine stays unchanged and that the routine itself performs the advertised task.

A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken up into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

Fig. 2.2: Layered Architecture (from top to bottom: Users; File Systems; Inter-process Communication; I/O and Device Management; Virtual Memory; Primitive Process Management; Hardware)

An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.

The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses the functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system, because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions.
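The layer discipline can be sketched as a chain of functions in which each layer calls only the layer directly below it. This is a toy illustration, not real operating-system code; the layer names loosely follow Fig. 2.2, and every function is invented for the sketch:

```python
# Toy model of a strictly layered system: each layer calls only lower layers.

def hardware_read(block: int) -> str:
    # Layer 0: the hardware, assumed correct.
    return f"raw-bytes-of-block-{block}"

def process_mgmt_read(block: int) -> str:
    # Layer 1: may use only layer 0.
    return hardware_read(block)

def virtual_memory_read(block: int) -> str:
    # Layer 2: may use only layers below it.
    return process_mgmt_read(block)

def file_system_read(path: str) -> str:
    # A higher layer: never touches the hardware directly.
    block = len(path) % 8   # stand-in for a real directory lookup
    return virtual_memory_read(block)
```

Because `file_system_read` never calls `hardware_read` directly, each layer can be debugged in turn while the layers beneath it are treated as already correct, which is precisely the debugging advantage described above.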
Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it have already been debugged. Thus, the design and implementation of the system are simplified.

Each layer is implemented with only those operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.

The major difficulty with the layered approach involves appropriately defining the various layers. Because a layer can use only lower-level layers, careful planning is necessary. For example, the device driver for the backing store (the disk space used by virtual-memory algorithms) must be at a lower level than the memory-management routines, because memory management requires the ability to use the backing store. Other requirements may not be so obvious. The backing-store driver would normally be above the CPU scheduler, because the driver may need to wait for I/O, and the CPU can be rescheduled during this time. However, on a large system, the CPU scheduler may have more information about all the active processes
than can fit in memory. Therefore, this information may need to be swapped in and out of memory, requiring the backing-store driver routine to be below the CPU scheduler.

A final problem with layered implementations is that they tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the CPU-scheduling layer, which then passes the request to the hardware. At each layer, the parameters may be modified, data may need to be passed, and so on. Each layer adds overhead to the system call; the net result is a system call that takes longer than one on a non-layered system. These limitations have caused a small backlash against layering in recent years: fewer layers with more functionality are being designed, providing most of the advantages of modularized code while avoiding the difficult problems of layer definition and interaction.

Micro-kernels

We have already seen that as UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility.

Fig. 2.3: Microkernel Architecture (user-space services and device drivers above a small microkernel, which runs directly on the hardware)

The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication is provided by message passing.
For example, if a client program wishes to access a file, it must interact with the file server. The client program and the service never interact directly; rather, they communicate indirectly by exchanging messages with the microkernel.

One benefit of the microkernel approach is the ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user rather than kernel processes; if a service fails, the rest of the operating system remains untouched.

Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services. The following figure shows the UNIX operating system architecture: at the center is the hardware, covered by the kernel; above that are the UNIX utilities and the command interface, such as the shell (sh).

(Figure: user-space components such as the file server, client process, and virtual-memory server communicate through the microkernel.)
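The message-passing pattern described above can be sketched as a toy simulation; the `Microkernel` class and its methods are invented for illustration, with a plain queue standing in for the kernel's message-transport facility:

```python
import queue

class Microkernel:
    """Toy microkernel: its only job is to route messages between user-space parties."""

    def __init__(self):
        self.mailboxes = {}                  # one message queue per registered server

    def register(self, name):
        self.mailboxes[name] = queue.Queue()

    def send(self, dest, message):
        # Client -> kernel -> server: the client never calls the server directly.
        self.mailboxes[dest].put(message)

    def receive(self, name):
        # A server picks up its next pending request.
        return self.mailboxes[name].get()

kernel = Microkernel()
kernel.register("file_server")

# The client asks the file server for a read, purely by message.
kernel.send("file_server", {"op": "read", "path": "/tmp/demo"})
request = kernel.receive("file_server")
assert request["op"] == "read"
```

Note how adding a new service would only mean registering another mailbox in user space; nothing inside the kernel's routing code changes, which is the extensibility benefit described in the text.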
Virtual Machines

The layered approach of operating systems is taken to its logical conclusion in the concept of the virtual machine. The fundamental idea behind a virtual machine is to abstract the hardware of a single computer (the CPU, memory, disk drives, network interface cards, and so forth) into several different execution environments, thereby creating the illusion that each separate execution environment is running its own private computer. By using CPU-scheduling and virtual-memory techniques, an operating system can create the illusion that a process has its own processor with its own (virtual) memory. Normally, a process is given additional features, such as system calls and a file system, which are not provided by the bare hardware. The virtual-machine approach does not provide any such additional functionality; rather, it provides an interface that is identical to the underlying bare hardware. Each process is provided with a (virtual) copy of the underlying computer.

Hardware virtual machine

The original meaning of virtual machine, sometimes called a hardware virtual machine, is that of a number of discrete, identical execution environments on a single computer, each of which runs an operating system (OS). This can allow applications written for one OS to be executed on a machine that runs a different OS, or provide execution "sandboxes", which give a greater level of isolation between processes than is achieved when running multiple processes on the same instance of an OS. One use is to give multiple users the illusion of having an entire computer, one that is their "private" machine, isolated from other users, all on a single physical machine. Another advantage is that booting and restarting a virtual machine can be much faster than with a physical machine, since it may be possible to skip tasks such as hardware initialization. Such software is now often referred to with the terms virtualization and virtual servers.
The host software which provides this capability is often referred to as a virtual machine monitor or hypervisor. Software virtualization can be done in three major ways:

- Emulation, full system simulation, or "full virtualization with dynamic recompilation": the virtual machine simulates the complete hardware, allowing an unmodified OS for a completely different CPU to be run.
- Paravirtualization: the virtual machine does not simulate hardware but instead offers a special API that requires OS modifications. An example of this is XenSource's XenEnterprise (www.xensource.com).
- Native virtualization and "full virtualization": the virtual machine simulates only enough hardware to allow an unmodified OS to be run in isolation, but the guest OS must be designed for the same type of CPU. The term native virtualization is also sometimes used to indicate that hardware assistance through virtualization technology is being used.

Application virtual machine

Another meaning of virtual machine is a piece of computer software that isolates the application being used by the user from the computer. Because versions of the virtual machine are written for various computer platforms, any application written for the virtual machine can be run on any of those platforms, instead of having to produce a separate version of the application for each computer and operating system. The application is run on the computer using an interpreter or just-in-time (JIT) compilation. One of the best-known examples of an application virtual machine is Sun Microsystems' Java Virtual Machine.
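The core idea of an application virtual machine, a program that interprets portable instructions so the same "bytecode" runs on any platform hosting the interpreter, can be illustrated with a toy stack machine. The instruction set below is entirely invented for this sketch; real VMs such as the JVM are vastly more elaborate:

```python
# Toy application VM: a stack machine interpreting portable "bytecode".
# The same instruction list runs unchanged wherever the interpreter runs.
def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])       # push a constant onto the stack
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)         # replace top two values with their sum
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)         # replace top two values with their product
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# One portable program, computing (2 + 3) * 4:
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
assert run(program) == 20
```

Porting the application to a new platform only requires porting the interpreter (`run`), not the program itself, which is the "write once, run anywhere" property the text attributes to application virtual machines.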
3. Memory management is important in operating systems. Discuss the main problems that can occur if memory is managed poorly.