Operating System Overview

Booting sequence - the steps followed when the computer is started:
1) Switch on the power supply.
2) BIOS will load the OS.
3) Run the programs needed by the user.

BIOS - Basic Input Output System (stored in ROM)
- Firmware (lies between hardware and software).
- Provides an interface between the user and the computer.

Functions of an operating system:
- Security
- Control over system performance
- Job accounting
- Error detecting
- Coordination between user and computer
- Memory manager
- Processor manager
- Device manager
- File manager
Batch processing (used around 1940-1950):
- A technique in which jobs (data + programs) with similar requirements are collected together into a batch before processing starts, and the whole batch is then run by the operating system.

Multiprogramming:
- When two or more programs reside in memory at the same time, it is called multiprogramming.
- Multiprogramming increases CPU utilization: the CPU always has some job to execute, giving high and efficient CPU utilization.

Multitasking (time sharing):
- A logical extension of multiprogramming; the OS switches the CPU between jobs so frequently that each user gets an immediate response while a longer operation is still in progress.
Operating System Structure

Simple (monolithic) structure - limitations:
- Code written in this structure is difficult to port.
- It generates more errors and bugs, which are difficult to resolve because every service runs in the same location as the kernel.
- Adding and removing features is difficult.
Layered structure:
- The OS is divided into a number of layers; each layer uses only the services of the layers below it.

Microkernel structure:
- Move all non-essential services out of the kernel into user space.
- Keep the kernel efficient and as small as possible.
Comparison: Microkernel vs Monolithic kernel

Comparison | Microkernel                              | Monolithic kernel
Basic      | user services and kernel services run in | user services and kernel services run in
           | separate address spaces                  | the same address space
Size       | smaller in size                          | larger in size
Execution  | slow execution                           | fast execution
Extendible | easily extendable                        | hard to extend
Crash      | a crashed service does not affect        | a crashed service affects the
           | the working of the kernel                | whole system
Code       | more code is required                    | less code is required
Example    | QNX, Symbian                             | Linux etc.

System calls:
- Provide an interface to the services made available by the operating system; requests for these services are made through system calls.
- System calls are written in C and C++.

Types:
1) Process control -
   load, execute, end, abort, terminate process, wait for time, wait event, signal event, allocate and free memory.
2) File management -

Linux kernel and shell:
- Both the kernel and the shell are used to perform any operation on the system.
- When the user gives a command for performing an operation, the request first goes to the shell (command interpreter), which translates the human-readable command into machine language and passes it to the kernel, which then performs the actual operation.
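As a small, hedged illustration of the process-control calls listed above, the following POSIX C sketch creates a child with fork, loads a new program into it with execlp, and waits for it with waitpid. The command being run (ls -l) is only an illustrative choice, not something taken from the notes.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create (load) a new process */
    if (pid < 0) {
        perror("fork");               /* creation failed */
        exit(1);
    } else if (pid == 0) {
        /* child: replace its image with a new program (execute) */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        exit(1);
    } else {
        int status;
        waitpid(pid, &status, 0);     /* parent: wait for the child to end */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}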
Process and Scheduling of Processes

Process:
- A program is a passive entity stored in an executable file.
- A process is an active entity.
- A program becomes a process when the executable file is loaded into memory.

Parts of a process:
- Current activity, including the program counter and the contents of the registers.
- Stack: contains temporary data (function parameters, return addresses, local variables).
- Data section: contains global variables.
- Heap: memory dynamically allocated at run time.

(Layout of a process in memory: stack at the top, then heap, data and text.)

Process states:
- running - instructions are being executed
- terminated (exit) - finished execution
Process Control Block (PCB):
- Process state: running, waiting, etc.
- Process number
- Program counter: location of the next instruction to execute
- Registers: contents of all process-centric registers
- Memory limits
- List of open files

Process creation:
- A process is identified and managed via a process identifier (pid).
- Resource sharing options: parent and children share all resources.
- Execution options: the parent executes concurrently with the child, or the parent waits until the child terminates.
- Address-space possibilities: the child process is a duplicate of the parent, or the child has a new program loaded into it.

Process termination:
- exit: the process finishes execution and output data is returned from the child to the parent.
- A parent may terminate a child when the task assigned to the child is no longer required.
- Some systems terminate the child if its parent terminates.
Threads

- A thread is a lightweight process; each thread belongs to exactly one process.
- A thread is a flow of execution through the process code, with its own program counter, registers and stack.
- Threads provide a way to improve application performance.

Single-threaded vs multi-threaded: a single-threaded process has one flow of control; a multi-threaded process has several threads that share the process code and data.

User-level threads (ULT):
- Managed by a thread library in user space; fast and efficient to create and switch.

Kernel-level threads (KLT):
- Handled by the OS directly; thread management is done by the kernel.

Advantages of KLT:
- The kernel can simultaneously schedule multiple threads.
- If one thread blocks, the kernel can schedule another thread.

Disadvantages of KLT:
- Generally slower to create and manage.
- More overhead, since switching involves the kernel.

Multithreading models (types):
1) One-to-one
2) Many-to-one
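A minimal sketch of a multi-threaded program using POSIX threads (pthread_create and pthread_join). Assuming a Linux-like system, these library threads are backed by kernel-level threads; that mapping is an assumption about the platform, not something stated in the notes.

#include <pthread.h>
#include <stdio.h>

/* each thread is a separate flow of execution with its own stack and program counter */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[3];
    int ids[3] = {0, 1, 2};

    for (int i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);   /* create three threads */

    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);                       /* wait for each to finish */

    return 0;
}

Compile with the -pthread flag (e.g. gcc file.c -pthread).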
CPU Scheduling

- CPU scheduling is the process of determining which process will own the CPU for execution while another process is on hold.

Types:
- Non-preemptive: once the CPU is allocated to a process, the process keeps it until it finishes.
- Preemptive: the CPU executes a process only for a limited time slice; after that the process waits for its next turn.

Terms:
- Arrival Time (AT): the time at which a process enters the ready state.
- Burst Time (BT): the CPU time required by the process.
- Process: a program in execution.

Scheduling criteria (to maximize):
- CPU utilization: the CPU remains as busy as possible.
- Throughput: the number of processes that finish their execution per unit time.

Short-term scheduler (CPU scheduler / dispatcher): selects processes from the ready queue and dispatches them to the CPU for execution.
FCFS (First Come First Serve) scheduling - example:

Process-ID | Arrival Time (AT) | Burst Time (BT)
P0         | 0                 | 2
P1         | ...               | ...
P2         | ...               | ...
P3         | ...               | ...
P4         | ...               | ...

Gantt chart: P0 | P1 | P2 | P3 | P4   (starting at 0, 2, 3, ...)

Convoy effect: if the burst time of the first job is the highest among all, the shorter jobs behind it have to wait for a long time due to that one long job.

Advantages:
- Simple; a process runs to completion in arrival order (non-preemptive).

Disadvantages:
- The problem of starvation may occur.
- Poor performance, since the average waiting time is higher as compared to other scheduling algorithms.
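A small sketch of how FCFS waiting and turnaround times can be computed; the arrival and burst values below are made up for illustration and are not the ones from the table above.

#include <stdio.h>

int main(void) {
    /* processes listed in arrival order; values are hypothetical */
    int at[] = {0, 1, 2, 3};
    int bt[] = {4, 3, 1, 2};
    int n = 4, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];   /* CPU idles until the process arrives */
        time += bt[i];                    /* runs to completion (non-preemptive) */
        int tat = time - at[i];           /* TAT = completion time - arrival time */
        int wt  = tat - bt[i];            /* WT  = TAT - BT */
        total_wt += wt;
        total_tat += tat;
        printf("P%d: CT=%d WT=%d TAT=%d\n", i, time, wt, tat);
    }
    printf("avg WT = %.2f  avg TAT = %.2f\n", total_wt / n, total_tat / n);
    return 0;
}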
Shortest Job First (SJF) scheduling:
- Non-preemptive SJF
- Preemptive SJF (Shortest Remaining Time first)

Example (non-preemptive SJF):

P-ID | AT | BT
...  | ...| ...

avg WT = ...
avg TAT = ...

Advantages:
- Maximum throughput.

Disadvantages:
- Not implementable in practice, because the exact BT per process cannot be known in advance.
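A sketch of non-preemptive SJF: at each scheduling decision the arrived, unfinished process with the smallest burst time is chosen and run to completion. The process data is hypothetical.

#include <stdio.h>

int main(void) {
    int at[] = {0, 1, 2, 3, 4};           /* arrival times (hypothetical) */
    int bt[] = {7, 5, 3, 1, 2};           /* burst times  (hypothetical) */
    int n = 5, done[5] = {0}, completed = 0, time = 0;
    double total_wt = 0, total_tat = 0;

    while (completed < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)       /* shortest job among the arrived ones */
            if (!done[i] && at[i] <= time &&
                (pick == -1 || bt[i] < bt[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }   /* nothing has arrived yet */

        time += bt[pick];                 /* run it to completion */
        int tat = time - at[pick];
        int wt  = tat - bt[pick];
        total_wt += wt;
        total_tat += tat;
        done[pick] = 1;
        completed++;
        printf("P%d: CT=%d WT=%d TAT=%d\n", pick + 1, time, wt, tat);
    }
    printf("avg WT = %.2f  avg TAT = %.2f\n", total_wt / n, total_tat / n);
    return 0;
}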
Shortest Remaining Time Next (SRT) - preemptive SJF - example:

Gantt chart: ...

P-ID | AT | BT | WT | TAT
P1   | ...| ...| ...| ...
P2   | ...| ...| ...| ...
P3   | ...| ...| ...| ...
P4   | ...| ...| ...| ...
P5   | ...| ...| ...| ...
Total          | ...| ...
avg WT = ...   avg TAT = ...

Formulas:
WT = TAT - BT
TAT = Completion time - Arrival time
Priority Scheduling:
- A method of scheduling processes based on priority: the process with the highest priority is carried out first; processes with equal priority are carried out on an FCFS basis.
- Preemptive priority scheduling: if a new process has a higher priority than the currently running one, the CPU is preempted, i.e. the running process is put back in the ready queue and the new process gets the CPU.

Example:

P-ID | AT | BT | Priority
P1   | ...| ...| ...
P2   | ...| ...| ...
P3   | ...| ...| ...
P4   | ...| ...| ...

Gantt chart: ...

P-ID | BT | WT | TAT
...  | ...| ...| ...

avg WT = ...   avg TAT = ...

Advantages:
- Higher-priority processes do not have to wait for long.
- Suitable for applications with fluctuating time and resource requirements.

Disadvantages:
- Higher-priority processes take most of the CPU time, so lower-priority processes may starve.
Round Robin (RR) scheduling:
- Preemptive version of FCFS.
- Each process is executed in a cyclic way for a given time quantum; if the process completes within that time it terminates, else it goes back to the ready queue and waits for its next turn to complete its execution.

Example (given time quantum = 4 units):

P-ID | AT | BT | WT | TAT
P1   | ...| ...| ...| ...
P2   | ...| ...| ...| ...
P3   | ...| ...| ...| ...
P4   | ...| ...| ...| ...
P5   | ...| ...| ...| ...

avg WT = ...   avg TAT = ...
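A sketch of Round Robin with a time quantum of 4 units. To keep it short, all processes are assumed to arrive at time 0 and the ready queue is modelled as a simple cyclic scan; the burst times are hypothetical.

#include <stdio.h>

int main(void) {
    int bt[] = {10, 5, 8, 3};             /* burst times (hypothetical), all arrive at t = 0 */
    int n = 4, quantum = 4;
    int rem[4], wt[4], tat[4];
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    int time = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {     /* cyclic pass over the processes */
            if (rem[i] == 0) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            time += slice;                /* run for one quantum (or less) */
            rem[i] -= slice;
            if (rem[i] == 0) {            /* finished: record its times */
                tat[i] = time;            /* arrival = 0, so TAT = completion time */
                wt[i]  = tat[i] - bt[i];  /* WT = TAT - BT */
                left--;
            }
        }
    }

    double total_wt = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        total_wt += wt[i];
        total_tat += tat[i];
        printf("P%d: WT=%d TAT=%d\n", i + 1, wt[i], tat[i]);
    }
    printf("avg WT = %.2f  avg TAT = %.2f\n", total_wt / n, total_tat / n);
    return 0;
}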
Practice problem: For the given processes, calculate the average waiting time and average turnaround time for FCFS, SJF (non-preemptive), Priority and RR scheduling.

P-ID | BT | Priority
P1   | ...| ...
P2   | ...| ...
P3   | ...| ...
P4   | ...| ...
P5   | ...| ...

(1) FCFS:
Gantt chart: P1 | P2 | P3 | P4 | P5

P-ID | CT | WT | TAT
...  | ...| ...| ...

avg WT = 6.2   avg TAT = 10.2

(2) SJF (non-preemptive):
Gantt chart: ...

P-ID | CT | WT | TAT
...  | ...| ...| ...

avg WT = 4.6   avg TAT = ...

(3) RR (Quantum = 2):
Gantt chart: ...

P-ID | BT | WT | TAT
...  | ...| ...| ...

avg WT = 7.2   avg TAT = 11.2
Synchronization and Deadlock

Concurrency:
- Execution of several processes at the same time; their executions overlap or interleave.

Types of processes:
- Independent process: execution of one process does not affect the execution of other processes.
- Cooperating process: can affect or be affected by other processes; cooperating processes communicate and share data and information with other processes.

Race condition:
- Multiple processes access and update shared data items at the same time, so the final result depends on the order of execution.

Mutual exclusion:
- A way to prohibit more than one process from being inside the critical section at the same time; a mechanism that processes can use to cooperate.

General structure of a process with a critical section:

do {
    entry section
        critical section
    exit section
        remainder section
} while (true);
Synchronization mechanisms:
- Provide synchronization among concurrently running processes.

TSL (Test and Set Lock) instruction:

while (TestAndSet(&lock))
    ;                       // busy wait
// critical section
lock = 0;                   // leave critical section

Working:
- Ensures mutual exclusion.
- Does not guarantee bounded waiting and may cause starvation.
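A minimal sketch of the test-and-set idea in standard C11, using atomic_flag, whose test-and-set operation is atomic. It illustrates the mechanism described above (busy waiting, mutual exclusion, no bounded-waiting guarantee); it is not the exact hardware TSL instruction.

#include <stdatomic.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means "unlocked" */
int shared_counter = 0;

void enter_critical(void) {
    /* spin (busy wait) while someone else holds the lock:
       test_and_set atomically sets the flag and returns its old value */
    while (atomic_flag_test_and_set(&lock))
        ;
}

void exit_critical(void) {
    atomic_flag_clear(&lock);          /* lock = 0 */
}

int main(void) {
    enter_critical();
    shared_counter++;                  /* critical section */
    exit_critical();
    printf("counter = %d\n", shared_counter);
    return 0;
}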
Semaphore:
- A semaphore S is an integer variable used to solve the critical section problem by using the following two operations:
  wait(S)
  signal(S)

Types of semaphore:
1) Counting semaphore:
   - The count represents the number of resources available.
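A hedged sketch of wait(S) and signal(S) using POSIX unnamed semaphores, where sem_wait plays the role of wait and sem_post the role of signal. The initial count of 3 is an arbitrary illustration of "number of resources available".

#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 3);    /* counting semaphore, 3 resources available (assumed) */

    sem_wait(&s);          /* wait(S): decrement; blocks if the count is 0 */
    /* ... use one resource / critical work here ... */
    sem_post(&s);          /* signal(S): increment; wakes a waiting process */

    int value;
    sem_getvalue(&s, &value);
    printf("available resources = %d\n", value);

    sem_destroy(&s);
    return 0;
}

On Linux, link with -pthread.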
Notes Compiled By Archana Naware
Asst. Professor, Comp. Engg. Dept. LTCoE
4. Memory Management
4.1.1 Memory Management Requirements
Memory management keeps track of the status of each memory location,
whether it is allocated or free. It allocates the memory dynamically to the
programs at their request and frees it for reuse when it is no longer
needed. Memory management should satisfy some requirements. These
Requirements of memory management are:
1. Relocation
a. The available memory is generally shared among a number of
processes in a multiprogramming system, so it is not possible to know
in advance which other programs will be resident in main memory at
the time of execution of his program.
b. Swapping the active processes in and out of the main memory
enables the operating system to have a larger pool of ready-to-
execute processes. When a program gets swapped out to a disk
memory, then it is not always possible to occupy the previous
memory location when it is swapped back into main memory.
Because the location may still be occupied by another process. The
process must be relocated to a different area of memory.
c. There is a possibility that program may be moved in main memory
due to swapping.
d. The figure depicts an image of a process. The process is occupying a
continuous region of main memory. The operating system keeps track
of many things including the location of process control information,
the execution stack, and the code entry. Within a program, there are
memory references in various instructions and these are called logical
addresses.
Figure: Addressing requirements for a process (process control information, entry point to program, branch instruction, reference to data, current top of stack).
e. After loading of the program into main memory, the processor and
the operating system must be able to translate logical addresses into
physical addresses. Branch instructions contain the address of the
next instruction to be executed. Data reference instructions contain
the address of the byte or word of data referenced.
2. Protection, Logical and Physical Address Space
a. An address generated by the CPU is called a logical address or virtual
address, and an address generated by the memory management unit is
called a physical address.
b. If the size of a program is 1000 and it is loaded in memory starting at address 5000,
then the physical addresses range from 5000 to 5999 (5K to 6K) and the logical addresses range from 0
to 999.
c. The logical address space is collection of all logical addresses
generated by a program and the physical address space is collection
of physical addresses corresponding to these logical addresses.Notes Compiled By Archana Naware
Asst. Professor, Comp. Engg. Dept. LTCoE
d. If binding of instructions and data to memory addresses is done at
compile time or load time, then logical and physical addresses are
same.
e. If same binding occurs at run time then logical and physical addresses
are different.
f. Relocation register (base register) contains address 5000. Physical
address (5200) can be calculated as follows.
g. Consider the logical address 200 of the same program.
h. Physical address(5200) = logical address(offset)(200) + contents of
base register(5000)
i. Mapping from virtual to physical address is done by the MMU at run time.
j. User programs deal with logical addresses. Memory mapping
hardware converts logical address into physical address.
k. Memory protection is done by base and limit registers.
l. The base register holds the smallest legal physical memory address.
m. The limit register gives the size of the range of physical addresses. If
base address is 5000 and limit register is 500 , then program can
access all addresses from 5000 to 5499. Memory protection with base
and limit registers is depicted in the following figure.
Figure: Memory protection with base and limit registers (base = 5000, base + limit = 5500).
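A small sketch of the base/limit check described above, using the numbers from the example (base = 5000, limit = 500). Modelling the protection trap as an error message is an illustration choice, not how real hardware reports it.

#include <stdio.h>

#define BASE  5000L   /* relocation (base) register */
#define LIMIT 500L    /* limit register: size of the legal range */

/* returns the physical address, or -1 to stand in for a protection trap */
long translate(long logical) {
    if (logical < 0 || logical >= LIMIT) {
        fprintf(stderr, "trap: logical address %ld outside limit\n", logical);
        return -1;
    }
    return BASE + logical;    /* physical = base + offset */
}

int main(void) {
    printf("logical 200 -> physical %ld\n", translate(200));   /* 5200 */
    translate(600);                                            /* beyond 5499: trap */
    return 0;
}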
3. Sharing
a. The protection mechanism must be flexible enough to allow several processes to
access the same portion of main memory.
b. Allowing several processes to access the same copy of a program
rather than having their own separate copies has an advantage. For
example, multiple processes may use the same system file and it is
natural to load one copy of the file in main memory and let it shared
by those processes.
c. It is the task of Memory management to allow controlled access to
the shared areas of memory without compromising the protection.
The mechanisms used to support relocation also support sharing
capabilities.
4. Logical organization
a. Main memory is organized as linear or it can be a one-dimensional
address space which consists of a sequence of bytes or words.
b. Most of the programs can be organized into modules, some of those
are read-only or execute only and some of those contain data that can
be modified.
c. To effectively deal with a user program, the operating system and
computer hardware must support a basic module to provide the
required protection and sharing. It has the following advantages:
* Modules are written and compiled independently and all the
references from one module to another module are resolved
by the system at run time.
* Different modules are provided with different degrees of
protection.
* There are mechanisms by which modules can be shared among
processes.
© Sharing can be provided on a module level that allows the user
to specify the sharing that is desired.
5. Physical organization
a. Computer memory has main memory and secondary memory. Main
memory is relatively very fast and costly as compared to the
secondary memory.
b. Main memory is volatile. Secondary memory is provided for storage of
data on a long-term basis while the main memory holds currently
used programs.
c. The major system concern between main memory and secondary
memory is the flow of information and it is impractical for
programmers to understand this for two reasons:
© The programmer may engage in a practice known as overlaying
when the main memory available for a program and its data
may be insufficient. It allows different modules to be assigned
to the same region of memory. One disadvantage is that it is
time-consuming for the programmer.
© In a multiprogramming environment, the programmer does
not know how much space will be available at the time of
coding and where that space will be located inside the
memory.
4.1.2 Memory Partitioning
a. Memory Management is responsible for allocation and management
of computer's main memory.
b. Memory Management function keeps track of the status of each
memory location, either allocated or free to ensure effective and
efficient use of primary Memory.
c. There are two Memory Management Techniques:
- Contiguous Memory Allocation
- Non-Contiguous Memory Allocation
d. In Contiguous Technique, executing process must be loaded entirely
in primary memory.
e. Contiguous Technique can be divided into:
- Fixed (or static) partitioning
- Variable (or dynamic) partitioning
Fixed (or static) partitioning
* Memory is divided into several fixed sized partitions.
* Each partition contains exactly one process. Thus degree of
multiprogramming is bound by number of partitions.
© In this method, when a partition is free, a process is selected from
the input queue and loaded into the free partition.
* When the process terminates, partition becomes available for
another process.
© Operating system keeps a table indicating which part of memory
is available and which are occupied.
* Figure below shows the concept of fixed partitioning.
© There are some advantages and disadvantages of fixed
partitioning as given below.
© First process is consuming 1MB out of 4MB in the main memory.
Hence, Internal Fragmentation in first block is (4-1) = 3MB.
* Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-
7)+(16-14) = 3+1+1+2 = 7MB
Figure: Fixed size partitions (block sizes 4 MB, 8 MB, 8 MB, 16 MB; the free space inside a block, e.g. Free = 3 MB in the 4 MB block, is internal fragmentation).
Advantages of Fixed Partitioning —
1. Easy to implement:
Algorithms needed to implement Fixed Partitioning are easy to
implement. It simply requires putting a process into certain partition
without focussing on the emergence of Internal and External
Fragmentation.
2. Little OS overhead:
Processing of Fixed Partitioning requires less excess and indirect
computational power.
Disadvantages of Fixed Partitioning —
1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small,
occupies an entire partition. This can cause internal fragmentation.
2. External Fragmentation:
The total unused space (as stated above) of various partitions cannot be
used to load the processes even though there is space available but not
in the contiguous form.
3. Limit process size:
Process of size greater than size of partition in Main Memory cannot be
accommodated. Partition size cannot be varied according to the size of
incoming process's size. Hence, process size of 32MB in given example
above is invalid.
4. Limitation on Degree of Multiprogramming:
Partition in Main Memory are made before execution or during the
system configure. Main Memory is divided into fixed number of
partition. Suppose if there are n1 partitions in RAM and n2 are the
number of processes, then n2<=n1 condition must be fulfilled. Number
of processes greater than number of partitions in RAM is invalid in Fixed
Partitioning.
Variable (or dynamic) partitioning
* Initially all memory is available for user processes and considered
as one large block called as hole.
* When a process arrives and needs memory, a hole which is large
enough is searched for this process. If such a hole is found,
needed memory is allocated and remaining memory is available
for other requests.
Figure: Dynamic partitioning - after the operating system, P1 = 2 MB (block size = 2 MB), P2 = 7 MB (block size = 7 MB), P3 = 1 MB (block size = 1 MB), and a remaining 5 MB block of empty space in RAM. Partition size = process size, so there is no internal fragmentation.
Advantages of Variable Partitioning —
1. No Internal Fragmentation:
In variable Partitioning, space in main memory is allocated according to
the need of process, hence there is no internal fragmentation. There will
be no unused space left in the partition.
2. No restriction on Degree of Multiprogramming:
More number of processes can be accommodated due to absence of
internal fragmentation. A process can be loaded until the memory is
available.
3. No Limitation on the size of the process:
In Fixed partitioning, the process with the size greater than the size of
the largest partition cannot be loaded in memory. In variable
partitioning, the process size can’t be restricted since the partition size is
decided according to the process size.
Disadvantages of Variable Partitioning —
1. Difficult Implementation
Implementing variable Partitioning is difficult as compared to Fixed
Partitioning as it involves allocation of memory during run-time rather
than during system configure.
2. External Fragmentation:
There will be external fragmentation in spite of absence of internal
fragmentation.
For example, suppose in above example- process P1(2MB) and process
P3(1MB) completed their execution. Hence two spaces are left i.e. 2MB
and 1MB. Let’s suppose process P5 of size 3MB comes. The empty space
in memory cannot be allocated as no spanning is allowed in contiguous
allocation. Process must be contiguously present in main memory to get
executed. Hence it results in External Fragmentation.
4.1.3 Memory Allocation Strategies
If there are a number of free holes in memory and a process requests
a hole, then the question is which hole is allocated to this process.
Following are the strategies used to allocate the free hole to the requesting
process.
1. First-Fit — Allocate first hole that is big enough. Searching can start
either at the beginning of the set of holes or where the previous first-
fit search ended. Searching can be stopped as soon as a free hole is
found that is large enough.
2. Best-Fit - Allocate the smallest hole that is big enough. Entire list
must be searched if the list is not kept ordered by size. This strategy
produces the smallest leftover hole.
3. Worst-Fit — Allocate the largest hole. Entire list must be searched if
the list is not kept ordered by size.
Example: Given memory partitions of 150K, 500K, 200K, 300K and 550K (in
order), how would each of the first fit, best fit, and worst fit algorithms place
the processes of 220K, 430K, 110K, 425K (in order)? Which algorithm makes
the most efficient use of memory?
Solution:
First Fit
220K is put in 500K partition.
430K is put in 550K partition.
110K is put in 150K partition.
425K must wait.
Best Fit
220K is put in 300K partition.
430K is put in 500K partition.
110K is put in 150K partition.
425K is put in 550K partition.
Worst Fit
220K is put in 550K partition.
430K is put in 500K partition.
110K is put in 330K partition.( 550K-220K=330K)
425K must wait.
Best fit makes the most efficient use of memory in this example, since it is the only strategy that places all four processes.
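A sketch that replays the example above in code: the free partitions are treated as holes that shrink when a process is placed in them (the same dynamic view the worst-fit solution uses when it places 110K in the leftover 330K). The strategy selection logic is the part being illustrated; the printout format is arbitrary.

#include <stdio.h>

#define NH 5
#define NP 4

/* place each process into a hole chosen by the given strategy;
   the chosen hole shrinks by the process size */
void allocate(const char *name, int mode) {      /* mode: 0=first, 1=best, 2=worst */
    int hole[NH] = {150, 500, 200, 300, 550};    /* free partitions, in K */
    int proc[NP] = {220, 430, 110, 425};         /* incoming processes, in K */

    printf("%s:\n", name);
    for (int p = 0; p < NP; p++) {
        int pick = -1;
        for (int h = 0; h < NH; h++) {
            if (hole[h] < proc[p]) continue;                       /* too small */
            if (pick == -1) { pick = h; if (mode == 0) break; }    /* first fit stops here */
            else if (mode == 1 && hole[h] < hole[pick]) pick = h;  /* best: smallest that fits */
            else if (mode == 2 && hole[h] > hole[pick]) pick = h;  /* worst: largest hole */
        }
        if (pick == -1)
            printf("  %dK must wait\n", proc[p]);
        else {
            printf("  %dK is put in a %dK hole\n", proc[p], hole[pick]);
            hole[pick] -= proc[p];               /* leftover becomes a smaller hole */
        }
    }
}

int main(void) {
    allocate("First fit", 0);
    allocate("Best fit", 1);
    allocate("Worst fit", 2);
    return 0;
}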
4.1.4 Paging
Paging is a memory management scheme that eliminates the need for
contiguous allocation of physical memory. This scheme permits the physical
address space of a process to be noncontiguous.
Basic Operation
© The problem of external fragmentation is avoided using paging.
© Paging is implemented by integrating the paging hardware and
operating system.
* In this technique, logical memory is divided into blocks of the same
size called pages.
© The physical memory is divided into fixed sized blocks called frames (
size is power of 2, between 512 bytes and 8192 bytes, also larger size
possible in practice). The page size and frame size is equal.
© Initially all pages remain on secondary storage. When a process is to
be executed, its pages are loaded into any available memory frames from
secondary storage.
* Figure below shows the paging model of physical and logical
memory.
Figure: Paging model of logical and physical memory (the page table maps page numbers to frame numbers).
Following basic operations are done in paging.
1. The CPU generates a logical address and it is divided into two parts: a page
number (p) and a page offset (d).
2. The page number is used as an index into a page table.
3. The base address of each page in physical memory is maintained by the
page table.
4. The combination of the base address with the page offset defines the
physical memory address that is sent to the memory unit.
5. The physical location of the data in memory is therefore at offset d in
page frame f.
Because of paging, the user program views the memory as one
single contiguous space, giving the illusion that memory contains
only this one program.
But the user program is actually spread throughout main memory (physical
memory). The logical addresses are translated into physical
addresses.
Figure below shows the operation of the paging hardware.
Example: Consider the following figure. Let page size is 4K and memory
available is 32K.
© Page 0 is in frame number 5. Page 1 is in frame number 7. Logical
address 4 is in page 1 and the offset is 0. Page 1 is mapped to frame 7.
So the physical address is (7 X 4 + 0) = 28K
© Logical address 10 is in page 2 and the offset is 2. Page 2 is mapped to frame
2.
© So the physical address is (2 X 4 + 2) = 10K
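A small sketch of the page-number/offset split used in the example. The first three page-table entries (page 0 -> frame 5, page 1 -> frame 7, page 2 -> frame 2) come from the example; the remaining entries are made up, and addresses are treated in the same K units as the notes.

#include <stdio.h>

#define PAGE_SIZE 4                     /* page size (in the example's K units) */

int page_table[] = {5, 7, 2, 6, 1};     /* page -> frame; the last two values are assumed */

int translate(int logical) {
    int p = logical / PAGE_SIZE;        /* page number */
    int d = logical % PAGE_SIZE;        /* offset within the page */
    int f = page_table[p];              /* frame number from the page table */
    return f * PAGE_SIZE + d;           /* physical address */
}

int main(void) {
    printf("logical 4  -> physical %d\n", translate(4));    /* page 1, offset 0: 7*4+0 = 28 */
    printf("logical 10 -> physical %d\n", translate(10));   /* page 2, offset 2: 2*4+2 = 10 */
    return 0;
}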
Hardware Support for Paging
1. Each operating system has its own method for storing page table.
Most operating system allocates a page table for each process.
2. A pointer to the page table is stored with other register values in the
process control block.
3. When the dispatcher is told to start a process, it must reload the user
registers and define correct hardware page table values from the
stored user page table.
4. The hardware implementation of the page table can be done in
several ways.
5. In the simplest case, the page table is implemented as a set of
dedicated registers. These registers should be built with very high
speed logic to make the paging address translation efficient.
6. Every access to memory must go through the paging map so
efficiency is a major consideration.
7. The CPU dispatcher reloads these registers just as it reloads the other
registers. Instructions to load or modify the page-table registers
are privileged, so only the OS can change the memory map.
8. The use of registers for the page table is reasonable if page table is
reasonably small.
9. Most modern computers allow the page table to be very large.
10.For these machines the use of fast registers to implement the page
table is not feasible. Rather, page table is kept in main memory and a
page table base register (PTBR) points to the page table. Changing
page tables requires only this one registers which reduces context
switch time.
11.The problem with this approach is the time required to access a user
memory location.
12.If we want to access location, we must first index into page table
using the value in the PTBR offset by the page number.
13. With this scheme two memory accesses are required: one for the page
table entry and one for the byte. Thus memory access is slowed by a factor
of 2.
14. Solution to this is use of Translation Look aside Buffer (TLB).
15. The TLB contains a few of the page table entries. When a logical address
is generated by the CPU, its page number is presented to the TLB.
16. If the page number is found, its frame number is immediately available and
used to access memory.
17. If the page number is not found, a memory reference to the page table
must be made and the frame number is obtained. The page number
and frame number are then added to the TLB so that they will be found quickly on
the next reference.
18. If the TLB is full, the OS must select one of the replacement policies.
Translation Look aside Buffer (TLB)
1. Translation look aside buffer is a special small fast lookup hardware
cache.
2. TLB is associative, high speed memory.
3. Each entry in the TLB consists of two parts: a key and a value.
4. When associative memory is presented with an item, it is compared
with all keys simultaneously. If the item is found, the corresponding
value field is returned. The search is fast but the hardware is expensive.
5. Paging hardware with TLB is shown in the following figure.
Example: For a paged system the TLB hit ratio is 0.9. Let the RAM access time
T be 100 ns and the TLB access time t be 20 ns. Find out
1. Effective memory access time with TLB
2. Effective memory access time without TLB
3. Reduction in effective access time
Solution:
1. Effective memory access time with TLB (Et)
(2T means one T to read the page table entry for the frame number and one T for the required byte in memory)
Et = (TLB hit ratio)*(T + t) + (1 - TLB hit ratio)*(2T + t)
   = 0.9*(100+20) + (1-0.9)*(2*100+20)
   = 0.9*(120) + (0.1)*(220)
   = 108 + 22
   = 130 ns
2. Effective memory access time without TLB
Ewt = 2T=2*100=200ns
3. Percentage reduction in effective access time
= ((Ewt - Et) / Ewt) * 100
= ((200 - 130) / 200) * 100
= 35%
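A small sketch of the effective-access-time arithmetic above, with the same numbers (hit ratio 0.9, T = 100 ns, t = 20 ns):

#include <stdio.h>

int main(void) {
    double hit = 0.9;      /* TLB hit ratio */
    double T   = 100.0;    /* memory access time, ns */
    double t   = 20.0;     /* TLB access time, ns */

    double et  = hit * (T + t) + (1.0 - hit) * (2.0 * T + t);   /* with TLB */
    double ewt = 2.0 * T;                                       /* without TLB */

    printf("Et  = %.1f ns\n", et);                              /* 130.0 */
    printf("Ewt = %.1f ns\n", ewt);                             /* 200.0 */
    printf("reduction = %.1f%%\n", (ewt - et) / ewt * 100.0);   /* 35.0 */
    return 0;
}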
Segmentation
Basic Method:
1. User view of a main program is a set of methods, procedures, or
functions. It may also include various data structures: objects, arrays,
stacks, variables, and so on. Each of these modules or data elements
is referred to by name.
2. Each of these segments is of variable length and the length is defined
by the purpose of the segment in the program. Elements within a
segment are identified by their offset from the beginning of the
segment: the first statement of the program, the seventh stack frame
entry in the stack, the fifth instruction of the Sqrt (), and so on.
Figure: User view of a program (a logical address space made up of segments such as subroutine, stack, symbol table, main program).
3. Segmentation is a memory-management scheme that supports this
user view of memory. A logical address space is a collection of
segments.
4. Each segment has a name and a length. The addresses specify both
the segment name and the offset within the segment. The user
therefore specifies each address by two quantities: a segment name
and an offset.
5. Segments are numbered and are referred to by a segment number,
rather than by a segment name. Thus, a logical address consists of a
two tuple: <segment-number, offset>.
6. A user program is compiled, and the compiler automatically constructs
segments reflecting the input program. A C compiler might create
separate segments for the following:
The code
Global variables
The heap, from which memory is allocated
The stacks used by each thread
The standard C library
7. Libraries that are linked in during compile time might be assigned
separate segments. The loader would take all these segments and
assign them segment numbers.
Segmentation Hardware
Figure: Segmentation hardware (the segment table holds a limit and base for each segment; if the offset is less than the limit, it is added to the base to form the physical address, otherwise trap: addressing error).
1. Segment table has a segment base and a segment limit.
2. The segment base contains the starting physical address where the
segment resides in memory , and the segment limit specifies the length of
the segment.
3. A logical address consists of two parts: a segment number, s, and an
offset into that segment, d.
4. The segment number is used as an index to the segment table. The
offset d of the logical address must be between 0 and the segment limit. If
it is not, we trap to the operating system (logical addressing attempt
beyond end of segment).
5. When an offset is legal, it is added to the segment base to produce
the address in physical memory of the desired byte. The segment table is
thus essentially an array of base-limit register pairs.
Figure: Segmentation example - a logical address space with five segments (0 through 4), the segment table giving each segment's limit and base, and the placement of the segments in physical memory.
6. As an example, consider the situation shown in above Figure. There
are five segments numbered from 0 through 4. The segments are stored in
physical memory as shown.
7. The segment table has a separate entry for each segment, giving the
beginning address of the segment in physical memory (or base) and the
length of that segment (or limit)
For example, segment 2 is 400 bytes long and begins at location 4300.
Thus, a reference to byte 53 of segment 2 is mapped onto location 4300
+53= 4353.
A reference to segment 3, byte 852, is mapped to
3200 (the base of segment 3) + 852 = 4052. A reference to byte 1222 of
segment 0 would result in a trap to the operating system, as this segment is
only 1000 bytes long.
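A sketch of the segment-table lookup for this example. Only the values actually quoted in the text (segment 2: base 4300, limit 400; segment 3: base 3200; segment 0: length 1000) are certain; the remaining numbers are assumed for illustration.

#include <stdio.h>

struct seg { int limit; int base; };

struct seg table[] = {
    {1000, 1400},   /* segment 0: 1000 bytes long (base assumed) */
    { 400, 6300},   /* segment 1: assumed                        */
    { 400, 4300},   /* segment 2: 400 bytes, begins at 4300      */
    {1100, 3200},   /* segment 3: base 3200 (limit assumed)      */
};

long translate(int s, int d) {
    if (d < 0 || d >= table[s].limit) {           /* offset must be below the limit */
        fprintf(stderr, "trap: offset %d beyond end of segment %d\n", d, s);
        return -1;
    }
    return table[s].base + d;                     /* physical = base + offset */
}

int main(void) {
    printf("segment 2, byte 53  -> %ld\n", translate(2, 53));    /* 4300 + 53  = 4353 */
    printf("segment 3, byte 852 -> %ld\n", translate(3, 852));   /* 3200 + 852 = 4052 */
    translate(0, 1222);                                          /* segment 0 is only 1000 bytes: trap */
    return 0;
}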
Virtual Memory
1. Virtual memory involves the separation of logical memory from
physical memory. This separation allows an extremely large virtual
memory to be provided for programmers when only a smaller physical
memory is available.
2. Virtual memory makes the task of programming much easier,
because the programmer no longer needs to worry about the amount of
physical memory available.
3. The address space of a process refers to the logical (or virtual) view
of how a process is stored in memory. This view is that a process begins at
a certain logical address, for example address 0, and exists in contiguous
memory. Physical memory may be organized in page frames and that the
physical page frames assigned to a process may not be contiguous. It is up
to the memory management unit (MMU) to map logical pages to physical
page frames in memory.
4. Figure below shows virtual address space. Heap can grow upward in
memory as it is used for dynamic memory allocation. Stack can grow
downward in memory through successive function calls.
5. The large blank space (or hole) between the heap and the stack is
part of the virtual address space but will require actual physical pages only
if the heap or stack grows.
6. Virtual address spaces that include holes are known as sparse
address spaces. Using a sparse address space is beneficial because the
holes can be filled as the stack or heap segments grow or if we wish to
dynamically link libraries or other shared objects during program
execution.
Figure: Virtual Address Space
Demand Paging:
1. An executable program can be loaded from disk into memory in the
following ways.
2. One way is to load the entire program into physical memory at
program execution time. The problem here is that we may not need the
entire program in memory, since some of the modules of that program
may never be required. Loading the entire program into memory loads
the executable code of all modules regardless of whether a module is
ever executed or not.
3. An alternative is to load pages only as they are needed. This
technique is known as demand paging and is commonly used in virtual
memory systems.
4. With demand paging, pages are only loaded when they are
demanded during program execution. Pages that are never accessed
are never loaded into physical memory.
5. A demand-paging system is similar to a paging system with swapping
where processes reside in secondary memory (usually a disk). When
we want to execute a process, we swap it into memory.
In demand paging, a lazy swapper swaps a page into memory only if
that page is needed. A swapper manipulates an entire process, whereas a
pager deals with individual pages; hence the term pager is used in connection with demand paging.
Figure below shows Transfer of a paged memory to contiguous disk
space.
Figure: Transfer of a paged memory to contiguous disk space (pages are swapped in and out between main memory and the disk).
Basic Concepts
1. When a process is to be swapped in, the pager finds which pages will
be used before the process is swapped out again.
2. Instead of swapping in a whole process, the pager brings only those
pages into memory.
3. Thus, it avoids reading into memory pages that will not be used
anyway, decreasing the swap time and the amount of physical
memory needed.
4. With this scheme, hardware support is required to distinguish
between the pages that are in memory and the pages that are on the
disk. The valid-invalid bit scheme is used for this purpose.
5. This time, however, when this bit is set to "valid", the associated
page is both legal and in memory. If the bit is set to "invalid", the
page is not in the logical address space of the process or is currently
on the disk.
6. The page table entry for a page that is brought into memory is set as
usual, but the page-table entry for a page that is not currently in
memory is either simply marked invalid or contains the address of
the page on disk. This situation is depicted in the figure.
Figure: Page table with valid-invalid bits (valid entries map pages to frames in physical memory; invalid entries correspond to pages that are on the disk).
7. Access to a page marked invalid causes the paging hardware to trap to
the operating system. This trap is the result of the operating system's
failure to bring the desired page into memory.
8. The procedure for handling this page fault is shown in figure.
Figure: Steps in handling a page fault (a reference causes a trap to the operating system, the page is located on the backing store, it is brought into a free frame, the page table is reset, and the interrupted instruction is restarted).
1) We check an internal table (usually kept with the process control
block) for this process to determine whether the reference was a
valid or an invalid memory access.
2) If the reference was invalid, we terminate the process. If it was
valid, but we have not yet brought in that page, we now page it in.
3) We find a free frame (by taking one from the free-frame list, for
example).
4) We schedule a disk operation to read the desired page into the
newly allocated frame. When the disk read is complete, we
modify the internal table kept with the process and the page table
to indicate that the page is now in memory.
5) We restart the instruction that was interrupted by the trap. The
process can now access the page as though it had always been in
memory.
Pure Demand Paging
1. In the extreme case, we can start executing a process with no pages
in memory.
2. When the operating system sets the instruction pointer to the first
instruction of the process, immediately page fault occurs.
3. After this page is brought into memory, the process continues to
execute. Page faults occur until every page that it requires is in
memory. At this point it can execute with no more page faults. With
this scheme a page is never brought into memory until it is required.
Page Replacement Strategies:
1. First In First Out(FIFO) page replacement algorithm
1. A FIFO replacement algorithm associates with each page the time
when that page was brought into memory. When a page must be
replaced, the oldest page is chosen.
2. We can create a FIFO queue to hold all pages in memory.
3. We replace the page at the head of the queue. When a page is
brought into memory, we insert it at the tail of the queue.
Example:
Consider a reference string for a memory with three frames.
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Figure: FIFO page replacement for this reference string with three frames.
4. There are fifteen faults altogether.
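A sketch of FIFO replacement for this reference string with three frames; because pages are replaced in round-robin order of loading, the next slot to overwrite always holds the oldest page, so a single circular index is enough.

#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frame[3] = {-1, -1, -1};       /* three empty frames */
    int next = 0, faults = 0;          /* next = slot holding the oldest page */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frame[j] == ref[i]) hit = 1;
        if (!hit) {
            frame[next] = ref[i];      /* replace the page loaded earliest */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);    /* 15 for this string */
    return 0;
}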
2. Optimal Page Replacement algorithm
1. Replace the page that will not be used for the longest period of time.
2. Example:
Consider a reference string for a memory with three frames.
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Figure: Optimal page replacement for this reference string with three frames.
3. There are nine faults altogether.
LRU Page Replacement algorithm
1. LRU replacement associates with each page the time of that page's
last use.
2. When a page must be replaced, LRU chooses the page that has not
been used for the longest period of time.
3. Example:
Consider a reference string for a memory with three frames.
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Figure: LRU page replacement for this reference string with three frames.
4. There are twelve faults altogether.
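A sketch of LRU for the same string: each frame remembers when its page was last referenced, and the frame with the oldest timestamp is the victim.

#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frame[3]    = {-1, -1, -1};
    int last_use[3] = {-1, -1, -1};    /* time of last reference per frame */
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < 3; j++)
            if (frame[j] == ref[t]) hit = j;
        if (hit >= 0) {
            last_use[hit] = t;         /* refresh its "recently used" time */
        } else {
            int victim = 0;            /* evict the least recently used frame */
            for (int j = 1; j < 3; j++)
                if (last_use[j] < last_use[victim]) victim = j;
            frame[victim] = ref[t];
            last_use[victim] = t;
            faults++;
        }
    }
    printf("LRU page faults = %d\n", faults);     /* 12 for this string */
    return 0;
}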
Example: Compare the performance of FIFO, LRU and Optimal based on
the number of page hits for the following string. Frame size = 3
String (pages) = 1, 2, 3, 4, 5, 2, 1, 3, 3, 2, 4, 5
Solution:
FIFO

Reference  | 1 | 2 | 3 | 4 | 5 | 2 | 1 | 3 | 3 | 2 | 4 | 5
Frame 0    | 1 | 1 | 1 | 4 | 4 | 4 | 1 | 1 | 1 | 1 | 1 | 5
Frame 1    |   | 2 | 2 | 2 | 5 | 5 | 5 | 3 | 3 | 3 | 3 | 3
Frame 2    |   |   | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 4 | 4
Page fault | y | y | y | y | y | y | y | y | n | n | y | y

No of Hits = 2
No of Miss = 10
LRU

Reference  | 1 | 2 | 3 | 4 | 5 | 2 | 1 | 3 | 3 | 2 | 4 | 5
Frame 0    | 1 | 1 | 1 | 4 | 4 | 4 | 1 | 1 | 1 | 1 | 4 | 4
Frame 1    |   | 2 | 2 | 2 | 5 | 5 | 5 | 3 | 3 | 3 | 3 | 5
Frame 2    |   |   | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 2
Page fault | y | y | y | y | y | y | y | y | n | n | y | y

No of Hits = 2
No of Miss = 10
Optimal

Reference  | 1 | 2 | 3 | 4 | 5 | 2 | 1 | 3 | 3 | 2 | 4 | 5
Frame 0    | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 3 | 3 | 4 | 4
Frame 1    |   | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2
Frame 2    |   |   | 3 | 4 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5
Page fault | y | y | y | y | y | n | n | y | n | n | y | n

No of Hits = 5
No of Miss = 7
Thrashing
1. Consider any process that does not have "enough" frames. If the
process does not have the number of frames it needs to support
pages in active use, it will quickly page-fault.
2. At this point, it must replace some page.
3. However, since all its pages are in active use, it must replace a page
that will be needed again right away. Consequently, it quickly faults
again, and again, and again, replacing pages that it must bring back
in immediately.
4. This high paging activity is called thrashing. A process is thrashing if it
is spending more time on paging than executing.
5. File Management
1. Overview
1.1 Definition of File
A file is named collection of related information that is recorded on secondary
storage. Files represent programs and data. Data files may be numeric, alphabetic, alphanumeric,
or binary. Files can be free-form, such as text files, or structured into bytes, lines or records. Information in the
file is defined by the creator of the file. Information can be of type source programs, object
programs, executable programs, numeric data, text, payroll records, graphics images, sound
recordings and so on.
A file has a certain defined structure according to its type.
© A text file is a sequence of characters organized into lines.
© A source file is a sequence of subroutines and functions.
© An object file is a sequence of bytes organized into blocks understandable by the system's
linker.
© An executable file is a series of code sections that the loader can bring into memory and
execute.
1.2 File Attributes
A file is named for the convenience of the user. A name is a string of characters. Some
systems differentiate between upper-case and lower-case characters in names, whereas some
systems consider them equivalent. When a file is named, it becomes independent of the process,
the user, even the system that created the file. A file typically consists of the following
attributes.
1. Name - human readable form.
2. Identifier - a unique tag, usually a number; it is the non-human-readable name of the file.
3. Type - this information is needed for systems that support different file types.
4. Location - a pointer to a device and to the location of the file on that device.
5. Size - the current size of the file, in bytes, words or blocks.
6. Protection - access control information such as read, write, execute permissions.
7. Time, date and user identification - this information may be kept for creation, last
modification and last use. The information is needed for protection, security and usage
monitoring.
1.3 File Operations
A file is an abstract data type. The operating system provides system calls to perform file
operations. The file operations are as follows.
1. Creating a file —Two steps are necessary to create a file.
a. Space must be found for the file.
b. An entry for new file must be made in the directory.
2. Writing a file - to write to a file, a system call is made specifying the name of the file
and the information to be written to the file. Using this name, the system searches
the directory to find the location of the file. The system keeps a write pointer to the location
in the file where the next write is to take place.
3. Reading a file - to read from a file, a system call is made which specifies the name of the
file and where the next block of the file should be put. The directory is searched for the
specified file name and the system keeps a read pointer to the location in the file where the
next read is to take place. After reading, the read pointer is updated.
4. Repositioning within a file - the directory is searched for the specified file name and
the current file position is set to the given value. Repositioning within a file is also
called a file seek.
5. Deleting a file - the directory is searched for the specified file name and, after the file
is found, all file space is released so that it can be reused by other files.
6. Truncating a file - to erase some contents of a file, the user truncates the file instead of
deleting it. In this operation only the length is reduced, keeping all other attributes the same.
1.4 File Types
File type is implemented as a part of the file name. The name is split into two parts -
name and extension, separated by a period character. The user and the operating system can tell the
type of the file from its name.
file type      | usual extension          | function
executable     | exe, com, bin or none    | ready-to-run machine-language program
object         | obj, o                   | compiled, machine language, not linked
source code    | c, cc, java, pas, asm, a | source code in various languages
batch          | bat, sh                  | commands to the command interpreter
text           | txt, doc                 | textual data, documents
word processor | wp, tex, rtf, doc        | various word-processor formats
library        | lib, a, so, dll          | libraries of routines for programmers
print or view  | ps, pdf, jpg             | ASCII or binary file in a format for printing or viewing
archive        | arc, zip, tar            | related files grouped into one file, sometimes compressed, for archiving or storage
multimedia     | mpeg, mov, rm, mp3, avi  | binary file containing audio or A/V information

Common File types
2. File Organization and Access Methods
2.1 File Structure
A File Structure should be according to a required format that the operating system can
understand. An object file is a sequence of bytes organized into blocks that are
understandable by the system. When the operating system defines different file structures, it also
contains the code to support these file structures.
Internal File Structure
Files can be structured in several ways. Three common possibilities are depicted in the
following Figure.
a) Sequence of bytes
b) Sequence of records
c) Tree structure

Figure: Internal file structure - (a) sequence of bytes, (b) sequence of records, (c) tree.
1. Internally, locating an offset within a file can be complicated for the OS.
2. Disk systems typically have a well-defined block size determined by the size of a
sector. All disk I/O is performed in units of one block (physical record), and all blocks
are the same size.
3. It is unlikely that the physical record size will exactly match the length of the desired
logical record. Packing a number of logical records into physical blocks is a common
solution to this problem. For example, the UNIX OS defines all files to be simply
streams of bytes. Each byte is individually addressable by its offset from the
beginning (or end) of the file. In this case, the logical record size is 1 byte. The file
system automatically packs and unpacks bytes into physical disk blocks -say, 512
bytes per block- as necessary.
4. The file may be considered to be a sequence of blocks. All the basic I/O operations
occur in terms of blocks.
5. Because disk space is always allocated in blocks, some portion of the last block of
each file is generally wasted. If each block were 512 bytes, for example, then a file of
1,949 bytes would be allocated four blocks (2,048 bytes); the last 99 bytes would be
wasted.
2.2 Access Methods
Information in the file can be accessed in several ways and those are discussed below.
1. Sequential access
i) It is the simplest access method. Information in the file is processed in order, one record
after the other.
ii) This mode of access is the most common access method. For example, editors and
compilers usually access files in this fashion.
iii) A read operation reads the next portion of the file and automatically advances a file pointer
which keeps track of the I/O location.
iv) A write appends to the end of the file and advances to the end of the newly written
contents. Sequential access is based on a tape model of a file and works as well on sequential-
access devices as it does on random-access devices.
v) Sequential access is as shown in the following figure.
Figure: Sequential-access file (beginning, current position, end; rewind and read/write operations).
2. Direct Access
i) Another method is the direct access method, also known as the relative access method. A file is
made of fixed-length logical records that allow programs to read and write records
rapidly in no particular order. Direct access is based on the disk model of a file, since a
disk allows random access to any file block.
ii) For direct access, the file is viewed as a numbered sequence of block or record. Thus,
we may read block 14 then block 59 and then we can write block 17.
iii) There is no restriction on the order of reading and writing for a direct access file.
iv) Direct access is useful for immediate access to large amounts of information;
databases are often of this type of access.
v) A block number provided by the user to the operating system is normally a relative block
number. The first relative block of the file is 0 and then 1 and so on.
vi) For direct access method file operations must be modified to include block number as a
parameter. Thus read next is modified as read n and write next is modified as write n
where n is a block number.
vii) A block number is usually a relative block number. A relative block number is an index
relative to the beginning of the file. Thus the first relative block is 0, the second is 1 and so on.
Thus, given a logical record length L, a request for record N is turned into an I/O request for
L bytes starting at location L * (N - 1) within the file (assuming the first record is N = 1).
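A tiny sketch of the record-to-offset arithmetic above, with records numbered from 1 as in the formula; the 16-byte record length matches the UPC example in the next section.

#include <stdio.h>

/* byte offset of record N in a file of fixed-length records of size L,
   assuming the first record is N = 1 */
long record_offset(long L, long N) {
    return L * (N - 1);
}

int main(void) {
    long L = 16;                                                  /* 16-byte records */
    printf("record 1 starts at byte %ld\n", record_offset(L, 1)); /* 0  */
    printf("record 4 starts at byte %ld\n", record_offset(L, 4)); /* 48 */
    return 0;
}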
3. Index files
i) These methods involve the construction of an index for the file
ii) Index contains pointers to the various blocks.
iii) To find a record in a file, index is searched and then pointer is used to access the file
directly and to find the desired record.
iv) Example: Consider a retail-price file containing Universal Product Code (UPC) for
items with associated prices. UPC is 10 digits and price is 6 digits, so each record is 16 bytes.
Consider a disk containing 1024 bytes per block; each block contains 64 records. A file
containing 120,000 records occupies about 2,000 blocks (2M bytes).
v) If the file is sorted by UPC, index can be defined consisting of the first UPC in each
block. Thus index would have 2000 entries each of which is 10 digits. To find price of a
particular item, index is searched and the block containing the desired record if found.
vi) With large files the index file itself becomes too large. The solution is to create an index to the index file.
The primary index file contains pointers to secondary index files, which point to the actual data
items.
vii) Figure below is the example of index and relative files.
Figure: Example of index and relative files (an index of last names pointing to logical record numbers in the relative file).
3. File Directories and Structure
A directory is a container that is used to contain folders and file. It organizes files and folders
into a hierarchical manner.
There are several logical structures of a directory, these are given below.
1. Single-level directory
i) The single-level directory is the simplest directory structure. In it all files are contained in the
same directory, which makes it easy to support and understand.
ii) A single-level directory has a significant limitation, however, when the number of
files increases or when the system has more than one user. Since all the files are in
the same directory, they must have unique names. If two users call their data files
test, then the unique-name rule is violated.
iii) The single-level directory structure is as shown in the following figure.
Figure: Single-level directory (one directory containing all files).
Advantages:
a. Since it is a single directory, its implementation is very easy.
b. If the files are smaller in size, searching will become faster.
c. Operations like file creation, searching, deletion and updating are very easy in such a
directory structure.
Disadvantages:
a. There may be a chance of name collision because two files cannot have the same name.
b. Searching will become time-consuming if the directory is large.
c. In this structure we cannot group files of the same type together.
2. Two-level directory
i) As we have seen, a single-level directory often leads to confusion of file names among
different users. The solution to this problem is to create a separate directory for each user.
ii) In the two-level directory structure, each user has their own user files directory
(UFD).
iii) The UFDs have similar structures, but each lists only the files of a single user.
iv) The system's master file directory (MFD) is searched whenever a user logs in.
The MFD is indexed by username or account number, and each entry points to the UFD
for that user.
v) The two-level directory structure is as shown in the following figure.
Figure: Two-level directory (a root/master directory, user directories A, B, C, and the files under each user directory).
Advantages:
a. We can give full path like /User-name/directory-name/.
b. Different users can have same directory as well as file name.
c. Searching of files becomes easier due to path names and user grouping.
Disadvantages:
a. A user is not allowed to share files with other users.
b. Still, it is not very scalable; two files of the same type cannot be grouped together for
the same user.
3. Tree structured directory
i) Once we have seen a two-level directory as a tree of height 2, the natural generalization
is to extend the directory structure to a tree of arbitrary height.
ii) This generalization allows users to create their own subdirectories and to organize
their files accordingly.
iii) A tree structure is the most common directory structure. The tree has a root directory,
and every file in the system has a unique path name.
iv) A tree-structured directory is as shown in the figure below.
Advantages:
i) Very general, since a full path name can be given.
ii) Very scalable; the probability of name collision is less.
iii) Searching becomes very easy; we can use both absolute and relative paths.
Disadvantages:
i) Every file does not fit into the hierarchical model; files may need to be saved into multiple
directories.
ii) We cannot share files.
iii) It is inefficient, because accessing a file may require going through multiple directories.
4, Acyclic graph directory
i) An acyclic graph is a graph with no cycles and allows sharing of subdirectories and files.
ii) The same file or subdirectories may be in two different directories.
iii) It is a natural generalization of the tree-structured directory.
iv) It is used in situations such as when two programmers are working on a joint project
and need to access the same files. The associated files are stored in a subdirectory, separating
them from other projects and files of other programmers. Since they are working on a joint
project, they want the subdirectory to appear in both of their own directories. The common
subdirectory should be shared, so here we use acyclic-graph directories.
v) It is important to note that a shared file is not the same as a copy of the file. If any
programmer makes changes in the shared subdirectory, the change is reflected in both directories.
vi) An acyclic graph directory is as shown in the figure below.
[Figure: Acyclic graph directory]
Advantages:
a. We can share files.
b. Searching is easy because multiple paths can lead to the same file.
Disadvantages:
a. We share the files via links; in the case of deletion this may create problems.
b. If the link is a soft link, then after deleting the file we are left with a dangling pointer.
c. In the case of a hard link, to delete a file we have to delete all the references associated
with it.
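The soft-link vs. hard-link behaviour described above can be tried directly with the standard
POSIX calls link(), symlink() and unlink(). The short program below is only a demonstration
sketch (the file names are made up and error handling is minimal); after the original name is
removed, the hard link still reaches the data while the soft link is left dangling.

/* Demonstration sketch (POSIX, run in a scratch directory) of hard vs. soft links. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("original.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "shared data\n", 12);
    close(fd);

    link("original.txt", "hard.txt");      /* hard link: another reference    */
    symlink("original.txt", "soft.txt");   /* soft link: stores the path name */

    unlink("original.txt");                /* delete the original name        */

    /* The hard link still reaches the data; the soft link now dangles. */
    printf("hard.txt readable: %s\n", access("hard.txt", R_OK) == 0 ? "yes" : "no");
    printf("soft.txt readable: %s\n", access("soft.txt", R_OK) == 0 ? "yes" : "no");

    unlink("hard.txt");                    /* last reference: data is freed   */
    unlink("soft.txt");                    /* remove the dangling pointer     */
    return 0;
}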
5. General graph directory
i) In a general graph directory structure, cycles are allowed within the directory structure,
and multiple directories can be derived from more than one parent directory.
ii) The main problem with this kind of directory structure is to calculate total size or space
that has been taken by the files and directories.
iii) A general graph directory is as shown in the figure below.
Advantages
i) It allows cycles.
ii) It is more flexible than other directory structures.
Disadvantages
i) It is more costly than the others.
ii) It needs garbage collection.
4. File Allocation Methods:
File allocation methods define how the files are stored in the disk blocks. There are three
methods.
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation
The main idea behind these methods is to provide:
a. Efficient disk space utilization.
b. Fast access to the file blocks.
4.1 Contiguous Allocation
1. Each file occupies a contiguous set of blocks on the disk. For example, if a file
requires 5 blocks and is given a block b as the starting location, then the blocks
assigned to the file will be: b, b+1, b+2, b+3, b+4.
This means that given the starting block address and the length of the file we can
determine the blocks occupied by the file. The directory entry for a file with
contiguous allocation contains
• Address of the starting block
• Length of the allocated portion.
The file 'list' in the following figure starts from block 28 with length = 4 blocks.
Therefore, it occupies blocks 28, 29, 30 and 31.
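A tiny sketch of this bookkeeping, using an invented dir_entry structure, shows why contiguous
allocation makes direct access trivial: the address of logical block k is simply start + k.

/* Minimal sketch (assumed structure names) of contiguous allocation: the
 * directory entry keeps only the start block and the length. */
#include <stdio.h>

struct dir_entry { const char *name; int start; int length; };

/* Return the disk block holding logical block k of the file, or -1. */
static int block_of(const struct dir_entry *e, int k)
{
    return (k >= 0 && k < e->length) ? e->start + k : -1;
}

int main(void)
{
    struct dir_entry list = { "list", 28, 4 };   /* occupies blocks 28..31 */
    for (int k = 0; k < list.length; k++)
        printf("logical block %d -> disk block %d\n", k, block_of(&list, k));
    return 0;
}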
Advantages:
1. Sequential and direct access are supported by this method. For direct access, the address
of the kth block of a file which starts at block b can easily be obtained as (b+k).
This is extremely fast since the number of seeks is minimal because of the contiguous
allocation of file blocks.
Disadvantages:
1. This method suffers from internal and external fragmentation. This makes it
inefficient in terms of memory utilization.
2. Increasing file size is difficult because it depends on the availability of contiguous
memory at a particular instance.
4.2 Linked List Allocation
In this scheme, each file is a linked list of disk blocks which can be scattered anywhere on the
disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
Example: The file 'jeep' in the following diagram shows how the blocks are randomly
distributed. The last block (25) contains -1, indicating a null pointer, and does not point to any
other block.
Directory entry for 'jeep': start = 9, end = 25.
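A small sketch of this traversal is shown below. The chain of intermediate blocks
(9 -> 16 -> 1 -> 10 -> 25) is assumed for illustration; the notes only give the start (9) and
end (25) blocks. The point is that reaching logical block k means following k pointers from the
start block.

/* Minimal sketch of linked allocation: each disk block stores the number of
 * the next block (-1 marks the end). The tiny "disk" here is simulated and
 * the intermediate block numbers are assumed. */
#include <stdio.h>

#define DISK_BLOCKS 32

/* next[b] = block that follows block b in its file, or -1 at the end. */
static int next[DISK_BLOCKS];

/* Walk k links from the starting block; returns -1 if the file is shorter. */
static int block_of(int start, int k)
{
    int b = start;
    while (k-- > 0 && b != -1)
        b = next[b];
    return b;
}

int main(void)
{
    /* Assumed chain for the file 'jeep': 9 -> 16 -> 1 -> 10 -> 25. */
    for (int i = 0; i < DISK_BLOCKS; i++) next[i] = -1;
    next[9] = 16; next[16] = 1; next[1] = 10; next[10] = 25; next[25] = -1;

    for (int k = 0; k < 5; k++)
        printf("logical block %d of jeep -> disk block %d\n", k, block_of(9, k));
    return 0;
}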
Advantages:
1. This is very flexible in terms of file size. File size can be increased easily since the
system does not have to look for the availability of contiguous memory.
2. This method does not suffer from external fragmentation. This makes it relatively
better in terms of memory utilization.
Disadvantages:
1. Because the file blocks are distributed randomly on the disk, a large number of seeks
are needed to access every block individually. This makes linked allocation slower.
2. It does not support random or direct access. We cannot directly access the blocks of a
file. Block k of a file can be reached only by traversing k blocks sequentially (sequential
access) from the starting block of the file via block pointers.
3. Pointers required in the linked allocation cause some extra overhead.
4.3 Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all
the blocks occupied by a file.
Each file has its own index block. The ith entry in the index block contains the disk
address of the ith file block.
The directory entry contains the address of the index block, as shown in the following
figure.
Directory entry for 'jeep': index block = 19.
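The index-block idea can be sketched as follows. The index block number (19) comes from the
example above, while the data block numbers stored inside it are assumed; the point is that
logical block k is found with a single array lookup, which is why indexed allocation supports
direct access.

/* Minimal sketch of indexed allocation: one index block per file holds the
 * pointers to all of the file's data blocks (values are illustrative). */
#include <stdio.h>

#define PTRS_PER_INDEX 8          /* pointers that fit in one index block */

struct index_block { int ptr[PTRS_PER_INDEX]; };   /* -1 = unused entry */

/* Direct access: one lookup in the index block, no sequential traversal. */
static int block_of(const struct index_block *ib, int k)
{
    return (k >= 0 && k < PTRS_PER_INDEX) ? ib->ptr[k] : -1;
}

int main(void)
{
    /* Index block 19 of 'jeep' listing its data blocks (assumed values). */
    struct index_block ib19 = { { 9, 16, 1, 10, 25, -1, -1, -1 } };

    printf("logical block 3 of jeep -> disk block %d\n", block_of(&ib19, 3));
    return 0;
}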
Advantages:
1. This supports direct access to the blocks occupied by the file and therefore provides
fast access to the file blocks.
2. It overcomes the problem of external fragmentation.
Disadvantages:
1. The pointer overhead for indexed allocation is greater than linked allocation.
2. For very small files, say files that span only 2-3 blocks, indexed allocation
would keep one entire block (the index block) for the pointers, which is inefficient in
terms of memory utilization. However, in linked allocation we lose the space of only
1 pointer per block.
3. For files that are very large, a single index block may not be able to hold all the
pointers.
The following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for holding
the pointers. Every index block would then contain a pointer or the address to the next
index block.
2. Multilevel index: In this policy, a first level index block is used to point to the
second-level index blocks, which in turn point to the disk blocks occupied by the file.
This can be extended to 3 or more levels depending on the maximum file size.
3. Combined Scheme:
1. In this scheme, a special block called the Inode (Information Node) contains all the
information about the file such as the name, size, authority, etc., and the remaining
space of the Inode is used to store the disk block addresses of the blocks which contain
the actual file data, as shown in the following figure.
[Figure: Inode showing direct blocks and indirect (single, double, triple) block pointers]
2. The first few of these pointers in the Inode point to the direct blocks, i.e. the pointers
contain the addresses of the disk blocks that contain the data of the file.
3. The next few pointers point to indirect blocks. Indirect blocks may be single indirect,
double indirect or triple indirect.
4. A single indirect block is a disk block that does not contain the file data but the disk
addresses of the blocks that contain the file data. Similarly, double indirect blocks do
not contain the file data but the disk addresses of the blocks that contain the addresses of
the blocks containing the file data (a block-mapping sketch follows).
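A small sketch of how such a combined scheme classifies a logical block number is given below.
The parameters (12 direct pointers, 1024 pointers per indirect block) are assumptions chosen
only for the example, not values taken from these notes.

/* Minimal sketch, under assumed parameters, of the combined (inode) scheme:
 * the first few pointers are direct, then one single-, one double- and one
 * triple-indirect block extend the reachable file size. */
#include <stdio.h>

#define NDIRECT   12              /* direct pointers kept in the inode       */
#define PER_BLOCK 1024            /* pointers that fit in one indirect block */

static const char *level_of(long k)
{
    if (k < NDIRECT)                                     return "direct";
    k -= NDIRECT;
    if (k < (long)PER_BLOCK)                             return "single indirect";
    k -= PER_BLOCK;
    if (k < (long)PER_BLOCK * PER_BLOCK)                 return "double indirect";
    k -= (long)PER_BLOCK * PER_BLOCK;
    if (k < (long)PER_BLOCK * PER_BLOCK * PER_BLOCK)     return "triple indirect";
    return "beyond maximum file size";
}

int main(void)
{
    long samples[] = { 0, 11, 12, 1035, 1036, 1049612 };
    for (int i = 0; i < 6; i++)
        printf("logical block %ld -> %s pointer\n", samples[i], level_of(samples[i]));
    return 0;
}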
5. File Sharing
File sharing is very important for users who want to collaborate and reduce the effort
required to achieve a computing goal. Several aspects of file sharing are
discussed below.
5.1 Multiple Users
i) In multiuser systems, the system can allow a user to access the files of other users by default,
or it may require that a user explicitly grant access to the files.
ii) To implement sharing and protection, the system must maintain more file and directory
attributes than a single-user system.
iii) Most systems use the concept of file/directory owner and group. The owner can change
attributes, grant access, and has the most control over the file or directory. The group attribute
defines a subset of users who share access to the file.
iv) To implement owner attributes, a list of user names and associated user identifiers (user
IDs) is maintained. These numerical UIDs are unique per user.
v) The UID is associated with all of a user's processes and threads. When a user views a process,
the UID is translated back to the user name via the user-name list.
vi) Group functionality is implemented as system-wide group names and group identifiers.
Every user can be in one or more groups. The user's group IDs are included in every
associated process and thread.
vii) When a user requests an operation on a file, the UID is compared with the owner attribute;
likewise, the group IDs can be compared. In this way the applicable permissions are determined.
Those permissions are applied to the requested operation, and the requested operation is accepted
or denied (as sketched below).
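A minimal sketch of this permission check is given below. The structure and function names are
invented for the example, and the mode bits follow the usual UNIX owner/group/other layout; real
systems add further checks (for example, the superuser or access-control lists).

/* Minimal sketch of a UNIX-style permission check (assumed structures). */
#include <stdio.h>

struct file_attr { unsigned owner_uid, group_gid, mode; };  /* mode in octal */

/* request: 4 = read, 2 = write, 1 = execute (may be OR-ed together). */
static int allowed(const struct file_attr *f, unsigned uid, unsigned gid,
                   unsigned request)
{
    unsigned bits;
    if (uid == f->owner_uid)      bits = (f->mode >> 6) & 7;  /* owner class */
    else if (gid == f->group_gid) bits = (f->mode >> 3) & 7;  /* group class */
    else                          bits = f->mode & 7;         /* others      */
    return (bits & request) == request;
}

int main(void)
{
    struct file_attr f = { 1000, 100, 0640 };   /* rw- r-- --- (illustrative) */

    printf("owner write: %s\n", allowed(&f, 1000, 100, 2) ? "granted" : "denied");
    printf("group write: %s\n", allowed(&f, 1001, 100, 2) ? "granted" : "denied");
    printf("other read : %s\n", allowed(&f, 1002, 101, 4) ? "granted" : "denied");
    return 0;
}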
5.2 Remote File Systems
i) Networking allows sharing of resources within a campus or around the world. Data in
the form of files is one of the resource types.
ii) File-sharing methods have changed with the evolution of networking. In the first method, users
manually transfer files between machines via programs like ftp. Ftp is used for
both anonymous and authenticated access. Anonymous access allows a user to transfer
files without having an account on the remote system.
iii) The second method is the distributed file system, in which remote directories are accessible
from local machines. There is tighter integration between the machine accessing
the remote files and the machine providing the files.
iv) The third method is the World Wide Web, in which a browser is needed to gain access to remote
files and separate operations are used to transfer files. The WWW uses anonymous file
exchange exclusively.
Linux Virtual File System
1) The Linux virtual file system (VFS) is based on object-oriented principles. It has two
components: the first component is a set of definitions that define what a file object is allowed
to look like, and the second component is a layer of software to manipulate those objects.
2) The three main object types are the inode object and the file object, which represent
individual files, and the file-system object, which represents an entire file system.
3) For each object, the VFS defines a set of operations that must be implemented by that
structure. Each object contains a pointer to a function table. The function table lists the
addresses of the actual functions that implement those operations for that particular
object (a sketch appears after this list).
4) The VFS does not need to know whether an inode represents a networked file, a disk file,
a network socket or a directory file.
5) The file-system object represents a connected set of files that forms a self-contained directory
hierarchy. The kernel maintains a single file-system object for each disk mounted as a file
system and for each networked file system currently connected.
6) The file-system object's main responsibility is to give access to inodes. The file-system object
returns the inode with a particular inode number to the VFS, and the VFS identifies every inode
by a unique (file system, inode number) pair.
7) The inode object and the file object are used to access files. The inode object represents the
file as a whole, whereas the file object represents a point of access into the file.
8) A process cannot access an inode's data contents without first obtaining a file object pointing
to the inode.
9) The file object keeps track of the location in the file where the process is currently reading or
writing. It remembers whether the process asked for write permission when the file was
opened and also keeps track of the process's activity.
10) File objects belong to a single process, but inode objects do not. All cached file data are
linked onto a list in the file's inode object. The inode also maintains the
file's standard information, such as owner, size, and time most recently modified.
11) Directories are treated differently from files. The UNIX programming interface defines a number
of operations on directories, such as creating, deleting, and renaming. Directory operations
do not require that the user open the files concerned.
12) The VFS defines these operations in the inode objects rather than in the file objects.
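A minimal sketch of the function-table idea from point 3 is shown below. The structures and
functions (inode_ops, diskfs_read_block, vfs_read_block) are invented for the example and are far
simpler than the real Linux definitions; the point is that the generic layer calls through a
pointer table without knowing which concrete file system implements the operation.

/* Minimal sketch (assumed names, not the real Linux VFS) of objects that
 * carry a pointer to a table of function pointers. */
#include <stdio.h>

struct inode;                                    /* forward declaration       */

struct inode_ops {                               /* per-object function table */
    int (*read_block)(struct inode *ino, int block);
};

struct inode {
    int inode_no;
    const struct inode_ops *ops;                 /* pointer to the table      */
};

/* One concrete implementation, e.g. for an on-disk file system. */
static int diskfs_read_block(struct inode *ino, int block)
{
    printf("diskfs: inode %d, reading block %d\n", ino->inode_no, block);
    return 0;
}

static const struct inode_ops diskfs_ops = { diskfs_read_block };

/* Generic layer: works for any file system that filled in the table. */
static int vfs_read_block(struct inode *ino, int block)
{
    return ino->ops->read_block(ino, block);
}

int main(void)
{
    struct inode ino = { 42, &diskfs_ops };
    return vfs_read_block(&ino, 0);
}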