VIRTUAL MEMORY
PRESENTED BY:
APARNA C BHADRAN
PGCS S1
INTRODUCTION
• Each process has its own address space.
• It would be too expensive to give every process a full address space of physical memory.
• Virtual memory:
– Divides physical memory into blocks.
– Allocates those blocks to different processes.
• Protection scheme:
– Restricts a process to only the blocks that belong to that process.
• Virtual memory reduces the time to start a program:
– Not all of the code and data has to be in physical memory before execution begins.
• If a program grew too large for physical memory:
– It used to be the programmer's job to make it fit.
– Virtual memory was invented to relieve programmers of this burden.
• Virtual memory also supports a relocation mechanism:
– It allows the same program to run at any location in physical memory.
• Virtual memory's page or segment corresponds to a cache block.
• A page fault or address fault corresponds to a cache miss.
• Memory mapping (address translation):
– The processor produces a virtual address.
– The virtual address is translated to a physical address, which is used to access main memory (sketched below).
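A minimal sketch of that flow in C; virt_to_phys and physical_read are hypothetical placeholders standing in for the translation hardware and main memory (the later slides fill in how the translation itself works):

    #include <stdint.h>

    uint64_t virt_to_phys(uint64_t vaddr);   /* hypothetical: page table / TLB lookup    */
    uint8_t  physical_read(uint64_t paddr);  /* hypothetical: access to main memory      */

    /* Every processor memory reference conceptually takes these two steps:
       translate the virtual address, then access memory with the physical one. */
    uint8_t load_byte(uint64_t vaddr) {
        uint64_t paddr = virt_to_phys(vaddr);    /* memory mapping (address translation)  */
        return physical_read(paddr);             /* physical address accesses main memory */
    }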
Differences between cache and VM
• Replacement on a cache miss is controlled primarily by hardware, whereas virtual memory replacement is controlled by the operating system.
• The size of the processor address determines the size of virtual memory, whereas cache size is independent of the processor address size.
• Virtual memory systems fall into two classes:
– Paged: fixed-size blocks (pages).
– Segmented: variable-size blocks (segments).
PAGED ADDRESSING
• A single, fixed-size address.
• Divided into a page number and an offset within the page.
SEGMENTED ADDRESSING
• A variable size is required.
• One word for the segment number.
• One word for the offset within the segment.
• Two words in total (see the sketch below).
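One way to picture the two address forms in C; the 12 offset bits and 32-bit words are assumptions for illustration, not values from the slides:

    #include <stdint.h>

    #define OFFSET_BITS 12u                  /* assumed 4 KB pages                     */
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    /* Paged addressing: one fixed-size address; the page number and offset
       are just bit fields of the same word.                                  */
    static inline uint32_t page_number(uint32_t vaddr) { return vaddr >> OFFSET_BITS; }
    static inline uint32_t page_offset(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

    /* Segmented addressing: effectively two words, because the offset field
       must be large enough for the biggest possible segment.                 */
    typedef struct {
        uint32_t segment;                    /* word 1: segment number                 */
        uint32_t offset;                     /* word 2: offset within the segment      */
    } segmented_addr_t;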
DIFFERENCE BETWEEN PAGE AND SEGMENT
• Because of the replacement problem, few computers today use pure segmentation.
• Hybrid approach:
– Called paged segments.
– A segment is an integral number of pages.
– Memory need not be contiguous.
– The full segment need not be in main memory.
PAGE TABLE
• Paging and segmentation both rely on a data structure that is indexed by the page or segment number.
• The data structure contains the physical address of the block.
• Segmentation:
– The offset is added to the segment's physical address to obtain the final physical address.
• Paging:
– The offset is simply concatenated to the physical page address.
• The page table is indexed by the virtual page number.
• Size of the table = number of pages in the virtual address space (see the sketch below).
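A minimal paged-translation sketch in C, under assumed parameters (4 KB pages, a flat in-memory page table, no protection checks):

    #include <stdbool.h>
    #include <stdint.h>

    #define OFFSET_BITS 12u
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    typedef struct {
        uint64_t frame;                 /* physical page frame number        */
        bool     valid;                 /* page currently in physical memory */
    } pte_t;

    /* Index the page table by the virtual page number, then concatenate the
       frame number with the unchanged page offset.  Returns false on a page
       fault (valid bit clear), in which case the OS would fetch the page.   */
    bool pt_translate(const pte_t *page_table, uint64_t vaddr, uint64_t *paddr) {
        uint64_t vpn    = vaddr >> OFFSET_BITS;
        uint64_t offset = vaddr & (PAGE_SIZE - 1);

        if (!page_table[vpn].valid)
            return false;               /* page fault */

        *paddr = (page_table[vpn].frame << OFFSET_BITS) | offset;
        return true;
    }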
REPLACEMENT ON A VIRTUAL MEMORY MISS
• An LRU (least recently used) scheme is used for replacement.
• Processors provide a use bit or reference bit.
• The bit is set whenever the page is accessed; the OS uses it to approximate LRU (a sketch follows below).
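One common way an OS turns the reference bit into an approximate-LRU policy is the clock (second-chance) algorithm; the slides do not name a specific algorithm, so this C sketch is an illustrative assumption:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        bool use;                       /* reference bit, set by hardware on access */
        /* frame number, dirty bit, ... omitted                                     */
    } frame_info_t;

    /* Clock (second-chance) victim selection: sweep the frames, clearing use
       bits as we go; the first frame whose use bit is already clear is evicted. */
    size_t choose_victim(frame_info_t *frames, size_t nframes, size_t *hand) {
        for (;;) {
            size_t i = *hand;
            *hand = (*hand + 1) % nframes;
            if (!frames[i].use)
                return i;               /* not recently referenced: evict              */
            frames[i].use = false;      /* referenced: clear bit, give a second chance */
        }
    }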
Techniques for fast address translation
• With paging, each memory reference logically takes two accesses:
– one memory access to obtain the physical address,
– a second access to get the data.
• Translations can be kept in a special cache, so that a memory access rarely requires a second access to translate the address.
• This special address translation cache is called a Translation Lookaside Buffer (TLB) or Translation Buffer (TB).
TLB
• A TLB entry is like a cache entry.
• The tag holds a portion of the virtual address.
• The data portion holds:
– the physical page frame number,
– a protection field,
– a valid bit,
– use and dirty bits (laid out as a struct below).
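Expressed as a C struct; the field widths are assumptions for illustration only:

    #include <stdbool.h>
    #include <stdint.h>

    /* One TLB entry: a tag compared against the virtual page number, plus a
       data portion copied from the corresponding page table entry.          */
    typedef struct {
        uint64_t tag;                   /* portion of the virtual address (virtual page number) */
        uint64_t frame;                 /* physical page frame number                           */
        uint8_t  protection;            /* read/write/execute permission bits                   */
        bool     valid;                 /* entry holds a live translation                       */
        bool     use;                   /* reference bit, for replacement                       */
        bool     dirty;                 /* page has been written                                */
    } tlb_entry_t;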
• Steps 1 and 2:
– Translation begins by sending the virtual address to all tags.
– A tag must be marked valid to allow a match.
• Step 3:
– The matching tag sends the corresponding physical address through a 40:1 multiplexer.
• Step 4:
– The page offset is combined with the physical page frame to form the full physical address (modelled in the sketch below).
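A software model of those four steps for a small fully associative TLB; real hardware compares all tags in parallel, the 40-entry size is taken from the 40:1 multiplexer above, and everything else is assumed:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 40u             /* matches the 40:1 multiplexer above */
    #define OFFSET_BITS 12u
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    typedef struct {                    /* trimmed-down entry: tag, frame, valid bit */
        uint64_t tag, frame;
        bool     valid;
    } tlb_line_t;

    /* Steps 1-2: present the virtual page number to every valid tag (the loop
       only models the parallel hardware compare).  Step 3: the matching entry
       supplies the physical frame.  Step 4: combine the frame with the page
       offset to form the full physical address.                              */
    bool tlb_lookup(const tlb_line_t tlb[TLB_ENTRIES], uint64_t vaddr, uint64_t *paddr) {
        uint64_t vpn    = vaddr >> OFFSET_BITS;
        uint64_t offset = vaddr & (PAGE_SIZE - 1);

        for (unsigned i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].tag == vpn) {
                *paddr = (tlb[i].frame << OFFSET_BITS) | offset;
                return true;            /* TLB hit                            */
            }
        }
        return false;                   /* TLB miss: fall back to the page table */
    }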
SELECTING A PAGE SIZE
• LARGE PAGE SIZE:
1) The size of the page table is inversely proportional to the page size, so memory can be saved by making pages bigger (a worked example follows after this list).
2) A larger page size can allow larger caches with fast cache hit times.
3) Transferring larger pages to or from secondary storage, possibly over a network, is more efficient than transferring smaller pages.
4) The number of TLB entries is restricted, so a larger page size means that more memory can be mapped efficiently, thereby reducing the number of TLB misses.
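A worked illustration of point 1, with assumed numbers (32-bit virtual addresses, 4-byte page table entries):

    4 KB pages : 2^32 / 2^12 = 2^20 entries x 4 B = 4 MB of page table per process
    16 KB pages: 2^32 / 2^14 = 2^18 entries x 4 B = 1 MB of page table per process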
• SMALL PAGE SIZE:
1) Conserving storage:
• A small page size results in less wasted storage,
• because it reduces internal fragmentation (a small example follows below).
2) Many processes are small, so a large page size would lengthen the time needed to invoke (start up) a process.
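An assumed example of the storage argument: on average about half of the last page of each process's memory is wasted, so 4 KB pages waste roughly 2 KB per process while 64 KB pages waste roughly 32 KB, which adds up when many small processes are resident.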
THANK YOU