Paging is a memory-allocation technique in which memory is divided into fixed-size pages. All pages are the same size, which is defined by the hardware. The operating system stores and retrieves data from secondary storage in these page-sized units. When a process of n pages arrives in logical memory, it requires n free frames in physical memory.
To support paging, the operating system maintains a job table, a page table, and a frame table. Paging can suffer from internal fragmentation. A process's physical memory is non-contiguous, while its logical (virtual) address space is used as if it were contiguous.
A logical address consists of a page number and a page offset, which are translated into a frame number and the same offset; the frame size is always equal to the page size. This address translation is supported by the page table, and the page number serves as an index into the page table.
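The translation described above can be sketched as follows. The page size and the page-table contents here are illustrative assumptions, not values from any real system:

```python
# Sketch: single-level address translation.
# Assumed 4 KiB pages and a tiny hypothetical page table.
PAGE_SIZE = 4096

# Index = page number, value = frame number (all values assumed)
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    page_num = logical_addr // PAGE_SIZE   # index into the page table
    offset = logical_addr % PAGE_SIZE      # offset is unchanged by translation
    frame_num = page_table[page_num]       # raises KeyError if page is unmapped
    return frame_num * PAGE_SIZE + offset  # physical address

# Page 1, offset 100 maps to frame 2, offset 100
print(translate(1 * PAGE_SIZE + 100))  # 2*4096 + 100 = 8292
```

Note that only the page number changes during translation; the offset within the page is copied through unchanged, which is why frame size must equal page size.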
Every new process gets its own page table, which is stored in main (physical) memory. The page table stores frame numbers and optional status bits such as valid/invalid. Its entries are called page table entries (PTEs). A page table can be single-level, multilevel, or inverted.
In single-level paging, a process has only one page table. For a large virtual address space, however, a single flat table becomes very large and costly to keep in memory. Therefore, we use multilevel paging, where the page table is split into two or more levels.
In multilevel paging, each first-level entry holds the base address of a second-level table, each second-level entry holds the base address of a third-level table, and so on; only the last level holds the page frame number. Generally, the page size is the same for all processes. Since all the page tables are stored in main memory, we need more than one memory access to reach the frame number, one access per level. This makes translation slow, which is the main disadvantage of multilevel paging.
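A two-level walk can be sketched as below. The 10/10/12-bit split is the classic 32-bit non-PAE x86 layout, used here only as an assumption; the table contents are hypothetical:

```python
# Sketch: two-level page-table walk for a 32-bit virtual address,
# assuming a 10-bit level-1 index, 10-bit level-2 index, 12-bit offset.
PAGE_SIZE = 4096

def split_two_level(vaddr):
    offset = vaddr & 0xFFF            # low 12 bits
    l2_index = (vaddr >> 12) & 0x3FF  # next 10 bits
    l1_index = (vaddr >> 22) & 0x3FF  # top 10 bits
    return l1_index, l2_index, offset

# Level-1 table maps l1_index -> level-2 table;
# level-2 table maps l2_index -> frame number (all values assumed)
level1 = {3: {5: 42}}

def walk(vaddr):
    l1, l2, off = split_two_level(vaddr)
    frame = level1[l1][l2]            # two memory accesses in hardware
    return frame * PAGE_SIZE + off

vaddr = (3 << 22) | (5 << 12) | 0x2A
print(walk(vaddr))                    # frame 42, offset 42
```

Each dictionary lookup here stands in for one memory access, which is exactly why an n-level walk costs n memory references before the data itself can be fetched.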
Paging with TLB (Translation Look-aside Buffer)
The TLB (translation look-aside buffer), or address-translation cache, is a hardware cache and part of the memory management unit (MMU). A TLB access is much faster than a memory access. In multilevel paging, the page table must be walked level by level; the TLB is separate from the page table and caches only a few page table entries. A fully associative design allows all TLB entries to be searched in parallel.
During virtual-to-physical address translation, if the page number is found in the TLB (a hit), its frame number is read from the TLB without any memory reference. If the page number is not found in the TLB (a miss), one extra memory reference is needed to fetch the frame number from the page table. Generally, a TLB entry contains a page number and the corresponding frame number.
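The hit/miss behavior described above can be sketched with a TLB modeled as a small dictionary in front of the page table. All sizes and table contents are assumptions for illustration:

```python
# Sketch: translation with a TLB in front of the page table.
# The TLB caches page number -> frame number; a miss consults the
# page table (one extra memory reference) and refills the TLB.
PAGE_SIZE = 4096
page_table = {0: 9, 1: 4, 2: 6}   # hypothetical mappings
tlb = {}                          # starts empty

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:               # TLB hit: no page-table access
        frame = tlb[page]
    else:                         # TLB miss: extra memory reference
        frame = page_table[page]
        tlb[page] = frame         # refill so the next access hits
    return frame * PAGE_SIZE + offset

translate(2 * PAGE_SIZE + 7)          # first access: miss, fills TLB
print(translate(2 * PAGE_SIZE + 7))   # second access: hit -> 24583
```

A real TLB is a fixed-size associative hardware structure with a replacement policy; the unbounded dictionary here only models the hit/miss logic.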
Performance of Paging with TLB and without TLB
Assume that the TLB hit ratio is 'h', the TLB access time is 'c', and the page-table access time is 'm', which equals the memory access time because all page tables are stored in main memory.
Therefore, effective access time in single level paging:
Without TLB; effective access time
= Page table access time + Memory access time = m +m = 2m
With TLB; effective access time
= Hit ratio * (TLB access time + memory access time) + (1 - hit ratio)*(TLB access time + Page table access time + memory access time)
= h * (c+m) + (1 - h) * (c + m + m) = h * (c+m) + (1 - h) * (c + 2*m)
And the effective access time in multilevel (i.e., n-level) paging, where the page table must be accessed n times (one access per level):
Without TLB; effective access time
= n * Page table access time + Memory access time = n*m + m = (n + 1)*m
With TLB; effective access time
= h * (c + m) + (1 - h) * (c + n*m + m)
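A small worked example of these formulas, using assumed timings (c = 20 ns, m = 100 ns, h = 0.9, n = 3); the numbers are illustrative, not from any real machine:

```python
# Worked example of the effective-access-time (EAT) formulas.
# Assumed values: TLB access c = 20 ns, memory access m = 100 ns,
# hit ratio h = 0.9, page-table levels n = 3.
c, m, h, n = 20, 100, 0.9, 3

# Single-level paging
eat_no_tlb = 2 * m                                    # 200 ns
eat_tlb = h * (c + m) + (1 - h) * (c + 2 * m)         # 130 ns

# n-level paging: n page-table accesses plus the data access
eat_n_no_tlb = (n + 1) * m                            # 400 ns
eat_n_tlb = h * (c + m) + (1 - h) * (c + n * m + m)   # 150 ns

print(eat_tlb, eat_n_tlb)
```

With a 90% hit ratio, the TLB cuts the single-level cost from 200 ns to 130 ns, and the gain grows with the number of levels, since a hit skips the entire page-table walk.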
Inverted Page Table
The inverted page table is a global structure that contains one entry for each physical frame, not for each logical page; each entry typically records which process and which page currently occupy that frame. The frame number itself is not stored, since the entry's position in the table corresponds to it. Its advantages are that only one inverted page table is needed per system and that sparse virtual address spaces cause no problem. However, lookup time increases, because the table must be searched rather than indexed by page number, and it holds no information about pages that are not in memory.
(Diagram: inverted page table.) There is another type of page table, the hashed page table, which contains a chain of elements hashing to the same location.
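The search-based lookup of an inverted page table can be sketched as below. The entries and sizes are assumptions; a linear search is used for clarity, while real systems speed this up with hashing, as the hashed page table above suggests:

```python
# Sketch: inverted page table with one entry per physical frame.
# Each entry is (pid, page number) or None for a free frame; the
# frame number is the entry's index, so it is not stored.
PAGE_SIZE = 4096

inverted = [None, (1, 0), (2, 3), (1, 7)]   # hypothetical contents

def translate(pid, vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    for frame, entry in enumerate(inverted):  # search, not index:
        if entry == (pid, page):              # the main cost of this design
            return frame * PAGE_SIZE + offset
    raise LookupError("page fault")           # page not resident

# Process 1, page 7 occupies frame 3
print(translate(1, 7 * PAGE_SIZE + 5))        # 3*4096 + 5 = 12293
```

Because two processes may use the same page number, the entry must carry the process id as well; without it, the search could not tell the owners apart.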