Introduction to Cache and Locality of Reference

Cache memory is intended to give memory speed approaching that of the fastest memories available, while at the same time providing a large memory size at the price of the less expensive types of semiconductor memory.

The cache contains a copy of portions of main memory. When the processor attempts to read a word of memory, a check is made to determine if the word is in the cache.

If so, the word is delivered to the processor.

If not, a block of main memory, consisting of some fixed number of words, is read into the cache and then the word is delivered to the processor.
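The read path described above can be sketched in Python. This is a minimal illustration, not a real cache design: the 4-word block size, the dictionary-based cache, and the function names are all illustrative choices.

```python
BLOCK_SIZE = 4  # words per block (illustrative choice)

def read_word(address, cache, main_memory):
    """Return the word at `address`, filling the cache on a miss."""
    block_number = address // BLOCK_SIZE
    if block_number not in cache:                      # cache miss:
        start = block_number * BLOCK_SIZE              # fetch the whole block
        cache[block_number] = main_memory[start:start + BLOCK_SIZE]
    offset = address % BLOCK_SIZE
    return cache[block_number][offset]                 # deliver the word

memory = list(range(100, 132))       # 32 words of "main memory"
cache = {}
print(read_word(5, cache, memory))   # miss: block 1 (words 4..7) is fetched
print(read_word(6, cache, memory))   # hit: that block is already cached
```

Note that a miss brings in the entire block, not just the requested word; the locality principle below is what makes this worthwhile.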

 

Locality of reference: when a block of data is fetched into the cache to satisfy a single memory reference, it is likely that there will be future references to that same memory location or to other words in the block.

This principle states that memory references tend to cluster. Over a long period of time, the clusters in use change, but over a short period of time, the processor is primarily working with fixed clusters of memory references.

 

Spatial locality and Temporal locality

Spatial locality refers to the tendency of execution to involve a number of memory locations that are clustered. This reflects the tendency of a processor to access instructions sequentially. Spatial locality also reflects the tendency of a program to access data locations sequentially, such as when processing a table of data.

Spatial locality is generally exploited by using larger cache blocks and by incorporating prefetching mechanisms (fetching items of anticipated use) into the cache control logic.

Temporal locality refers to the tendency for a processor to access memory locations that have been used recently. For example, when an iteration loop is executed, the processor executes the same set of instructions repeatedly.
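The loop case can be sketched the same way: after the first iteration brings the loop's instructions into the cache, every later iteration hits. The trace below is hypothetical (an 8-instruction loop body executed 10 times) and the cache is idealized as unbounded.

```python
def hits_and_misses(addresses):
    """Classify each access as a hit or a miss for an unbounded cache."""
    seen, hits, misses = set(), 0, 0
    for addr in addresses:
        if addr in seen:
            hits += 1
        else:
            misses += 1
            seen.add(addr)
    return hits, misses

# A loop body of 8 instructions executed 10 times: only the first
# iteration misses; the remaining 9 iterations hit in the cache.
loop_trace = [pc for _ in range(10) for pc in range(8)]
print(hits_and_misses(loop_trace))   # (72, 8)
```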

 

Traditionally, temporal locality is exploited by keeping recently used instruction and data values in cache memory and by exploiting a cache hierarchy.

 

Write through and Write back

Write through: In a write-through cache, both main memory and the cache are updated simultaneously on a write, so the write time is the memory access time for a word. On a write miss, only main memory is updated.
During a read, a cache block is retrieved from memory on a miss; hence the read time on a cache miss is the time to bring a block from memory plus the cache access time, assuming hierarchical access.

Write Back: Here a cache block is written back to memory only when it is replaced and its dirty bit is set. The write-back policy is normally used together with the write-allocate technique.
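The contrast between the two policies can be sketched by counting memory writes. This is a word-granularity toy model under stated assumptions: the class names and the explicit `evict` call are illustrative, and the write-through cache here does not allocate on a write miss.

```python
class WriteThroughCache:
    """Every store updates main memory; the cache is updated only on a hit."""
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}
        self.memory_writes = 0

    def write(self, addr, value):
        if addr in self.cache:          # on a write miss, only memory is updated
            self.cache[addr] = value
        self.memory[addr] = value       # memory is written on every store
        self.memory_writes += 1

class WriteBackCache:
    """Stores update only the cache; memory is written when a dirty line is evicted."""
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}                 # addr -> (value, dirty_bit)
        self.memory_writes = 0

    def write(self, addr, value):
        self.cache[addr] = (value, True)    # write-allocate: line is cached and marked dirty

    def evict(self, addr):
        value, dirty = self.cache.pop(addr)
        if dirty:                       # write back only if the dirty bit is set
            self.memory[addr] = value
            self.memory_writes += 1

mem = [0] * 8
wt = WriteThroughCache(mem)
for _ in range(5):
    wt.write(3, 42)
print(wt.memory_writes)   # 5: one memory write per store

mem2 = [0] * 8
wb = WriteBackCache(mem2)
for _ in range(5):
    wb.write(3, 42)
wb.evict(3)
print(wb.memory_writes)   # 1: a single write-back on eviction
```

Repeated stores to the same word cost one memory write each under write-through but collapse into a single write-back under write-back, which is why write-back reduces memory traffic for write-heavy workloads.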

 
