Cache Traffic Optimization
Anasua Bhowmik and Mohamed Zahran

IISc-CSA-TR-2005-1
(January 2005)


Filed on January 10, 2005
Updated on January 10, 2005


The center of gravity of computer architecture is shifting toward the memory system, for several reasons. First, the gap between memory and processor speed remains one of the main performance bottlenecks, which has led designers to move as much memory as possible from off-chip to on-chip; three levels of on-chip cache are not uncommon nowadays. Furthermore, the sustained growth in the number of devices that can be integrated per chip makes large on-chip memories a reality. Although the enabling technology allows larger caches to be embedded on-chip, these caches are not delivering the required performance. One of the main reasons is the delay in writing back the data of a replaced block to memory or to the lower-level cache, which makes block replacement time-consuming and hence hurts overall performance. In this paper, we present several techniques to alleviate the cache traffic problem. The first, called "lazy-write", predicts the point after which a cache block will no longer be written before its replacement and, if the block is dirty, writes it back to memory during periods of low traffic. Hence, when the block is eventually replaced, it is already clean and the replacement completes much faster. The second technique detects values that are "dead" and therefore need not be written back to memory at all, reducing memory traffic and making replacement faster. The paper thus makes two main contributions: traffic optimization through bandwidth management, and traffic optimization through bandwidth saving.
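The two optimizations can be illustrated with a minimal cache-simulation sketch. This is not the paper's implementation: the class names, the FIFO victim choice, and the trivial always-yes write predictor are all illustrative assumptions; the report's actual predictors and replacement policy are not reproduced here.

```python
class Block:
    def __init__(self, tag):
        self.tag = tag
        self.dirty = False
        self.dead = False   # set when a (hypothetical) predictor marks the data dead


class Cache:
    def __init__(self, num_blocks):
        self.blocks = {}            # tag -> Block, insertion-ordered
        self.capacity = num_blocks
        self.writebacks = 0         # slow writebacks paid at replacement time
        self.eager_writebacks = 0   # "lazy-write": writebacks hidden in idle cycles
        self.saved_writebacks = 0   # dead-value detection: writebacks avoided entirely

    def access(self, tag, is_write):
        blk = self.blocks.get(tag)
        if blk is None:
            blk = self._replace(tag)
        if is_write:
            blk.dirty = True
            blk.dead = False
        return blk

    def _replace(self, tag):
        if len(self.blocks) >= self.capacity:
            victim_tag = next(iter(self.blocks))   # FIFO victim (illustrative choice)
            victim = self.blocks.pop(victim_tag)
            if victim.dirty:
                if victim.dead:
                    self.saved_writebacks += 1     # bandwidth saving: skip the writeback
                else:
                    self.writebacks += 1           # slow path: write back at replacement
        blk = Block(tag)
        self.blocks[tag] = blk
        return blk

    def idle_cycle(self):
        """Lazy-write: during low bus traffic, flush one dirty block that the
        predictor says will not be written again, so its replacement is fast."""
        for blk in self.blocks.values():
            if blk.dirty and not blk.dead and self._predict_no_more_writes(blk):
                blk.dirty = False
                self.eager_writebacks += 1
                return

    def _predict_no_more_writes(self, blk):
        # Placeholder predictor: the real predictor is the subject of the paper;
        # here we simply assume every dirty block is a candidate.
        return True
```

In this sketch, a block flushed during an `idle_cycle` is clean at replacement time (bandwidth management), while a block marked `dead` skips the writeback altogether (bandwidth saving); the three counters separate the fast, hidden, and avoided writebacks.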


Please bookmark this technical report as http://aditya.csa.iisc.ernet.in/TR/2005/1/.

Problems? Contact techrep@csa.iisc.ernet.in