To understand the need for advanced page replacement algorithms, it helps to know something about the memory hierarchy, access patterns and mixed workloads.
There are two types of memory hierarchy. The first will be familiar to most people: the hierarchy from the CPU caches down to random access memory (RAM). Caches near the top (level 1, or L1) are faster but smaller, while caches near the bottom (levels 2 and 3, L2/L3) are larger but slower. This provides a cost-effective way to access a small set of data very quickly, over and over again, while simultaneously being able to cache lots of data. The assumption is that the data a program accesses on very short time scales will fit in the L1 cache, and that this short-term working set shifts only slowly to other data.
This cache hierarchy is inclusive: every piece of data that is in the L1 cache is also in the L2 cache and in memory; a line of L2 cache cannot be replaced while the corresponding data is still present in the L1 cache. When the L1 working set shifts back to recently used data, chances are that data will still be in the L2 cache and can be quickly loaded back into L1.
The second type of cache hierarchy is exclusive: a piece of data can be evicted from the second level cache, while it is still present in the first level cache. This is common when dealing with storage servers, like NFS or Samba servers and other NAS/SAN equipment. Typically the top level in this cache hierarchy will be the RAM on local workstations or computational nodes; the bottom level is the cache on a storage server.
Since accesses (hits) to data in the first-level (local) cache are not visible to the storage server, the only accesses the storage server sees are the misses of the first-level cache. The reference stream arriving at the second-level cache therefore has very large inter-reference distances, which means normal LRU will not be very effective and the storage server needs a smarter replacement algorithm.
More information on the problems LRU has with a second level cache can be found in this paper from the 2001 USENIX Annual Technical Conference by Zhou, Philbin and Li: [http://www.usenix.org/publications/library/proceedings/usenix01/zhou.html The Multi-Queue Replacement Algorithm for Second Level Buffer Caches]. Why this algorithm probably isn't very suitable as a first level cache replacement algorithm is left as an exercise to the reader :)