Over the last few years, Linux memory management bugs have been resolved as they came in, one by one. However, it has been quite common for some classes of bugs to get fixed and reintroduced repeatedly.
The speed gap between memory and hard disks is increasing, with a single disk access costing tens of millions of CPU cycles. Additionally, large memory systems (>64GB) are becoming more and more common, and present their own set of scalability challenges.
Maybe it is time for us to understand all the constraints a page replacement mechanism has to satisfy, instead of fixing the bugs one by one? At the very least, this page could turn into a list of "do"s and "don't"s for the VM that can be amended as we go.
Requirements shortlist
- Must select good pages for eviction.
- Must not submit too much I/O at once. Submitting too much I/O at once can kill latency and even lead to deadlocks when bounce buffers (highmem) are involved. Note that submitting sequential I/O is a good thing.
- Must be able to efficiently evict the pages on which pageout I/O completed.
- Must be able to deal with multiple memory zones efficiently.
- Must always have some pages ready to evict. Scanning 32GB of "recently referenced" memory is not an option when memory gets tight.
- Must be able to process pages in batches, to reduce SMP lock contention.
- A bad decision should have bounded consequences. The VM needs to be resilient against its own heuristics going bad.
- Low overhead of execution.
- Works out of the box. Should not have too many knobs.
For more problems that need fixing, see the list of problem workloads.
Pageout selection
Scan Resistant
A large sequential scan (eg. daily backup) should not be able to push the working set out of memory.
Effective as second level cache
The only hits in a second level cache are the cache misses from the primary cache. This means that the inter-reference distances on eg. a file server may be very large. A page replacement algorithm should be able to detect and cache the most frequently accessed pages even in this case.
Recency vs. Frequency
Which of the two is more important depends entirely on the workload. It would be nice if the pageout selection algorithm would adapt automatically.
Use once
The use once algorithm currently in the 2.6 kernel does the wrong thing in some use cases. For example, rsync can briefly touch the same pages twice, and then never again. In this case, the pages should not get promoted to the active list.
For page replacement purposes, "referenced twice" should mean that the page was found referenced in two separate intervals between scans of its referenced bit; that way "referenced twice" is counted the same for page tables as it is for page structs.
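As a minimal illustration of this counting rule, here is a sketch in plain C (the structure and helper are made up for illustration, this is not the kernel's code): a page is promoted only when its referenced bit is found set in two separate scan passes.

    struct page_state {
        int referenced;         /* the page's referenced bit, cleared on each scan */
        int seen_referenced;    /* an earlier scan pass already saw the bit set */
        int active;             /* page sits on the active list */
    };

    static void scan_one_page(struct page_state *p)
    {
        int was_referenced = p->referenced;

        p->referenced = 0;      /* the scan always clears the bit */

        if (!was_referenced)
            return;             /* not touched since the last scan */

        if (p->seen_referenced) {
            /* referenced in two separate scan intervals: promote */
            p->active = 1;
            p->seen_referenced = 0;
        } else {
            /* first interval with a reference; an rsync touching the
             * page twice within one interval still ends up here */
            p->seen_referenced = 1;
        }
    }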
Limited pageout I/O
Pageout I/O is submitted as pages hit the end of the LRU list. Dirty pages are then rotated back onto the start of the inactive list. Not only does this disturb LRU order, but it can result in hundreds of megabytes worth of small I/Os being submitted at once. This kills I/O latency and can lead to deadlocks on 32 bit systems with highmem, where the kernel needs to allocate bounce buffers and/or buffer heads from low memory.
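One possible way to bound this, sketched below with invented names (MAX_PAGEOUT_IO, start_async_writeback() and friends are not existing kernel interfaces): stop queueing new writeback once a per-scan cap is hit, and simply rotate the remaining dirty pages for a later pass.

    #define MAX_PAGEOUT_IO  32      /* writebacks queued per scan batch; value is a guess */

    struct scanned_page {
        int dirty;
    };

    /* hypothetical helpers standing in for the real reclaim machinery */
    int  try_to_free_clean_page(struct scanned_page *p);
    void rotate_to_inactive_head(struct scanned_page *p);
    void start_async_writeback(struct scanned_page *p);

    static int shrink_batch(struct scanned_page **pages, int nr)
    {
        int nr_freed = 0, nr_io = 0, i;

        for (i = 0; i < nr; i++) {
            struct scanned_page *p = pages[i];

            if (!p->dirty) {
                nr_freed += try_to_free_clean_page(p);
                continue;
            }
            if (nr_io >= MAX_PAGEOUT_IO) {
                /* cap reached: keep the page for a later pass */
                rotate_to_inactive_head(p);
                continue;
            }
            start_async_writeback(p);   /* the page comes back clean later */
            nr_io++;
        }
        return nr_freed;
    }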
Reclaim after I/O
The rotate_reclaimable_page() mechanism in the current 2.6 kernels fixes part of the problem by moving pages back to the end of the inactive list when IO finishes, but there is no effective mechanism to limit how much I/O is submitted at once.
The importance of sequential I/O
Since most disk writes are seek time dominated, the VM should aim to do sequential/clustered writeouts, as well as refrain from submitting too much pageout I/O at once. If the VM wants to free 10MB of memory, it should not submit 500MB worth of I/O, just because there are that many pages on the inactive list.
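A rough sketch of the clustering idea (the types and helpers below are invented for illustration, and the cluster size is an arbitrary guess): instead of writing only the single dirty page that reached the tail of the list, write out the whole aligned cluster of dirty pages around it in the same file, so the disk sees one mostly sequential write.

    #define CLUSTER_PAGES   16UL    /* 64KB clusters with 4KB pages; size is a guess */

    struct file_map;        /* stands in for a file's page cache mapping */

    /* hypothetical helpers */
    int  page_dirty_at(struct file_map *map, unsigned long index);
    void queue_writeback(struct file_map *map, unsigned long index);

    static void clustered_pageout(struct file_map *map, unsigned long index)
    {
        unsigned long start = index & ~(CLUSTER_PAGES - 1);
        unsigned long i;

        /* one seek, up to CLUSTER_PAGES sequential writes */
        for (i = start; i < start + CLUSTER_PAGES; i++) {
            if (page_dirty_at(map, i))
                queue_writeback(map, i);
        }
    }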
Asynchronous Page-Out
The page-out operation is not synchronous. Dirty pages that are selected for reclaim are not freed directly; instead, writeback is started against them (PG_writeback is set) and they are fed back to the resident list. When the write to their backing store completes and the referenced bit is still unset, a callback (rotate_reclaimable_page) places them where they are immediate candidates for reclaim again.
When scanning for reclaimable pages, make sure the scan does not get stuck on a list saturated with writeback pages.
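In sketch form, with simplified structures and made-up helper names (the real 2.6 hook is rotate_reclaimable_page(), called from the writeback completion path):

    struct pageout_page {
        int referenced;         /* touched while the write was in flight? */
        int writeback;          /* PG_writeback equivalent */
    };

    /* hypothetical list helper */
    void move_to_inactive_tail(struct pageout_page *p);

    /* called when the write to the backing store completes */
    static void end_pageout(struct pageout_page *p)
    {
        p->writeback = 0;

        if (!p->referenced)
            /* still cold: make it the next reclaim candidate */
            move_to_inactive_tail(p);
        /* otherwise leave it alone; it earned another round in memory */
    }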
Multiple Zones
Unlike most operating systems, Linux has multiple memory zones; that is, memory is not viewed as one big contiguous region. There are specific regions of memory in which it is desirable to be able to free pages: think of NUMA topologies, or DMA engines that cannot address the full address space. Hence memory is viewed as multiple zones.
For traditional page replacement algorithms this is not a big issue, since we just implement per-zone page replacement; e.g. a CLOCK per zone. However, with the introduction of non-resident page state tracking in the recent algorithms this does become a problem. Since a page can fault into a different zone than the one it was evicted from, the non-resident page state tracking needs to cover all of memory, not just a single zone.
This calls for per-zone resident page tracking combined with global non-resident page tracking; this separation is not present in several proposed algorithms, which makes implementing them a challenge.
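A sketch of what this split could look like (the table layout and names below are illustrative, not the actual implementation): each zone keeps its own resident lists, while one global table remembers recently evicted pages by (mapping, offset), so a page that faults back into a different zone is still recognised as recently evicted.

    #include <stddef.h>
    #include <stdint.h>

    #define NONRES_BUCKETS  (1UL << 16)     /* table size is a guess */

    struct nonres_entry {
        void          *mapping;     /* which file or anonymous object */
        unsigned long  offset;      /* page index within that object */
        unsigned long  evict_time;  /* clock value when the page was evicted */
    };

    /* one global table, shared by all zones */
    static struct nonres_entry nonres_table[NONRES_BUCKETS];

    static size_t nonres_hash(void *mapping, unsigned long offset)
    {
        return ((uintptr_t)mapping / sizeof(void *) * 31 + offset) % NONRES_BUCKETS;
    }

    /* called from the per-zone reclaim path when a page is evicted */
    static void nonres_remember(void *mapping, unsigned long offset,
                                unsigned long now)
    {
        struct nonres_entry *e = &nonres_table[nonres_hash(mapping, offset)];

        e->mapping = mapping;
        e->offset = offset;
        e->evict_time = now;
    }

    /* called on fault-in: was this page evicted recently, whatever its zone? */
    static int nonres_lookup(void *mapping, unsigned long offset,
                             unsigned long *evict_time)
    {
        struct nonres_entry *e = &nonres_table[nonres_hash(mapping, offset)];

        if (e->mapping != mapping || e->offset != offset)
            return 0;
        *evict_time = e->evict_time;
        return 1;
    }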
Background aging
After a day of no memory shortage, it is possible for a system to end up with most pages having the referenced bit set. This has a number of bad effects:
- Essentially a random page will be evicted.
- The system may have to scan through hundreds of thousands of pages in order to find a page to be evicted.
To avoid these situations, the system should always have some pages on hand that are good candidates to be evicted. Light background aging of pages may be one solution to get the desired result. There may be others.
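A minimal sketch of what light background aging could look like (simplified page representation, invented names): every so often, clear the referenced bit on a small batch of pages; whatever still has its bit clear when a shortage hits is a cheap eviction candidate.

    #define AGING_BATCH     256     /* pages aged per pass; value is a guess */

    struct aged_page {
        int referenced;
    };

    static unsigned long aging_cursor;      /* where the previous pass stopped */

    /* run periodically, even when there is no memory shortage */
    static void background_age(struct aged_page *pages, unsigned long nr_pages)
    {
        unsigned long i;

        for (i = 0; i < AGING_BATCH && nr_pages; i++) {
            struct aged_page *p = &pages[aging_cursor % nr_pages];

            /* a page whose bit stays clear until the next shortage
             * is a cheap eviction candidate */
            p->referenced = 0;
            aging_cursor++;
        }
    }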
Batch processing
To reduce SMP lock contention on the pageout list locks, the algorithms must allow for pages to be moved around in batches instead of individually. This is relatively easy to satisfy.
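For illustration, a user-space flavoured sketch with a pthread mutex standing in for the zone's LRU lock: take the lock once, pull a whole batch of pages onto a private list, and do the per-page work with the lock dropped. The 2.6 kernel does something along these lines in isolate_lru_pages().

    #include <pthread.h>

    #define BATCH_SIZE      32      /* pages moved per lock acquisition; a guess */

    struct node {                   /* one page on a singly linked list */
        struct node *next;
    };

    struct lru {
        pthread_mutex_t lock;       /* stands in for the zone's LRU lock */
        struct node *head;          /* head of the inactive list */
    };

    /* take the lock once, detach up to BATCH_SIZE pages onto a private list */
    static struct node *isolate_batch(struct lru *lru)
    {
        struct node *batch = NULL;
        int i;

        pthread_mutex_lock(&lru->lock);
        for (i = 0; i < BATCH_SIZE && lru->head; i++) {
            struct node *n = lru->head;

            lru->head = n->next;
            n->next = batch;        /* push onto the private batch list */
            batch = n;
        }
        pthread_mutex_unlock(&lru->lock);

        /* the caller scans these pages without holding the lock */
        return batch;
    }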
Resilience
Unlike many other subsystems, which are optimized for the common case, the VM also needs to be optimized for the worst case. This is because the latency difference between RAM and disk can be tens of millions of CPU cycles.
All heuristics will do the wrong thing occasionally, and the VM is no exception. However, there should be mechanisms (probably feedback loops) to stop the VM from blindly continuing down the wrong path and turning a single mistake into a worst case scenario.
Examples of worst case scenarios could be:
- LRU eviction on a circularly accessed working set slightly larger than memory.
- Readahead window thrashing.
- Doing small I/Os through the pageout path, instead of larger contiguous I/Os through the inode writeback path.
One bad decision by the VM should never lead to the system going down the drain.
Low overhead of execution
Evicting the wrong pages can be extremely costly, reducing system performance by orders of magnitude. However, the VM also cannot go overboard in trying to analyze what is going on when selecting pages to evict. The algorithms used for pageout selection cannot scan the page structs and page tables too often, otherwise they will end up wasting too much CPU. This is especially true on large memory systems: 128GB of RAM is not that strange any more in 2007, and 1TB systems will probably be common within a few years.
Expensive Referenced Check
Because multiple page table entries can refer to the same physical page, checking the referenced bit is not as cheap as most algorithms assume it is (rmap). Hence we need to do the check without holding most locks. This suggests a batched approach to minimize the lock/unlock frequency. Modifying algorithms to do this is usually not very hard.
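A simplified sketch of where the cost comes from (the reverse-map structure below is invented for illustration): answering "was this page referenced?" means walking every page table entry that maps the page and collecting, and clearing, the accessed bit from each one. Doing this for a whole batch of pages at a time keeps the locking overhead per page small.

    struct pte_ref {                /* one page table entry mapping the page */
        struct pte_ref *next;
        int accessed;               /* hardware accessed bit, simplified */
    };

    struct phys_page {
        struct pte_ref *rmap;       /* all PTEs that map this physical page */
    };

    /* nonzero if any mapping touched the page since the previous check */
    static int page_was_referenced(struct phys_page *page)
    {
        struct pte_ref *pte;
        int referenced = 0;

        for (pte = page->rmap; pte; pte = pte->next) {
            if (pte->accessed) {
                pte->accessed = 0;  /* clear it for the next interval */
                referenced = 1;
            }
        }
        return referenced;
    }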
Tuning Knobs
Due to the increasing speed gap between memory and disk, the increasing memory capacity and the increasing complexity of large systems, VM developers cannot pawn off responsibility for a working system onto the system administrator by providing dozens of tuning knobs.
Instead, the VM should just work and do something reasonable out of the box, without any tuning. Feedback loops could play a big role here.
Other considerations
Insert Referenced
Since we fault pages in, by definition the page is going to be used (readahead aside?) right after we switch back to userspace. Hence we effectively insert pages with their referenced bit set. Since most algorithms assume pages are inserted with their referenced bit unset, they need to be modified so that pages are not promoted on their first reference (use-once).
User requests
Some of these requests should be taken with a grain of salt. Not because the users have no genuine need for a fix to their problem, but because alternative solutions may be possible (and sometimes better).
Lumpy reclaim
Sometimes the kernel needs to allocate a range of physically contiguous pages. It would be nice if the VM could purposely free physically contiguous pages, instead of relying on luck alone.
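A sketch of the basic idea, with made-up helper names: when the page at the tail of the list is reclaimed, also try to reclaim its neighbours in the same order-aligned block, so that a whole physically contiguous range becomes free rather than scattered single pages.

    #define TARGET_ORDER    4       /* want 2^4 = 16 contiguous pages; a guess */

    /* hypothetical helpers working on page frame numbers */
    int try_to_reclaim_pfn(unsigned long pfn);
    int pfn_is_free(unsigned long pfn);

    /* try to free the whole order-aligned block containing pfn */
    static int reclaim_contig_block(unsigned long pfn)
    {
        unsigned long nr = 1UL << TARGET_ORDER;
        unsigned long base = pfn & ~(nr - 1);
        unsigned long i, freed = 0;

        for (i = base; i < base + nr; i++) {
            if (pfn_is_free(i) || try_to_reclaim_pfn(i))
                freed++;
        }
        /* only a completely freed block helps a higher-order allocation */
        return freed == nr;
    }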
Page cache size limits
A feature in some other OSes is limiting the size of the page cache. This is often done so the VM will evict page cache data instead of something from the working set of the programs on the system.
It is not clear to what extent this is a real feature, and to what extent it is simply a workaround for a non-optimal page replacement algorithm.
Containers / selective reclaim
Container technologies, like CKRM and userbeans, want to be able to limit the amount of memory a group of processes can take. This would require the VM to evict pages from only a certain group of processes.
This page is part of CategoryAdvancedPageReplacement