The project suffers from long dormant periods, so I have put up a detailed Todo list here. Please mail me or the mailing list for any additional details you may require:
- Enhance current static compressed caching code:
Critical vulnerability: Several allocations must be made when decompressing a page, e.g. temporary buffers required by decompression algorithms, the final decompressed page, etc. If any of these allocations fail in the decompression code path, we essentially lose that data and the corresponding applications crash. Do something about it! Possible solutions include having pre-allocated buffers (a.k.a. emergency pools) to avoid allocations in the page decompression path, but this raises the problem of how such emergency pools get refilled (some background threads?). At least for clean page-cache pages, we can afford to fail in the decompress path, since their up-to-date copies are always on disk. NOTE: If any allocation fails during page compression, we simply let that page go through the usual reclaim path (as when ccaching is not present).
- Avoid/Delay OOM kills: When we reach OOM (out-of-memory) situations, we should try to free all of ccache before going for process kills. Currently, we can actually prepone OOM kills in some cases, since all the pages given to ccache are pinned; only when pages are requested soon after they are compressed do we really postpone the OOM killer. Solving this involves part of the work required for dynamic ccaching support (the ability to evict compressed pages from ccache to filesystem/swap disk).
- Add support for dynamic ccache resizing: This involves two main tasks:
- Decide the factors that will determine the size of ccache at runtime – when to increase/decrease the size?
Use existing works – Rodrigo, Kaplan, New?
To begin with, devise the simplest scheme you can think of; building on that will be easier.
- How to shrink/expand ccache dynamically based on the answer to (1) – expanding is easy; shrinking is hard.
- Expanding simply involves allocating more pages and adding them to ccache free list.
- Shrinking is more involved: three cases to consider – Clean page cache (filesystem) pages, Dirty page cache pages and Anonymous pages.
- Clean FS pages: Easy. Just remove its compressed chunks; its up-to-date copy is always on disk.
- Dirty FS pages: Harder. Invoke filesystem-specific writepage() -> when writeback completes, free its chunks.
- Anonymous pages: Hardest. If swap is not present -> Hurray! No work. Otherwise:
- Determine location on physical swap where this page will be written (swp_entry_t).
- Invoke swap_writepage()
- Free its chunks when writeback done.
- Swap-out compressed pages as-is. Delay decompression until they are swapped in and really used. (The purpose here is not to save swap space – we'll pad out the compressed page with 0's when swapping out.)
- Code standards:
Ideally, ccaching will be posted as an RFC on LKML once static ccache is good as per your heart's desire and the points listed in (1). Additionally:
- Improve code quality:
- Make de/compression algorithms architecture-independent – 32- vs. 64-bit, little- vs. big-endian.
- Code cleanups, especially in LZO de/compression (lib/minilzo.[ch]).
- Add config options for ccache: Currently no config options exist. The major work here is to completely #ifdef away ccache code if it's not selected – the most difficult part will be dealing with the intrusive changes made in find_get_page() and friends.
- A basic compressed cache implementation supporting compression for both anonymous and page-cache pages is working. See (patches):
- Tested with preemption enabled (on 32-bit x86 only).
- Size can be set by user via /proc interface.
- Its size begins as a single page and expands as pages are compressed and added to it (until it reaches the limit set by the user), and shrinks as pages are taken out and decompressed. (This is not an adaptive resizing scheme.)
- Pages are cyclically compressed using WKdm, WK4x4 and LZO. This cyclical selection of compression algorithms makes no sense in real use, but it is done just to show that it's possible to compress each page with a different algorithm.
- Compression structure is implemented as described on Wiki (Overall approach is also described here).