[:NitinGupta:Nitin Gupta]
MailTo(nitingupta.mail AT gmail DOT com)
This page contains a list of patches and other related code as it is being developed for the Compressed Caching (for 2.6.x kernels) project. You can always find the most up-to-date code in the project Git repository (git://dev.laptop.org/projects/linux-mm-cc).
=== Kernel changes to support Compressed Caching: ===
(These are old patches kept here for reference. Much has changed since then. For the most up-to-date code, check out the Git repository.)
 * [attachment:toy-cc-2.6.16-rc4.diff Toy ccache patch]: These are a few lines I added to 2.6.16-rc4 while going through the VMM code: just some printk()s to highlight some kernel entry points relevant to the compressed caching work.
 * [attachment:patch-cc-2.6.16-radix-replace-stable.diff patch-cc-2.6.16-radix-replace-stable]: Replaces the original page (for now, only clean page cache pages) with a 'chunk head' when it is about to be freed under memory pressure, and simply stores the original page uncompressed. When a page cache lookup is performed, the 'chunk head' is replaced with the original page again (see the sketch after this list). This patch uses simplified (and inefficient) locking in the page cache lookup functions to keep it stable for now.
 * [attachment:patch-cc-2.6.16-better-locking-unstable.diff patch-cc-2.6.16-better-locking-unstable]: This was an attempt at better (more efficient) locking in the page cache lookup functions, but it is not quite as stable as the previous, simplified patch: it causes applications to freeze as swap usage increases.
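The replace-on-eviction idea behind the radix-replace patch can be pictured with a small user-space model. This is only an illustrative sketch, not code from the patch: the real work happens in the page cache radix tree, and all names here (slot, chunk_head, evict, lookup) are made up for the example.
{{{
/*
 * Toy model of the replace-on-eviction idea (NOT the patch itself).
 * A "slot" normally holds a page; under memory pressure the page is
 * swapped for a small chunk head holding its data (uncompressed for
 * now), and restored on the next lookup.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

enum slot_kind { SLOT_PAGE, SLOT_CHUNK_HEAD };

struct chunk_head { char *stored; };              /* stored page contents */
struct slot { enum slot_kind kind; void *ptr; };  /* one page-cache slot  */

/* Eviction path: replace the page with a chunk head, free the page. */
static void evict(struct slot *s)
{
	struct chunk_head *ch;

	if (s->kind != SLOT_PAGE)
		return;
	ch = malloc(sizeof(*ch));
	ch->stored = malloc(PAGE_SIZE);
	memcpy(ch->stored, s->ptr, PAGE_SIZE);
	free(s->ptr);
	s->kind = SLOT_CHUNK_HEAD;
	s->ptr = ch;
}

/* Lookup path: if the slot holds a chunk head, rebuild the page first. */
static char *lookup(struct slot *s)
{
	if (s->kind == SLOT_CHUNK_HEAD) {
		struct chunk_head *ch = s->ptr;
		char *page = malloc(PAGE_SIZE);

		memcpy(page, ch->stored, PAGE_SIZE);
		free(ch->stored);
		free(ch);
		s->kind = SLOT_PAGE;
		s->ptr = page;
	}
	return s->ptr;
}

int main(void)
{
	struct slot s = { SLOT_PAGE, malloc(PAGE_SIZE) };

	memset(s.ptr, 'x', PAGE_SIZE);
	evict(&s);                            /* page replaced by chunk head  */
	printf("restored byte: %c\n", lookup(&s)[0]); /* chunk head replaced  */
	free(s.ptr);
	return 0;
}
}}}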
=== Porting compression algorithms to kernel mode: ===
Kernel module to test de/compression algorithms (WKdm, WK4x4, LZO): attachment:compress-test.tar.gz
There are three main algorithms that have been well studied for compressed caching in previous work: WKdm, WK4x4 and LZO.
Of these, WKdm and WK4x4 are designed to handle anonymous pages (non-filesystem pages), while LZO is more suitable for filesystem data. (Also, in general, compression speed follows the order WKdm > WK4x4 > LZO, while the compression ratio generally follows the reverse order.)
All three algorithms are now ported to kernel space, and you can test them using this module. It creates three /proc entries, /proc/compress-test/{compress, decompress, algo_idx}, as described below (for more detail, see the README included with the module):
In short:
Write to /proc/compress-test entries:
1. ''compress'': compresses data written to it and stores the result in an internal buffer.
2. ''algo_idx'': write the index of the algorithm you want to test (0: WKdm, 1: WK4x4, 2: LZO)
Read from /proc/compress-test entries:
1. ''compress'': shows the original and compressed size (TODO: add other stats, like time taken)
2. ''decompress'': decompresses the compressed data stored in the internal buffer.
3. ''algo_idx'': shows the list of supported algorithms with their indices.
All these algorithms are also committed to the Git repo: git://dev.laptop.org/projects/linux-mm-cc
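For reference, here is a rough user-space sketch of how the /proc/compress-test entries above might be driven. The entry names come from the description above; everything else (buffer size, output format, error handling) is an assumption, so treat it as illustrative rather than as part of the module.
{{{
/* Hypothetical user-space driver for the /proc/compress-test entries. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[4096];
	FILE *f;

	/* Select WKdm (index 0) as the compression algorithm. */
	f = fopen("/proc/compress-test/algo_idx", "w");
	if (!f) { perror("algo_idx"); return 1; }
	fputs("0", f);
	fclose(f);

	/* Feed one page worth of data to be compressed. */
	memset(buf, 'A', sizeof(buf));
	f = fopen("/proc/compress-test/compress", "w");
	if (!f) { perror("compress"); return 1; }
	fwrite(buf, 1, sizeof(buf), f);
	fclose(f);

	/* Read back the original vs. compressed size statistics. */
	f = fopen("/proc/compress-test/compress", "r");
	if (!f) { perror("compress"); return 1; }
	while (fgets(buf, sizeof(buf), f))
		fputs(buf, stdout);
	fclose(f);

	return 0;
}
}}}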
=== Compression Structure Implementation: ===
attachment:storage-test.tar.gz : This module implements the compression structure as described on CompressedCaching.
 * The storage begins as a single page. As you add pages to it, it expands until it reaches its maximum limit. As you take pages out, it shrinks, freeing up pages that have no chunks left.
 * Adjacent free chunks are merged together.
 * Each page can be compressed using a different algorithm (a rough sketch of such structures follows this list).
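A minimal sketch of the kind of structures such a storage could be built from is shown below. The names and fields (cc_chunk, cc_storage, and so on) are assumptions made for this illustration and are not taken from the module.
{{{
/* Illustrative only: assumed structures for a chunked compressed store. */
#include <linux/list.h>

struct cc_chunk {
	struct list_head list;      /* chunks in address order, for merging  */
	void *addr;                 /* start of this chunk inside its page   */
	unsigned short size;        /* chunk size in bytes                   */
	unsigned short flags;       /* free/used, plus the algorithm used    */
};

struct cc_storage {
	struct list_head pages;     /* backing pages, added/freed on demand  */
	struct list_head chunks;    /* all chunks, free and used             */
	unsigned long nr_pages;     /* current size, bounded by max_pages    */
	unsigned long max_pages;    /* upper limit on storage growth         */
};
}}}
Keeping chunks on a single address-ordered list is one way to make merging adjacent free chunks cheap when a page is taken out.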
Please see README for usage.
In short:
The interface is via /proc (a usage sketch follows the list below):
/proc/storage-test/{readpage, writepage, show_structure}
1. ''writepage'': write a page to this entry to compress it and store it in the ccache.
2. ''readpage'': write the 'id' (see README) of the page you want.
3. ''show_structure'': read this entry to show the current snapshot of the ccache storage.
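As with compress-test, a hedged user-space sketch of driving these entries might look like the following; only the entry names come from this page, the rest (sizes, output format) is assumed.
{{{
/* Hypothetical user-space driver for the /proc/storage-test entries. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char page[4096], line[256];
	FILE *f;

	/* Store one page of data in the compressed cache. */
	memset(page, 'B', sizeof(page));
	f = fopen("/proc/storage-test/writepage", "w");
	if (!f) { perror("writepage"); return 1; }
	fwrite(page, 1, sizeof(page), f);
	fclose(f);

	/* Dump the current snapshot of the ccache storage layout. */
	f = fopen("/proc/storage-test/show_structure", "r");
	if (!f) { perror("show_structure"); return 1; }
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);

	return 0;
}
}}}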