Once pages have been added to the swapped list, a timer is started that tests every 5 seconds for conditions suitable for prefetching swap pages. Suitable conditions are defined as no pages currently being swapped in or out, and no watermark tests failing. Significant amounts of dirtied RAM, or changes in free RAM that indicate ongoing disk writes or reads, also prevent prefetching.
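The idle test above can be sketched as a comparison of VM counter snapshots between timer ticks. This is a minimal illustration, not the actual kernel code; the struct and field names are assumptions, and the noise threshold is a made-up parameter:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical snapshot of VM counters taken at each timer tick.
 * Field names are illustrative, not the kernel's real ones. */
struct vm_snapshot {
	unsigned long pswpin;	/* total pages swapped in */
	unsigned long pswpout;	/* total pages swapped out */
	unsigned long dirty;	/* dirtied pages */
	unsigned long free;	/* free pages */
};

/* Return true when prefetching is suitable: no swap activity since
 * the previous tick, and no large change in dirty or free pages
 * (which would indicate disk writes or reads in flight). */
static bool prefetch_suitable(const struct vm_snapshot *prev,
			      const struct vm_snapshot *cur,
			      unsigned long noise)
{
	unsigned long d, f;

	if (cur->pswpin != prev->pswpin || cur->pswpout != prev->pswpout)
		return false;	/* swapping is in progress */

	d = cur->dirty > prev->dirty ? cur->dirty - prev->dirty
				     : prev->dirty - cur->dirty;
	f = cur->free > prev->free ? cur->free - prev->free
				   : prev->free - cur->free;
	return d <= noise && f <= noise;	/* otherwise idle enough */
}
```

If any counter moved, the timer simply fires again 5 seconds later and retests.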
It then checks that there is spare RAM, requiring at least 3 * pages_high pages free in every zone; if that check succeeds, it prefetches pages from swap into the swap cache. The pages are added to the tail of the inactive list to preserve LRU ordering.
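The spare-RAM check can be sketched as a per-zone watermark test. This is a simplified stand-in, assuming a reduced zone structure with only the two fields the test needs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a memory zone; field names are
 * illustrative. pages_high is the zone's high watermark. */
struct zone {
	unsigned long free_pages;
	unsigned long pages_high;
};

/* Prefetching only proceeds when every zone holds at least
 * 3 * pages_high free pages, so prefetched pages never compete
 * with normal allocations for scarce RAM. */
static bool zones_have_spare_ram(const struct zone *zones, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (zones[i].free_pages < 3 * zones[i].pages_high)
			return false;
	return true;
}
```

A single zone below the threshold vetoes prefetching for the whole pass.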
Pages are prefetched until the list is empty or the VM is seen as busy according to the previously described criteria. On NUMA, node data is stored with the entries, and an appropriate zonelist based on it is used when allocating RAM.
The pages are copied to the swap cache and kept on backing store. This allows pressure on either physical RAM or swap to readily find free pages without further I/O. Because the pages are added to the tail of the inactive LRU list, if any pages must be evicted before these are used, they will be the first chosen for eviction.
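Why tail insertion makes prefetched pages the first eviction victims can be shown with a minimal doubly linked list modelled on the kernel's list_head. This is an illustrative sketch of the data structure, not the prefetch code itself:

```c
#include <assert.h>

/* Minimal circular doubly linked list, mirroring the kernel's
 * struct list_head. The head's next is the hottest (most recently
 * used) end; the head's prev is the coldest (tail) end, which is
 * where reclaim scans first. */
struct list_head {
	struct list_head *prev, *next;
};

static void list_init(struct list_head *head)
{
	head->prev = head->next = head;
}

/* Insert entry just before the head, i.e. at the tail of the list.
 * Prefetched pages added here sit at the cold end, so if memory
 * pressure returns before they are used, they are reclaimed first
 * and the rest of the LRU ordering is undisturbed. */
static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}
```

Reclaiming such a page is cheap: it is clean and already on backing store, so it can be dropped without any write-out.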