Memory pressure

Operating systems have many consumers of memory: user allocations, file caches, network buffers, etc. Memory pressure happens when there is a shortage of memory. It represents the work that Linux (or any other OS) does in order to manage and shuffle memory around to satisfy the system's many users.


Memory pressure arises when someone needs memory. Usually, any free memory will do. At times, though, more specialized memory is needed, and you can see pressure even when there is plenty of free memory of other kinds. A few examples of these special needs are DMA-capable memory, physically contiguous memory for large pages, "low" memory, and memory on a particular NUMA node.

A common mistake is assuming that having any free memory means that there is no pressure.
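One quick way to see this on a Linux system: /proc/meminfo reports the system-wide free total, while /proc/buddyinfo breaks free memory down per zone and per allocation order. A machine can report plenty of MemFree while having no large physically contiguous blocks left.

```shell
# MemFree is the system-wide total of free memory.
grep MemFree /proc/meminfo

# buddyinfo lists free blocks per node, per zone, per order
# (4kB, 8kB, 16kB, ... on x86). Zeroes in the rightmost columns
# mean no large contiguous blocks are free, even when MemFree
# looks healthy.
cat /proc/buddyinfo
```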


What happens because of memory pressure?

The kernel has to do work in order to make memory available. In Linux, that means scanning and reclaiming memory: writing out dirty data, throwing away previously cached data, or swapping memory out.
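Each of those actions shows up as counters in /proc/vmstat (a sketch; exact counter names vary somewhat between kernel versions):

```shell
# pgpgout counts pages written to disk (not only reclaim writeback),
# the pgsteal counters count pages reclaimed from caches, and
# pswpout counts pages swapped out.
grep -E '^(pgpgout|pgsteal|pswpout)' /proc/vmstat
```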

How do you tell you are under memory pressure?

Look at the entries in /proc/vmstat. Virtually all the entries with "scan" in their names are reacting to memory pressure. These counters also have per-zone counterparts in /proc/zoneinfo.
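For example (the particular pgscan counter names differ between kernel versions), sampling the scan counters twice shows whether reclaim is active right now:

```shell
# Take two samples of the "scan" counters a few seconds apart; any
# counter that grows in between means the kernel is scanning for
# reclaimable memory, i.e. the system is under some memory pressure.
grep scan /proc/vmstat
sleep 5
grep scan /proc/vmstat
```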

How do I find the source of the memory pressure?

This is trickier. The /sys/kernel/debug/tracing/events/kmem/mm_page_alloc trace point is probably a good one to watch.
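A sketch of watching that tracepoint (requires root and a kernel with tracing compiled in; the path assumes debugfs is mounted at /sys/kernel/debug, though newer kernels also expose the same tree at /sys/kernel/tracing):

```shell
# Pick whichever tracing directory this kernel provides.
TRACING=/sys/kernel/debug/tracing
[ -d /sys/kernel/tracing ] && TRACING=/sys/kernel/tracing

if [ -w "$TRACING/events/kmem/mm_page_alloc/enable" ]; then
    # Turn the tracepoint on, watch allocations for a few seconds,
    # then turn it back off. Each line shows which task allocated,
    # the page order, and the GFP flags used.
    echo 1 > "$TRACING/events/kmem/mm_page_alloc/enable"
    timeout 5 cat "$TRACING/trace_pipe" || true
    echo 0 > "$TRACING/events/kmem/mm_page_alloc/enable"
else
    echo "need root and tracefs/debugfs mounted" >&2
fi
```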

How does NUMA interact with memory pressure?

When Linux's NUMA support is enabled, it essentially breaks the system into pieces and manages them separately. Let's say you have a 64GB NUMA system with 2 nodes so that each has 32GB of memory. Since your system is effectively in pieces, there are a number of situations where your computer behaves like two systems with 32GB rather than one with 64GB.
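On a NUMA-enabled kernel you can see the per-node split directly (single-node machines simply show node0):

```shell
# Each node directory has its own meminfo; on the 2x32GB example
# above, each node would report roughly 32GB of MemTotal.
grep -E 'MemTotal|MemFree' /sys/devices/system/node/node*/meminfo
```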

It is quite possible (and normal) for an individual NUMA node to be under memory pressure while the system as a whole, or the other nodes, are under no pressure at all. The vm.zone_reclaim_mode tunable largely controls how the kernel reacts when there is pressure on a single NUMA node.
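To check the current setting and see how allocations are being spread across nodes (a sketch; both files exist only on NUMA-enabled kernels):

```shell
# 0 means allocations fall back to other nodes before reclaiming
# locally; nonzero values make the kernel reclaim on the local node
# first.
cat /proc/sys/vm/zone_reclaim_mode

# Growing numa_miss / numa_foreign values here mean allocations are
# spilling between nodes; often the first sign of single-node pressure.
cat /sys/devices/system/node/node0/numastat
```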

last edited 2013-10-18 21:09:37 by DaveHansen