Hugetlb pages are incredibly architecture-specific at the Memory Management Unit (MMU) level. For example, every architecture has a different way of handling the page table entries. This leads to some tricky rules and restrictions on what can be done with huge pages. The following sections describe these architecture limitations on a per-architecture basis.

== PowerPC ==

The powerpc architecture makes use of segmentation to reduce the flushing of virtual-to-physical address translations. Each segment can contain only one page size, so the page size granularity is the segment size: 256MB for addresses below 4GB, and 1TB for 64-bit addresses above 4GB.

The available page sizes are: 4K, 64K, 16M, and 16G(!)

== ia64 ==

On ia64, the 64-bit virtual address space is divided into eight equally sized regions. Associated with each region is a control register that specifies the page size for that region, as well as an address space number (for implementing multiple address spaces). This region-based page size and address space number are used as an integral part of virtual-to-physical TLB translation, assisted by the VHPT (virtually hashed page table -- an extension of the processor's TLB that resides in memory and is automatically searched for translations by the processor).

linux-ia64 currently uses the VHPT mode that restricts each region to one page size, and it assigns one region for hugetlb use. All other regions are configured to use the base page size. In this mode, if the VHPT finds the translation, it uses the page size assigned to the region the translation is in. In the other format, an explicit page size is kept with each translation. This makes the translation entry larger, and thus uses up more cache. Finding an effective trade-off between TLB, cache and VHPT performance is a current research issue (see [http://www.gelato.unsw.edu.au/~ianw/litreview/litreview.pdf A survey of large page support] for more information).

The available page sizes are: 4K, 8K, 16K, 64K, 256K, 1M, 4M, 16M, 64M, 256M, 1G, 4G

== x86 / x86-64 ==

The x86 architecture has a relatively simple method of doing huge pages: in the page table tree, the lowest tier of page table entries is simply consolidated into the "one level above that" PTE entry, with a special bit set. This means that there are effectively three requirements on huge pages (a userspace sketch illustrating these constraints follows at the end of this section):

* The size is fixed at 2MB (x86-64 and x86 with PAE) or 4MB (x86 without PAE). PAE is the 64-bit page table entry support for x86 that allows x86 to support more than 4GB of physical memory.
* The virtual address space alignment is also 2MB/4MB (the same as the page size).
* The physical page alignment is also 2MB/4MB (this is no issue; the buddy allocator already makes sure of this).

There has been talk from AMD at the last kernel summit about going to 1GB pages in addition to the 2MB/4MB ones. This would probably be a logical extension of the concept above, with a "one level up" consolidation bit in the page table, nothing stunning otherwise.
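To make the x86/x86-64 size and alignment constraints above concrete, here is a minimal userspace sketch that maps a single huge page from a hugetlbfs mount. It assumes a 2MB huge page size, that huge pages have already been reserved via /proc/sys/vm/nr_hugepages, and that hugetlbfs is mounted at /mnt/huge; the mount point and file name are illustrative only, not anything mandated by the kernel.

{{{
/*
 * Sketch: map one 2MB huge page through a file on a hugetlbfs mount.
 * Assumes hugetlbfs is mounted at /mnt/huge and that huge pages have
 * been reserved (e.g. via /proc/sys/vm/nr_hugepages); the path and
 * file name are made up for this example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)   /* 2MB on x86-64 */

int main(void)
{
        int fd = open("/mnt/huge/example", O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /*
         * The mapping length must be a multiple of the huge page
         * size; a 4KB-sized request against hugetlbfs would fail.
         */
        void *p = mmap(NULL, HUGE_PAGE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return 1;
        }

        memset(p, 0, HUGE_PAGE_SIZE);        /* touch the huge page */

        munmap(p, HUGE_PAGE_SIZE);
        close(fd);
        return 0;
}
}}}

The length (and, for fixed mappings, the address) handed to mmap() has to line up with the huge page size; that is the user-visible face of the "size is fixed to 2MB/4MB" and alignment rules listed above.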