Documentation for /proc/sys/vm/*	kernel version 2.2.10
	(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.2.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:
- overcommit_memory
- overcommit_ratio
- page-cluster
- dirty_ratio
- dirty_background_ratio
- dirty_expire_centisecs
- dirty_writeback_centisecs
- max_map_count
- min_free_kbytes
- laptop_mode
- block_dump
- drop-caches
- vfs_cache_pressure
- swap_token_timeout
- hugepages_treat_as_movable
- percpu_pagelist_fraction
- zone_reclaim_mode
- min_unmapped_ratio
- min_slab_ratio
- panic_on_oom
- oom_kill_allocating_task
- mmap_min_addr
- numa_zonelist_order
- nr_hugepages
- nr_overcommit_hugepages

==============================================================

dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, vfs_cache_pressure, laptop_mode,
block_dump, swap_token_timeout, drop-caches,
hugepages_treat_as_movable:

See Documentation/filesystems/proc.txt

==============================================================

overcommit_memory

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.
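
The difference between the modes can be observed with a small test
program. The sketch below is illustrative only; it assumes a 64-bit
build and a machine with far less than 64 GiB of RAM plus swap, and the
64 GiB figure is an arbitrary example value.

/*
 * Illustrative sketch: request far more memory than the machine is
 * likely to have.  Under mode 1 the malloc() typically succeeds
 * because pages are not committed until touched; under mode 0 the
 * kernel's heuristic may refuse such an obvious overcommit; under
 * mode 2 the request is checked against the strict commit limit.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        size_t huge = 64ULL << 30;      /* 64 GiB, example value */
        void *p = malloc(huge);

        if (p)
                printf("malloc of %zu bytes succeeded (not yet backed by RAM)\n", huge);
        else
                printf("malloc of %zu bytes was refused\n", huge);

        free(p);
        return 0;
}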

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.

==============================================================

overcommit_ratio

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
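
As a worked example (the RAM, swap and ratio figures below are
illustrative, not read from a running system), the resulting limit is
swap plus overcommit_ratio percent of RAM, roughly the CommitLimit
value reported in /proc/meminfo:

/*
 * Worked example with illustrative figures: the commit limit used in
 * overcommit_memory mode 2 is swap plus overcommit_ratio percent of
 * physical RAM.
 */
#include <stdio.h>

int main(void)
{
        unsigned long long ram_kb  = 8ULL * 1024 * 1024;   /* 8 GiB RAM  */
        unsigned long long swap_kb = 2ULL * 1024 * 1024;   /* 2 GiB swap */
        unsigned int overcommit_ratio = 50;                /* percent    */

        unsigned long long limit_kb =
                swap_kb + ram_kb * overcommit_ratio / 100;

        /* 2 GiB + 50% of 8 GiB = 6 GiB of committable address space */
        printf("commit limit: %llu kB\n", limit_kb);
        return 0;
}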

==============================================================

page-cluster

The Linux VM subsystem avoids excessive disk seeks by reading
multiple pages on a page fault. The number of pages it reads
is dependent on the amount of memory in your machine.

The number of pages the kernel reads in at once is equal to
2 ^ page-cluster. Values above 2 ^ 5 don't make much sense
for swap because we only cluster swap data in 32-page groups.
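
A small sketch of the arithmetic; the page-cluster value and the 4 KiB
page size below are example assumptions.

/*
 * Sketch of the arithmetic above: 2 ^ page-cluster pages are read in
 * at once.
 */
#include <stdio.h>

int main(void)
{
        unsigned int page_cluster = 3;              /* example value    */
        unsigned long pages = 1UL << page_cluster;  /* 2 ^ page-cluster */

        printf("page-cluster = %u -> %lu pages (%lu KiB) per read\n",
               page_cluster, pages, pages * 4);
        return 0;
}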

==============================================================

max_map_count

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
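
One way to see how many map areas a process currently uses is to count
the lines of its /proc/<pid>/maps file, one line per map area. A
minimal sketch:

/*
 * Minimal sketch: each line of /proc/self/maps describes one memory
 * map area, so counting the lines gives the number of map areas this
 * process currently uses (to compare against max_map_count).
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/self/maps", "r");
        long n = 0;
        int c;

        if (!f) {
                perror("fopen");
                return 1;
        }
        while ((c = fgetc(f)) != EOF)
                if (c == '\n')
                        n++;
        fclose(f);

        printf("map areas in use: %ld\n", n);
        return 0;
}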

==============================================================

min_free_kbytes

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages based proportionally on its size.
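
A simplified sketch of the proportional split described above; the zone
sizes are example figures, and the kernel's own calculation in
mm/page_alloc.c differs in detail.

/*
 * Simplified sketch: split min_free_kbytes across the lowmem zones in
 * proportion to their size, as described in the text.
 */
#include <stdio.h>

int main(void)
{
        unsigned long min_free_kbytes = 4096;        /* example setting     */
        unsigned long dma_kb    = 16UL * 1024;       /* 16 MiB ZONE_DMA     */
        unsigned long normal_kb = 880UL * 1024;      /* 880 MiB ZONE_NORMAL */
        unsigned long lowmem_kb = dma_kb + normal_kb;

        printf("ZONE_DMA reserve:    %lu kB\n",
               min_free_kbytes * dma_kb / lowmem_kb);
        printf("ZONE_NORMAL reserve: %lu kB\n",
               min_free_kbytes * normal_kb / lowmem_kb);
        return 0;
}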

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

==============================================================

percpu_pagelist_fraction

This is the maximum fraction of pages in each zone (the high mark,
pcp->high) that can be allocated for each per cpu page list. The minimum
value for this is 8, which means that we don't allow more than 1/8th of
the pages in each zone to be allocated in any single per_cpu_pagelist.
This entry only changes the value of hot per cpu pagelists. A user can
specify a number like 100 to allocate 1/100th of each zone to each per
cpu page list.

The batch value of each per cpu pagelist is also updated as a result. It is
set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).
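
A worked example of these two values; the zone size and PAGE_SHIFT
figures are illustrative (4 KiB pages assumed).

/*
 * Worked example of pcp->high and pcp->batch as described above.
 */
#include <stdio.h>

int main(void)
{
        unsigned long zone_pages = 262144;   /* e.g. a 1 GiB zone of 4 KiB pages */
        unsigned int fraction = 100;         /* percpu_pagelist_fraction setting */
        unsigned int page_shift = 12;        /* 4 KiB pages                      */

        unsigned long high  = zone_pages / fraction;   /* pcp->high            */
        unsigned long batch = high / 4;                /* pcp->batch           */
        unsigned long batch_max = page_shift * 8;      /* upper limit on batch */

        if (batch > batch_max)
                batch = batch_max;

        printf("pcp->high = %lu, pcp->batch = %lu\n", high, batch);
        return 0;
}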

The initial value is zero. The kernel does not use this value at boot time
to set the high water marks for each per cpu page list.

===============================================================

zone_reclaim_mode

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.

This is a bitmask, built by ORing together the following values:

1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
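
The mode is simply the OR of the selected flags; for instance, enabling
zone reclaim together with swapping of pages gives 1 | 4 = 5. A trivial
sketch (the macro names below are illustrative only, not the kernel's
own identifiers):

/*
 * Trivial sketch of combining the flags above with bitwise OR.
 */
#include <stdio.h>

#define ZR_RECLAIM_ON  1   /* zone reclaim on       */
#define ZR_WRITE_DIRTY 2   /* write out dirty pages */
#define ZR_SWAP_PAGES  4   /* swap pages            */

int main(void)
{
        int mode = ZR_RECLAIM_ON | ZR_SWAP_PAGES;   /* 1 | 4 = 5 */

        printf("zone_reclaim_mode = %d\n", mode);
        return 0;
}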

zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off node pages.

It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk. In that case the caching effect is more important than
data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttles the process. This may decrease the performance of a single process
since it cannot use all of system memory to buffer the outgoing writes
anymore, but it preserves the memory on other nodes so that the performance
of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

=============================================================

min_unmapped_ratio

This is available only on NUMA kernels.

A percentage of the total pages in each zone. Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to ensure that a minimal amount of local pages is still available for
file I/O even if the node is overallocated.

The default is 1 percent.
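
Trivial arithmetic for the threshold this describes; the zone size below
is an example figure.

/*
 * Trivial arithmetic for the zone reclaim threshold described above.
 */
#include <stdio.h>

int main(void)
{
        unsigned long zone_pages = 262144;    /* example zone size     */
        unsigned int min_unmapped_ratio = 1;  /* percent (the default) */

        printf("zone reclaim needs more than %lu unmapped file-backed pages\n",
               zone_pages * min_unmapped_ratio / 100);
        return 0;
}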

=============================================================

min_slab_ratio

This is available only on NUMA kernels.

A percentage of the total pages in each zone. When zone reclaim occurs
(i.e. a fallback from the local zone occurs), slabs will be reclaimed if
more than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

=============================================================

panic_on_oom

This enables or disables the panic on out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer (oom_killer). Usually, the OOM killer can kill a rogue
process and the system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy/cpusets and those nodes run out of memory, one process may be
killed by the OOM killer and no panic occurs, because other nodes'
memory may still be free and the system as a whole may not yet be in a
fatal state.

If this is set to 2, the kernel always panics when an out-of-memory
condition occurs, even in the above-mentioned case.

The default value is 0.
Values 1 and 2 are intended for cluster failover; select the one that
matches your failover policy.

=============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping. Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By
default this value is set to 0 and no protections will be enforced by the
security module. Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
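
A quick way to see the setting in effect is to attempt a fixed,
anonymous mapping at a low address; with mmap_min_addr raised to e.g.
64k, an unprivileged process can expect the call to fail. A minimal
sketch (the 0x1000 address is just an example):

/*
 * Minimal sketch: attempt a fixed anonymous mapping at a low address.
 * With mmap_min_addr raised this is expected to fail for an
 * unprivileged process; with the default of 0 it may succeed.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

int main(void)
{
        void *addr = (void *)0x1000;
        void *p = mmap(addr, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        if (p == MAP_FAILED)
                printf("mmap at %p refused: %s\n", addr, strerror(errno));
        else
                printf("mmap at %p succeeded\n", p);
        return 0;
}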

==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
'Where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for a simpler
explanation; you may be able to read ZONE_DMA as ZONE_DMA32 below.)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below are possible zonelists for Node(0)'s
GFP_KERNEL allocations:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL is exhausted. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.

Type (B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type (A) is called "Node" order. Type (B) is called "Zone" order.

"Node" order orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone" order orders the zonelists by zone type, then by node within each
zone. Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration. Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist or
(2) if the DMA zone comprises greater than 50% of the available memory or
(3) if any node's DMA zone comprises greater than 60% of its local memory and
    the amount of local memory is big enough.

Otherwise, "zone" order will be selected. The default order is recommended
unless this is causing problems for your system/application.

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.
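
The effective upper bound on the pool is therefore the sum of the two
sysctls. A small sketch that reads both files from /proc/sys/vm and
prints the total:

/*
 * Small sketch: read nr_hugepages and nr_overcommit_hugepages from
 * /proc/sys/vm and print the resulting upper bound on the hugepage
 * pool (their sum, as described above).
 */
#include <stdio.h>

static long read_sysctl(const char *path)
{
        FILE *f = fopen(path, "r");
        long v = -1;

        if (f) {
                if (fscanf(f, "%ld", &v) != 1)
                        v = -1;
                fclose(f);
        }
        return v;
}

int main(void)
{
        long persistent = read_sysctl("/proc/sys/vm/nr_hugepages");
        long overcommit = read_sysctl("/proc/sys/vm/nr_overcommit_hugepages");

        if (persistent < 0 || overcommit < 0) {
                fprintf(stderr, "could not read hugepage sysctls\n");
                return 1;
        }
        printf("maximum hugepage pool size: %ld pages\n",
               persistent + overcommit);
        return 0;
}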

See Documentation/vm/hugetlbpage.txt