Documentation for /proc/sys/vm/*        kernel version 2.2.10
        (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

For general info and legal blurb, please look in README.

==============================================================
This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.
Currently, these files are in /proc/sys/vm:
- overcommit_memory
- overcommit_ratio
- page-cluster
- dirty_ratio
- dirty_background_ratio
- dirty_expire_centisecs
- dirty_writeback_centisecs
- highmem_is_dirtyable   (only if CONFIG_HIGHMEM set)
- vfs_cache_pressure
- laptop_mode
- block_dump
- swap_token_timeout
- drop-caches
- hugepages_treat_as_movable
- max_map_count
- min_free_kbytes
- percpu_pagelist_fraction
- zone_reclaim_mode
- min_unmapped_ratio
- min_slab_ratio
- panic_on_oom
- oom_dump_tasks
- oom_kill_allocating_task
- mmap_min_addr
- numa_zonelist_order
- nr_hugepages
- nr_overcommit_hugepages

==============================================================
dirty_ratio, dirty_background_ratio, dirty_expire_centisecs,
dirty_writeback_centisecs, highmem_is_dirtyable,
vfs_cache_pressure, laptop_mode, block_dump, swap_token_timeout,
drop-caches, hugepages_treat_as_movable:

See Documentation/filesystems/proc.txt

==============================================================
overcommit_memory

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.
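
The practical effect is easiest to see with a program that reserves far
more memory than it touches.  The sketch below is illustrative only and
assumes a 64-bit system: with overcommit_memory at 0 or 1 the malloc()
normally succeeds because the untouched pages consume no real memory,
while under the strict setting 2 it fails unless swap plus the allowed
share of RAM can actually cover the whole reservation.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t just_in_case = (size_t)8 << 30;  /* 8 GiB, mostly unused */
        char *buf = malloc(just_in_case);

        if (!buf) {
            perror("malloc");       /* likely with overcommit_memory=2 */
            return 1;
        }
        buf[0] = 1;                 /* only the first page is touched */
        printf("reserved %zu bytes, used almost none of them\n",
               just_in_case);
        free(buf);
        return 0;
    }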
==============================================================

overcommit_ratio

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
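
In other words the limit is swap + physical RAM * overcommit_ratio / 100,
which is also what the CommitLimit line of /proc/meminfo reports.  A
minimal sketch of the arithmetic with made-up sizes (the default
overcommit_ratio is 50):

    #include <stdio.h>

    int main(void)
    {
        /* Example figures only -- not read from a real system. */
        unsigned long swap_kb          = 2UL * 1024 * 1024;  /* 2 GiB swap */
        unsigned long ram_kb           = 4UL * 1024 * 1024;  /* 4 GiB RAM  */
        unsigned long overcommit_ratio = 50;                 /* default    */

        unsigned long commit_limit_kb =
                swap_kb + ram_kb * overcommit_ratio / 100;

        printf("CommitLimit = %lu kB\n", commit_limit_kb);   /* 4 GiB here */
        return 0;
    }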
==============================================================

page-cluster

The Linux VM subsystem avoids excessive disk seeks by reading
multiple pages on a page fault. The number of pages it reads
is dependent on the amount of memory in your machine.

The number of pages the kernel reads in at once is equal to
2 ^ page-cluster. Values above 2 ^ 5 don't make much sense
for swap because we only cluster swap data in 32-page groups.
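
For example, the default page-cluster value of 3 means 2^3 = 8 pages
(32 KiB with 4 KiB pages) are read in at once.  A small sketch of the
relationship, assuming 4 KiB pages:

    #include <stdio.h>

    int main(void)
    {
        for (int page_cluster = 0; page_cluster <= 5; page_cluster++) {
            unsigned long pages = 1UL << page_cluster; /* 2 ^ page-cluster */
            printf("page-cluster=%d -> %2lu pages (%3lu KiB per fault)\n",
                   page_cluster, pages, pages * 4);
        }
        return 0;
    }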
==============================================================

max_map_count

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.
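
One way to see how many map areas a process is currently using is to
count the entries in /proc/self/maps; that count is what max_map_count
bounds.  A minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        int c, maps = 0;

        if (!f) {
            perror("/proc/self/maps");
            return 1;
        }
        while ((c = fgetc(f)) != EOF)
            if (c == '\n')
                maps++;             /* one line per map area */
        fclose(f);
        printf("this process is using %d map areas\n", maps);
        return 0;
    }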
==============================================================

min_free_kbytes

This is used to force the Linux VM to keep a minimum number
of kilobytes free. The VM uses this number to compute a pages_min
value for each lowmem zone in the system. Each lowmem zone gets
a number of reserved free pages, in proportion to its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024 KB, your system will
become subtly broken and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.
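
As a toy illustration of the proportional split (all sizes below are
made up, not read from a real system), the kilobyte figure is converted
to pages and divided between the lowmem zones according to their size:

    #include <stdio.h>

    int main(void)
    {
        unsigned long zone_pages[] = { 4096, 221184 };   /* DMA, Normal */
        const char   *zone_name[]  = { "DMA", "Normal" };
        unsigned long min_free_kb  = 8192;               /* example value */
        unsigned long min_free_pages = min_free_kb / 4;  /* 4 KiB pages */
        unsigned long lowmem_pages = zone_pages[0] + zone_pages[1];

        for (int i = 0; i < 2; i++)
            printf("zone %-6s pages_min = %lu pages\n", zone_name[i],
                   min_free_pages * zone_pages[i] / lowmem_pages);
        return 0;
    }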
==============================================================

percpu_pagelist_fraction

This is the fraction of pages in each zone that may, at most, be
allocated to any single per-CPU page list (the high watermark,
pcp->high). The minimum value for this is 8, meaning that we don't
allow more than 1/8th of the pages in each zone to be allocated to
any single per_cpu_pagelist. This entry only changes the value of hot
per-CPU pagelists. A user can specify a number like 100 to allocate
1/100th of each zone to each per-CPU page list.

The batch value of each per-CPU pagelist is also updated as a result.
It is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero. The kernel does not use this value at boot
time to set the high water marks for each per-CPU page list.
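
To make the arithmetic concrete, here is a sketch with made-up numbers;
the clamp on batch follows the (PAGE_SHIFT * 8) limit quoted above,
assuming 4 KiB pages (PAGE_SHIFT = 12):

    #include <stdio.h>

    int main(void)
    {
        unsigned long zone_pages = 1000000;   /* example zone size     */
        unsigned long fraction   = 100;       /* 1/100th of the zone   */
        unsigned long page_shift = 12;        /* 4 KiB pages           */

        unsigned long high  = zone_pages / fraction;  /* pcp->high     */
        unsigned long batch = high / 4;               /* derived batch */

        if (batch > page_shift * 8)                   /* upper limit   */
            batch = page_shift * 8;

        printf("pcp->high = %lu pages, batch = %lu pages\n", high, batch);
        return 0;
    }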
==============================================================

zone_reclaim_mode

Zone_reclaim_mode allows someone to set more or less aggressive
approaches to reclaim memory when a zone runs out of memory. If it is
set to zero then no zone reclaim occurs. Allocations will be satisfied
from other zones / nodes instead.

This is a value ORed together of:

1	= Zone reclaim on
2	= Zone reclaim writes dirty pages out
4	= Zone reclaim swaps pages

zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off node pages.

It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk. In that case the caching effect is more important than
data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttles the process. This may decrease the performance of a single
process since it cannot use all of system memory to buffer the outgoing
writes anymore, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be
affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
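
Because the file holds a bitmask, the values above are combined by ORing
them together.  A small sketch (the macro names here are illustrative,
not kernel identifiers):

    #include <stdio.h>

    #define ZONE_RECLAIM_ON     1   /* zone reclaim enabled              */
    #define ZONE_RECLAIM_WRITE  2   /* reclaim may write out dirty pages */
    #define ZONE_RECLAIM_SWAP   4   /* reclaim may swap pages            */

    int main(void)
    {
        /* Enable zone reclaim and let it write out dirty pages: 1|2 = 3 */
        int mode = ZONE_RECLAIM_ON | ZONE_RECLAIM_WRITE;

        printf("echo %d > /proc/sys/vm/zone_reclaim_mode\n", mode);
        return 0;
    }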
==============================================================

min_unmapped_ratio

This is available only on NUMA kernels.

A percentage of the total pages in each zone. Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to ensure that a minimal amount of local pages is still available
for file I/O even if the node is overallocated.

The default is 1 percent.
==============================================================

min_slab_ratio

This is available only on NUMA kernels.

A percentage of the total pages in each zone. On zone reclaim
(i.e. when a fallback from the local zone occurs), slabs will be
reclaimed if more than this percentage of pages in a zone are
reclaimable slab pages. This ensures that the slab growth stays under
control even in NUMA systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.
==============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
oom_killer. Usually, the oom_killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy or cpusets, and those nodes run out of memory, one process
may be killed by the oom-killer. No panic occurs in this case, because
other nodes' memory may still be free and the system as a whole may not
yet be in a fatal state.

If this is set to 2, the kernel always panics when an out-of-memory
condition occurs, even in the situation described above.

The default value is 0.
Values of 1 and 2 are intended for failover of clustering. Please select
either according to your policy of failover.
==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing. It includes such
information as pid, uid, tgid, vm size, rss, cpu, oom_adj score, and
name. This is helpful to determine why the OOM killer was invoked
and to identify the rogue task that caused it.

If this is set to zero, this information is suppressed. On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one. Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 0.
==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.
==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process
will be restricted from mmapping. Since kernel null dereference bugs
could accidentally operate based on the information in the first couple
of pages of memory, userspace processes should not be allowed to write
to them. By default this value is set to 0 and no protections will be
enforced by the security module. Setting this value to something like
64k will allow the vast majority of applications to work correctly and
provide defense in depth against future potential kernel bugs.
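
The effect is easy to observe: with mmap_min_addr above zero, an
unprivileged attempt to map page zero is refused.  A minimal sketch
(expect the mmap() below to fail unless the limit is 0 or the process
has the required capability):

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Try to map the page at address zero. */
        void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        if (p == MAP_FAILED)
            perror("mmap of page zero");   /* refused by mmap_min_addr */
        else
            printf("page zero mapped at %p (is mmap_min_addr 0?)\n", p);
        return 0;
    }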
==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
'Where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for a simple
explanation; you may read ZONE_DMA as ZONE_DMA32 where appropriate.)

In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as follows:
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below are possible zonelists for Node(0)'s
GFP_KERNEL allocations:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before all of ZONE_NORMAL is exhausted. This increases the
possibility of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends
to be small.

Type (B) cannot offer the best locality but is more robust against OOM
of the DMA zone.

Type (A) is called "Node" order. Type (B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone. Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration. Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist, or
(2) if the DMA zone comprises greater than 50% of the available memory, or
(3) if any node's DMA zone comprises greater than 60% of its local memory
    and the amount of local memory is big enough.

Otherwise, "zone" order will be selected. The default order is recommended
unless this is causing problems for your system/application.
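
The ordering can be changed at run time by writing one of the strings
above to the sysctl file, for example (a minimal sketch; requires root
on a NUMA kernel, and writing "default" requests autoconfiguration):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/numa_zonelist_order", "w");

        if (!f) {
            perror("/proc/sys/vm/numa_zonelist_order");
            return 1;
        }
        fputs("zone\n", f);     /* select "zone" ordering */
        fclose(f);
        return 0;
    }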
==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt