2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version pre-2.8.4 Mon Nov 27 11:22:37 2006 (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 This library is all in one file to simplify the most common usage:
16 ftp it, compile it (-O3), and link it into another program. All of
17 the compile-time options default to reasonable values for use on
18 most platforms. You might later want to step through various
19 compile-time and dynamic tuning options.
21 For convenience, an include file for code using this malloc is at:
22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
23 You don't really need this .h file unless you call functions not
24 defined in your system include files. The .h file contains only the
25 excerpts from this file needed for using this malloc on ANSI C/C++
26 systems, so long as you haven't changed compile-time options about
27 naming and tuning parameters. If you do, then you can create your
28 own malloc.h that does include all settings by cutting at the point
29 indicated below. Note that you may already by default be using a C
30 library containing a malloc that is based on some version of this
31 malloc (for example in linux). You might still want to use the one
32 in this file to customize settings or to avoid overheads associated
33 with library versions.
37 Supported pointer/size_t representation: 4 or 8 bytes
38 size_t MUST be an unsigned type of the same width as
39 pointers. (If you are using an ancient system that declares
40 size_t as a signed type, or need it to be a different width
41 than pointers, you can use a previous release of this malloc
42 (e.g. 2.7.2) supporting these.)
44 Alignment: 8 bytes (default)
45 This suffices for nearly all current machines and C compilers.
46 However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.
Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
                                      8 or 16 bytes (if 8-byte sizes)
51 Each malloced chunk has a hidden word of overhead holding size
52 and status information, and additional cross-check word
53 if FOOTERS is defined.
55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56 8-byte ptrs: 32 bytes (including overhead)
58 Even a request for zero bytes (i.e., malloc(0)) returns a
59 pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., the number of extra bytes
allocated beyond what was requested in malloc) is less than or equal
62 to the minimum size, except for requests >= mmap_threshold that
63 are serviced via mmap(), where the worst case wastage is about
64 32 bytes plus the remainder from a system page (the minimal
65 mmap unit); typically 4096 or 8192 bytes.
67 Security: static-safe; optionally more or less
68 The "security" of malloc refers to the ability of malicious
69 code to accentuate the effects of errors (for example, freeing
70 space that is not currently malloc'ed or overwriting past the
71 ends of chunks) in code that calls malloc. This malloc
72 guarantees not to modify any memory locations below the base of
73 heap, i.e., static variables, even in the presence of usage
74 errors. The routines additionally detect most improper frees
75 and reallocs. All this holds as long as the static bookkeeping
76 for malloc itself is not corrupted by some other means. This
77 is only one aspect of security -- these checks do not, and
78 cannot, detect all possible programming errors.
80 If FOOTERS is defined nonzero, then each allocated chunk
81 carries an additional check word to verify that it was malloced
82 from its space. These check words are the same within each
83 execution of a program using malloc, but differ across
84 executions, so externally crafted fake chunks cannot be
85 freed. This improves security by rejecting frees/reallocs that
86 could corrupt heap memory, in addition to the checks preventing
87 writes to statics that are always on. This may further improve
88 security at the expense of time and space overhead. (Note that
89 FOOTERS may also be worth using with MSPACES.)
91 By default detected errors cause the program to abort (calling
92 "abort()"). You can override this to instead proceed past
93 errors by defining PROCEED_ON_ERROR. In this case, a bad free
94 has no effect, and a malloc that encounters a bad address
95 caused by user overwrites will ignore the bad address by
96 dropping pointers and indices to all known memory. This may
97 be appropriate for programs that should continue if at all
98 possible in the face of programming errors, although they may
99 run out of memory because dropped memory is never reclaimed.
101 If you don't like either of these options, you can define
102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103 else. And if you are sure that your program using malloc has
104 no errors or vulnerabilities, you can define INSECURE to 1,
105 which might (or might not) provide a small performance improvement.
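For example, a minimal sketch of overriding both actions before compiling
this file (this assumes, as in the internal defaults, that the macros are
invoked with the malloc state and, for usage errors, the offending pointer
as arguments; write() from <unistd.h> is used because stdio may itself
call malloc):

  #define CORRUPTION_ERROR_ACTION(m)  (write(2, "heap corruption\n", 16), abort())
  #define USAGE_ERROR_ACTION(m, p)    (write(2, "bad free/realloc\n", 17), abort())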
107 Thread-safety: NOT thread-safe unless USE_LOCKS defined
108 When USE_LOCKS is defined, each public call to malloc, free,
109 etc is surrounded with either a pthread mutex or a win32
110 spinlock (depending on WIN32). This is not especially fast, and
111 can be a major bottleneck. It is designed only to provide
112 minimal protection in concurrent environments, and to provide a
113 basis for extensions. If you are using malloc in a concurrent
114 program, consider instead using nedmalloc
115 (http://www.nedprod.com/programs/portable/nedmalloc/) or
116 ptmalloc (See http://www.malloc.de), which are derived
117 from versions of this malloc.
119 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
120 This malloc can use unix sbrk or any emulation (invoked using
121 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
122 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
123 memory. On most unix systems, it tends to work best if both
124 MORECORE and MMAP are enabled. On Win32, it uses emulations
based on VirtualAlloc. It also uses common C library functions
like memset.
128 Compliance: I believe it is compliant with the Single Unix Specification
(See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
others as well.
132 * Overview of algorithms
134 This is not the fastest, most space-conserving, most portable, or
135 most tunable malloc ever written. However it is among the fastest
136 while also being among the most space-conserving, portable and
137 tunable. Consistent balance across these factors results in a good
138 general-purpose allocator for malloc-intensive programs.
140 In most ways, this malloc is a best-fit allocator. Generally, it
141 chooses the best-fitting existing chunk for a request, with ties
142 broken in approximately least-recently-used order. (This strategy
143 normally maintains low fragmentation.) However, for requests less
than 256 bytes, it deviates from best-fit when there is not an
145 exactly fitting available chunk by preferring to use space adjacent
146 to that used for the previous small request, as well as by breaking
147 ties in approximately most-recently-used order. (These enhance
148 locality of series of small allocations.) And for very large requests
149 (>= 256Kb by default), it relies on system memory mapping
150 facilities, if supported. (This helps avoid carrying around and
151 possibly fragmenting memory used only for large chunks.)
153 All operations (except malloc_stats and mallinfo) have execution
154 times that are bounded by a constant factor of the number of bits in
155 a size_t, not counting any clearing in calloc or copying in realloc,
156 or actions surrounding MORECORE and MMAP that have times
157 proportional to the number of non-contiguous regions returned by
158 system allocation routines, which is often just 1. In real-time
159 applications, you can optionally suppress segment traversals using
160 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
161 system allocators return non-contiguous spaces, at the typical
162 expense of carrying around more memory and increased fragmentation.
164 The implementation is not very modular and seriously overuses
165 macros. Perhaps someday all C compilers will do as good a job
166 inlining modular code as can now be done by brute-force expansion,
167 but now, enough of them seem not to.
169 Some compilers issue a lot of warnings about code that is
170 dead/unreachable only on some platforms, and also about intentional
uses of negation on unsigned types. All known cases of each can be
ignored.
174 For a longer but out of date high-level description, see
175 http://gee.cs.oswego.edu/dl/html/malloc.html
178 If MSPACES is defined, then in addition to malloc, free, etc.,
179 this file also defines mspace_malloc, mspace_free, etc. These
180 are versions of malloc routines that take an "mspace" argument
181 obtained using create_mspace, to control all internal bookkeeping.
182 If ONLY_MSPACES is defined, only these versions are compiled.
183 So if you would like to use this allocator for only some allocations,
184 and your system malloc for others, you can compile with
185 ONLY_MSPACES and then do something like...
186 static mspace mymspace = create_mspace(0,0); // for example
187 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
189 (Note: If you only need one instance of an mspace, you can instead
190 use "USE_DL_PREFIX" to relabel the global malloc.)
192 You can similarly create thread-local allocators by storing
193 mspaces as thread-locals. For example:
194 static __thread mspace tlms = 0;
195 void* tlmalloc(size_t bytes) {
196 if (tlms == 0) tlms = create_mspace(0, 0);
  return mspace_malloc(tlms, bytes);
}
void tlfree(void* mem) { mspace_free(tlms, mem); }
201 Unless FOOTERS is defined, each mspace is completely independent.
202 You cannot allocate from one and free to another (although
203 conformance is only weakly checked, so usage errors are not always
204 caught). If FOOTERS is defined, then each chunk carries around a tag
indicating its originating mspace, and frees are directed to their
originating spaces.
208 ------------------------- Compile-time options ---------------------------
210 Be careful in setting #define values for numerical constants of type
211 size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
213 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
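For example, a 1MB compile-time mmap threshold is safest written with
explicit size_t arithmetic (illustrative only):

  #define DEFAULT_MMAP_THRESHOLD ((size_t)1024U * (size_t)1024U)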
215 WIN32 default: defined if _WIN32 defined
216 Defining WIN32 sets up defaults for MS environment and compilers.
217 Otherwise defaults are for unix. Beware that there seem to be some
218 cases where this malloc might not be a pure drop-in replacement for
Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
220 SetDIBits()) may be due to bugs in some video driver implementations
221 when pixel buffers are malloc()ed, and the region spans more than
222 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
223 default granularity, pixel buffers may straddle virtual allocation
224 regions more often than when using the Microsoft allocator. You can
225 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
226 buffers rather than using malloc(). If this is not possible,
227 recompile this malloc with a larger DEFAULT_GRANULARITY.
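For example (illustrative only; any power of two of at least one page
works):

  #define DEFAULT_GRANULARITY ((size_t)1024U * (size_t)1024U)   /* 1MB regions */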
229 MALLOC_ALIGNMENT default: (size_t)8
230 Controls the minimum alignment for malloc'ed chunks. It must be a
231 power of two and at least 8, even on machines for which smaller
232 alignments would suffice. It may be defined as larger than this
233 though. Note however that code and data structures are optimized for
234 the case of 8-byte alignment.
236 MSPACES default: 0 (false)
237 If true, compile in support for independent allocation spaces.
238 This is only supported if HAVE_MMAP is true.
240 ONLY_MSPACES default: 0 (false)
241 If true, only compile in mspace versions, not regular versions.
243 USE_LOCKS default: 0 (false)
244 Causes each call to each public routine to be surrounded with
245 pthread or WIN32 mutex lock/unlock. (If set true, this can be
246 overridden on a per-mspace basis for mspace versions.) If set to a
247 non-zero value other than 1, locks are used, but their
248 implementation is left out, so lock functions must be supplied manually.
250 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and on x86 using gcc or MSC
251 If true, uses custom spin locks for locking. This is currently
252 supported only for x86 platforms using gcc or recent MS compilers.
253 Otherwise, posix locks or win32 critical sections are used.
FOOTERS default: 0
If true, provide extra checking and dispatching by placing
257 information in the footers of allocated chunks. This adds
258 space and time overhead.
INSECURE default: 0
If true, omit checks for usage errors and heap space overwrites.
263 USE_DL_PREFIX default: NOT defined
264 Causes compiler to prefix all public routines with the string 'dl'.
265 This can be useful when you only want to use this malloc in one part
266 of a program, using your regular system malloc elsewhere.
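For example, compiling this file with -DUSE_DL_PREFIX and then, in client
code:

  void* p = dlmalloc(100);   /* served by this allocator */
  void* q = malloc(100);     /* served by the system allocator */
  dlfree(p);
  free(q);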
268 ABORT default: defined as abort()
269 Defines how to abort on failed checks. On most systems, a failed
270 check cannot die with an "assert" or even print an informative
271 message, because the underlying print routines in turn call malloc,
272 which will fail again. Generally, the best policy is to simply call
273 abort(). It's not very useful to do more than this because many
274 errors due to overwriting will show up as address faults (null, odd
275 addresses etc) rather than malloc-triggered checks, so will also
276 abort. Also, most compilers know that abort() does not return, so
277 can better optimize code conditionally calling it.
279 PROCEED_ON_ERROR default: defined as 0 (false)
Controls whether detected bad addresses cause them to be bypassed
281 rather than aborting. If set, detected bad arguments to free and
282 realloc are ignored. And all bookkeeping information is zeroed out
283 upon a detected overwrite of freed heap space, thus losing the
284 ability to ever return it from malloc again, but enabling the
285 application to proceed. If PROCEED_ON_ERROR is defined, the
286 static variable malloc_corruption_error_count is compiled in
287 and can be examined to see if errors have occurred. This option
288 generates slower code than the default abort policy.
290 DEBUG default: NOT defined
291 The DEBUG setting is mainly intended for people trying to modify
292 this code or diagnose problems when porting to new platforms.
293 However, it may also be able to better isolate user errors than just
294 using runtime checks. The assertions in the check routines spell
295 out in more detail the assumptions and invariants underlying the
296 algorithms. The checking is fairly extensive, and will slow down
297 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
298 set will attempt to check every non-mmapped allocated and free chunk
299 in the course of computing the summaries.
301 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
302 Debugging assertion failures can be nearly impossible if your
303 version of the assert macro causes malloc to be called, which will
304 lead to a cascade of further failures, blowing the runtime stack.
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
306 which will usually make debugging easier.
308 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
The action to take before "return 0" when malloc cannot return
memory because none is available.
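For example, a sketch that also counts failures (my_oom_count is
illustrative, not part of this file):

  #define MALLOC_FAILURE_ACTION { errno = ENOMEM; ++my_oom_count; }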
312 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
313 True if this system supports sbrk or an emulation of it.
315 MORECORE default: sbrk
316 The name of the sbrk-style system routine to call to obtain more
317 memory. See below for guidance on writing custom MORECORE
318 functions. The type of the argument to sbrk/MORECORE varies across
319 systems. It cannot be size_t, because it supports negative
320 arguments, so it is normally the signed type of the same width as
321 size_t (sometimes declared as "intptr_t"). It doesn't much matter
322 though. Internally, we only call it with arguments less than half
323 the max value of a size_t, which should work across all reasonable
324 possibilities, although sometimes generating compiler warnings.
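For example, a minimal sketch of a MORECORE emulation backed by a static
arena (all names are illustrative; see the guidance mentioned above for a
fuller treatment):

  static char   my_arena[64*1024*1024];
  static char*  my_brk = my_arena;
  void* my_morecore(ptrdiff_t increment) {
    if (increment < 0 ||
        (size_t)(my_brk - my_arena) + (size_t)increment > sizeof(my_arena))
      return (void*)(-1);            /* same failure value as sbrk */
    char* old = my_brk;
    my_brk += increment;
    return old;
  }
  /* compile with -DMORECORE=my_morecore -DMORECORE_CANNOT_TRIM */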
326 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
327 If true, take advantage of fact that consecutive calls to MORECORE
328 with positive arguments always return contiguous increasing
329 addresses. This is true of unix sbrk. It does not hurt too much to
330 set it true anyway, since malloc copes with non-contiguities.
331 Setting it false when definitely non-contiguous saves time
332 and possibly wasted space it would take to discover this though.
334 MORECORE_CANNOT_TRIM default: NOT defined
335 True if MORECORE cannot release space back to the system when given
336 negative arguments. This is generally necessary only if you are
using a hand-crafted MORECORE function that cannot handle negative
arguments.
340 NO_SEGMENT_TRAVERSAL default: 0
341 If non-zero, suppresses traversals of memory segments
342 returned by either MORECORE or CALL_MMAP. This disables
343 merging of segments that are contiguous, and selectively
344 releasing them to the OS if unused, but bounds execution times.
346 HAVE_MMAP default: 1 (true)
347 True if this system supports mmap or an emulation of it. If so, and
348 HAVE_MORECORE is not true, MMAP is used for all system
349 allocation. If set and HAVE_MORECORE is true as well, MMAP is
350 primarily used to directly allocate very large blocks. It is also
351 used as a backup strategy in cases where MORECORE fails to provide
352 space from system. Note: A single call to MUNMAP is assumed to be
able to unmap memory that may have been allocated using multiple calls
354 to MMAP, so long as they are adjacent.
356 HAVE_MREMAP default: 1 on linux, else 0
357 If true realloc() uses mremap() to re-allocate large blocks and
358 extend or shrink allocation spaces.
360 MMAP_CLEARS default: 1 except on WINCE.
361 True if mmap clears memory so calloc doesn't need to. This is true
362 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
364 USE_BUILTIN_FFS default: 0 (i.e., not used)
365 Causes malloc to use the builtin ffs() function to compute indices.
366 Some compilers may recognize and intrinsify ffs to be faster than the
367 supplied C version. Also, the case of x86 using gcc is special-cased
368 to an asm instruction, so is already as fast as it can be, and so
369 this setting has no effect. Similarly for Win32 under recent MS compilers.
370 (On most x86s, the asm version is only slightly faster than the C version.)
372 malloc_getpagesize default: derive from system includes, or 4096.
373 The system page size. To the extent possible, this malloc manages
374 memory from the system in page-size units. This may be (and
375 usually is) a function rather than a constant. This is ignored
if WIN32, where page size is determined using getSystemInfo during
initialization.
379 USE_DEV_RANDOM default: 0 (i.e., not used)
380 Causes malloc to use /dev/random to initialize secure magic seed for
381 stamping footers. Otherwise, the current time is used.
383 NO_MALLINFO default: 0
384 If defined, don't compile "mallinfo". This can be a simple way
of dealing with mismatches between system declarations and those
in this file.
388 MALLINFO_FIELD_TYPE default: size_t
389 The type of the fields in the mallinfo struct. This was originally
390 defined as "int" in SVID etc, but is more usefully defined as
size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
393 REALLOC_ZERO_BYTES_FREES default: not defined
394 This should be set if a call to realloc with zero bytes should
395 be the same as a call to free. Some people think it should. Otherwise,
since this malloc returns a unique pointer for malloc(0), so does
realloc(p, 0).
399 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
400 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
401 LACKS_STDLIB_H default: NOT defined unless on WIN32
402 Define these if your system does not have these header files.
403 You might need to manually insert some of the declarations they provide.
405 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
system_info.dwAllocationGranularity in WIN32,
otherwise 64K.
Also settable using mallopt(M_GRANULARITY, x)
409 The unit for allocating and deallocating memory from the system. On
410 most systems with contiguous MORECORE, there is no reason to
411 make this more than a page. However, systems with MMAP tend to
412 either require or encourage larger granularities. You can increase
this value to prevent system allocation functions from being called so
414 often, especially if they are slow. The value must be at least one
415 page and must be a power of two. Setting to 0 causes initialization
416 to either page size or win32 region size. (Note: In previous
versions of malloc, the equivalent of this option was called "TOP_PAD".)
420 DEFAULT_TRIM_THRESHOLD default: 2MB
421 Also settable using mallopt(M_TRIM_THRESHOLD, x)
422 The maximum amount of unused top-most memory to keep before
423 releasing via malloc_trim in free(). Automatic trimming is mainly
424 useful in long-lived programs using contiguous MORECORE. Because
425 trimming via sbrk can be slow on some systems, and can sometimes be
426 wasteful (in cases where programs immediately afterward allocate
427 more large chunks) the value should be high enough so that your
428 overall system performance would improve by releasing this much
429 memory. As a rough guide, you might set to a value close to the
430 average size of a process (program) running on your system.
431 Releasing this much memory would allow such a process to run in
432 memory. Generally, it is worth tuning trim thresholds when a
433 program undergoes phases where several large chunks are allocated
434 and released in ways that can reuse each other's storage, perhaps
435 mixed with phases where there are no such chunks at all. The trim
436 value must be greater than page size to have any useful effect. To
437 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
438 some people use of mallocing a huge space and then freeing it at
439 program startup, in an attempt to reserve system memory, doesn't
440 have the intended effect under automatic trimming, since that memory
441 will immediately be returned to the system.
443 DEFAULT_MMAP_THRESHOLD default: 256K
444 Also settable using mallopt(M_MMAP_THRESHOLD, x)
445 The request size threshold for using MMAP to directly service a
446 request. Requests of at least this size that cannot be allocated
447 using already-existing space will be serviced via mmap. (If enough
448 normal freed space already exists it is used instead.) Using mmap
449 segregates relatively large chunks of memory so that they can be
450 individually obtained and released from the host system. A request
451 serviced through mmap is never reused by any other request (at least
452 not directly; the system may just so happen to remap successive
453 requests to the same locations). Segregating space in this way has
454 the benefits that: Mmapped space can always be individually released
455 back to the system, which helps keep the system level memory demands
456 of a long-lived program low. Also, mapped memory doesn't become
457 `locked' between other chunks, as can happen with normally allocated
458 chunks, which means that even trimming via malloc_trim would not
459 release them. However, it has the disadvantage that the space
460 cannot be reclaimed, consolidated, and then used to service later
461 requests, as happens with normal chunks. The advantages of mmap
462 nearly always outweigh disadvantages for "large" chunks, but the
463 value of "large" may vary across systems. The default is an
464 empirically derived value that works well in most systems. You can
465 disable mmap by setting to MAX_SIZE_T.
467 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
468 The number of consolidated frees between checks to release
469 unused segments when freeing. When using non-contiguous segments,
470 especially with multiple mspaces, checking only for topmost space
471 doesn't always suffice to trigger trimming. To compensate for this,
472 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
473 current number of segments, if greater) try to release unused
474 segments to the OS when freeing chunks that result in
475 consolidation. The best value for this parameter is a compromise
476 between slowing down frees with relatively costly checks that
477 rarely trigger versus holding on to unused memory. To effectively
478 disable, set to MAX_SIZE_T. This may lead to a very slight speed
479 improvement at the expense of carrying around more memory.
482 /* Version identifier to allow people to support multiple versions */
483 #ifndef DLMALLOC_VERSION
484 #define DLMALLOC_VERSION 20804
485 #endif /* DLMALLOC_VERSION */
488 #define _GNU_SOURCE 1
496 #define LACKS_FCNTL_H
498 #endif /* _WIN32_WCE */
501 #define WIN32_LEAN_AND_MEAN
503 #define _WIN32_WINNT 0x403
507 #define HAVE_MORECORE 0
508 #define LACKS_UNISTD_H
509 #define LACKS_SYS_PARAM_H
510 #define LACKS_SYS_MMAN_H
511 #define LACKS_STRING_H
512 #define LACKS_STRINGS_H
513 #define LACKS_SYS_TYPES_H
514 #define LACKS_ERRNO_H
515 #ifndef MALLOC_FAILURE_ACTION
516 #define MALLOC_FAILURE_ACTION
517 #endif /* MALLOC_FAILURE_ACTION */
518 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
519 #define MMAP_CLEARS 0
#else /* _WIN32_WCE */
#define MMAP_CLEARS 1
522 #endif /* _WIN32_WCE */
525 #if defined(DARWIN) || defined(_DARWIN)
526 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
527 #ifndef HAVE_MORECORE
528 #define HAVE_MORECORE 0
530 /* OSX allocators provide 16 byte alignment */
531 #ifndef MALLOC_ALIGNMENT
532 #define MALLOC_ALIGNMENT ((size_t)16U)
534 #endif /* HAVE_MORECORE */
537 #ifndef LACKS_SYS_TYPES_H
538 #include <sys/types.h> /* For size_t */
539 #endif /* LACKS_SYS_TYPES_H */
541 /* The maximum possible size_t value has all bits set */
542 #define MAX_SIZE_T (~(size_t)0)
545 #define ONLY_MSPACES 0 /* define to a value */
547 #define ONLY_MSPACES 1
548 #endif /* ONLY_MSPACES */
552 #else /* ONLY_MSPACES */
554 #endif /* ONLY_MSPACES */
556 #ifndef MALLOC_ALIGNMENT
557 #define MALLOC_ALIGNMENT ((size_t)8U)
558 #endif /* MALLOC_ALIGNMENT */
563 #define ABORT abort()
565 #ifndef ABORT_ON_ASSERT_FAILURE
566 #define ABORT_ON_ASSERT_FAILURE 1
567 #endif /* ABORT_ON_ASSERT_FAILURE */
568 #ifndef PROCEED_ON_ERROR
569 #define PROCEED_ON_ERROR 0
570 #endif /* PROCEED_ON_ERROR */
573 #endif /* USE_LOCKS */
574 #ifndef USE_SPIN_LOCKS
575 #if USE_LOCKS && (defined(__GNUC__) && ((defined(__i386__) || defined(__x86_64__)))) || (defined(_MSC_VER) && _MSC_VER>=1310)
576 #define USE_SPIN_LOCKS 1
578 #define USE_SPIN_LOCKS 0
579 #endif /* USE_LOCKS && ... */
580 #endif /* USE_SPIN_LOCKS */
583 #endif /* INSECURE */
586 #endif /* HAVE_MMAP */
588 #define MMAP_CLEARS 1
589 #endif /* MMAP_CLEARS */
592 #define HAVE_MREMAP 1
594 #define HAVE_MREMAP 0
596 #endif /* HAVE_MREMAP */
597 #ifndef MALLOC_FAILURE_ACTION
598 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
599 #endif /* MALLOC_FAILURE_ACTION */
600 #ifndef HAVE_MORECORE
602 #define HAVE_MORECORE 0
603 #else /* ONLY_MSPACES */
604 #define HAVE_MORECORE 1
605 #endif /* ONLY_MSPACES */
606 #endif /* HAVE_MORECORE */
608 #define MORECORE_CONTIGUOUS 0
609 #else /* !HAVE_MORECORE */
610 #define MORECORE_DEFAULT sbrk
611 #ifndef MORECORE_CONTIGUOUS
612 #define MORECORE_CONTIGUOUS 1
613 #endif /* MORECORE_CONTIGUOUS */
614 #endif /* HAVE_MORECORE */
615 #ifndef DEFAULT_GRANULARITY
616 #if (MORECORE_CONTIGUOUS || defined(WIN32))
617 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
618 #else /* MORECORE_CONTIGUOUS */
619 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
620 #endif /* MORECORE_CONTIGUOUS */
621 #endif /* DEFAULT_GRANULARITY */
622 #ifndef DEFAULT_TRIM_THRESHOLD
623 #ifndef MORECORE_CANNOT_TRIM
624 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
625 #else /* MORECORE_CANNOT_TRIM */
626 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
627 #endif /* MORECORE_CANNOT_TRIM */
628 #endif /* DEFAULT_TRIM_THRESHOLD */
629 #ifndef DEFAULT_MMAP_THRESHOLD
631 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
632 #else /* HAVE_MMAP */
633 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
634 #endif /* HAVE_MMAP */
635 #endif /* DEFAULT_MMAP_THRESHOLD */
636 #ifndef MAX_RELEASE_CHECK_RATE
638 #define MAX_RELEASE_CHECK_RATE 4095
640 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
641 #endif /* HAVE_MMAP */
642 #endif /* MAX_RELEASE_CHECK_RATE */
643 #ifndef USE_BUILTIN_FFS
644 #define USE_BUILTIN_FFS 0
645 #endif /* USE_BUILTIN_FFS */
646 #ifndef USE_DEV_RANDOM
647 #define USE_DEV_RANDOM 0
648 #endif /* USE_DEV_RANDOM */
650 #define NO_MALLINFO 0
651 #endif /* NO_MALLINFO */
652 #ifndef MALLINFO_FIELD_TYPE
653 #define MALLINFO_FIELD_TYPE size_t
654 #endif /* MALLINFO_FIELD_TYPE */
655 #ifndef NO_SEGMENT_TRAVERSAL
656 #define NO_SEGMENT_TRAVERSAL 0
657 #endif /* NO_SEGMENT_TRAVERSAL */
660 mallopt tuning options. SVID/XPG defines four standard parameter
661 numbers for mallopt, normally defined in malloc.h. None of these
662 are used in this malloc, so setting them has no effect. But this
663 malloc does support the following options.
666 #define M_TRIM_THRESHOLD (-1)
667 #define M_GRANULARITY (-2)
668 #define M_MMAP_THRESHOLD (-3)
670 /* ------------------------ Mallinfo declarations ------------------------ */
674 This version of malloc supports the standard SVID/XPG mallinfo
675 routine that returns a struct containing usage properties and
676 statistics. It should work on any system that has a
677 /usr/include/malloc.h defining struct mallinfo. The main
678 declaration needed is the mallinfo struct that is returned (by-copy)
by mallinfo(). The mallinfo struct contains a bunch of fields that
are not even meaningful in this version of malloc. These fields are
instead filled by mallinfo() with other numbers that might be of
interest.
684 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
685 /usr/include/malloc.h file that includes a declaration of struct
686 mallinfo. If so, it is included; else a compliant version is
687 declared below. These must be precisely the same for mallinfo() to
688 work. The original SVID version of this struct, defined on most
689 systems with mallinfo, declares all fields as ints. But some others
690 define as unsigned long. If your system defines the fields using a
691 type of different width than listed here, you MUST #include your
692 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
695 /* #define HAVE_USR_INCLUDE_MALLOC_H */
697 #ifdef HAVE_USR_INCLUDE_MALLOC_H
698 #include "/usr/include/malloc.h"
699 #else /* HAVE_USR_INCLUDE_MALLOC_H */
700 #ifndef STRUCT_MALLINFO_DECLARED
701 #define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
704 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
705 MALLINFO_FIELD_TYPE smblks; /* always 0 */
706 MALLINFO_FIELD_TYPE hblks; /* always 0 */
707 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
708 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
709 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
710 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
711 MALLINFO_FIELD_TYPE fordblks; /* total free space */
712 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
715 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
716 #endif /* NO_MALLINFO */
719 Try to persuade compilers to inline. The most critical functions for
720 inlining are defined as macros, so these aren't used for them.
723 #ifdef __MINGW64_VERSION_MAJOR
727 #if defined(__GNUC__)
728 #define FORCEINLINE __inline __attribute__ ((always_inline))
729 #elif defined(_MSC_VER)
730 #define FORCEINLINE __forceinline
734 #if defined(__GNUC__)
735 #define NOINLINE __attribute__ ((noinline))
736 #elif defined(_MSC_VER)
737 #define NOINLINE __declspec(noinline)
746 #define FORCEINLINE inline
748 #endif /* __cplusplus */
755 /* ------------------- Declarations of public routines ------------------- */
757 #ifndef USE_DL_PREFIX
758 #define dlcalloc calloc
760 #define dlmalloc malloc
761 #define dlmemalign memalign
762 #define dlrealloc realloc
763 #define dlvalloc valloc
764 #define dlpvalloc pvalloc
765 #define dlmallinfo mallinfo
766 #define dlmallopt mallopt
767 #define dlmalloc_trim malloc_trim
768 #define dlmalloc_stats malloc_stats
769 #define dlmalloc_usable_size malloc_usable_size
770 #define dlmalloc_footprint malloc_footprint
771 #define dlmalloc_max_footprint malloc_max_footprint
772 #define dlindependent_calloc independent_calloc
773 #define dlindependent_comalloc independent_comalloc
774 #endif /* USE_DL_PREFIX */
779 Returns a pointer to a newly allocated chunk of at least n bytes, or
null if no space is available, in which case errno is set to ENOMEM
on ANSI C systems.
783 If n is zero, malloc returns a minimum-sized chunk. (The minimum
784 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
785 systems.) Note that size_t is an unsigned type, so calls with
786 arguments that would be negative if signed are interpreted as
787 requests for huge amounts of space, which will often fail. The
788 maximum supported value of n differs across systems, but is in all
789 cases less than the maximum representable value of a size_t.
791 void* dlmalloc(size_t);
795 Releases the chunk of memory pointed to by p, that had been previously
796 allocated using malloc or a related routine such as realloc.
797 It has no effect if p is null. If p was not malloced or already
798 freed, free(p) will by default cause the current program to abort.
803 calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations
set to zero.
807 void* dlcalloc(size_t, size_t);
810 realloc(void* p, size_t n)
811 Returns a pointer to a chunk of size n that contains the same data
812 as does chunk p up to the minimum of (n, p's size) bytes, or null
813 if no space is available.
815 The returned pointer may or may not be the same as p. The algorithm
816 prefers extending p in most cases when possible, otherwise it
817 employs the equivalent of a malloc-copy-free sequence.
819 If p is null, realloc is equivalent to malloc.
821 If space is not available, realloc returns null, errno is set (if on
822 ANSI) and p is NOT freed.
If n is for fewer bytes than already held by p, the newly unused
825 space is lopped off and freed if possible. realloc with a size
826 argument of zero (re)allocates a minimum-sized chunk.
828 The old unix realloc convention of allowing the last-free'd chunk
829 to be used as an argument to realloc is not supported.
832 void* dlrealloc(void*, size_t);
835 memalign(size_t alignment, size_t n);
836 Returns a pointer to a newly allocated chunk of n bytes, aligned
837 in accord with the alignment argument.
839 The alignment argument should be a power of two. If the argument is
840 not a power of two, the nearest greater power is used.
841 8-byte alignment is guaranteed by normal malloc calls, so don't
842 bother calling memalign with an argument of 8 or less.
844 Overreliance on memalign is a sure way to fragment space.
846 void* dlmemalign(size_t, size_t);
850 Equivalent to memalign(pagesize, n), where pagesize is the page
851 size of the system. If the pagesize is unknown, 4096 is used.
853 void* dlvalloc(size_t);
856 mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
858 (parameter-number, parameter-value) pair. mallopt then sets the
859 corresponding parameter to the argument value if it can (i.e., so
860 long as the value is meaningful), and returns 1 if successful else
0. To work around the fact that mallopt is specified to use int,
862 not size_t parameters, the value -1 is specially treated as the
863 maximum unsigned size_t value.
865 SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h. None of these are used in this malloc,
867 so setting them has no effect. But this malloc also supports other
868 options in mallopt. See below for details. Briefly, supported
parameters are as follows (listed defaults are for "typical"
configurations).
872 Symbol param # default allowed param values
873 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
874 M_GRANULARITY -2 page size any power of 2 >= page size
875 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
877 int dlmallopt(int, int);
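For example (using the prefixed names; plain mallopt if USE_DL_PREFIX is
not defined):

  dlmallopt(M_TRIM_THRESHOLD, 128*1024);  /* trim when 128K is unused on top */
  dlmallopt(M_MMAP_THRESHOLD, -1);        /* -1 means MAX_SIZE_T: disable mmap */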
881 Returns the number of bytes obtained from the system. The total
882 number of bytes allocated by malloc, realloc etc., is less than this
883 value. Unlike mallinfo, this function returns only a precomputed
884 result, so can be called frequently to monitor memory consumption.
885 Even if locks are otherwise defined, this function does not use them,
886 so results might not be up to date.
888 size_t dlmalloc_footprint(void);
891 malloc_max_footprint();
892 Returns the maximum number of bytes obtained from the system. This
893 value will be greater than current footprint if deallocated space
894 has been reclaimed by the system. The peak number of bytes allocated
895 by malloc, realloc etc., is less than this value. Unlike mallinfo,
896 this function returns only a precomputed result, so can be called
897 frequently to monitor memory consumption. Even if locks are
otherwise defined, this function does not use them, so results might
not be up to date.
901 size_t dlmalloc_max_footprint(void);
906 Returns (by copy) a struct containing various summary statistics:
908 arena: current total non-mmapped bytes allocated from system
909 ordblks: the number of free chunks
smblks: always zero
hblks: current number of mmapped regions
912 hblkhd: total bytes held in mmapped regions
913 usmblks: the maximum total allocated space. This will be greater
914 than current total if trimming has occurred.
fsmblks: always zero
uordblks: current total allocated space (normal or mmapped)
917 fordblks: total free space
918 keepcost: the maximum number of bytes that could ideally be released
919 back to system via malloc_trim. ("ideally" means that
920 it ignores page restrictions etc.)
922 Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and
thus be inaccurate.
926 struct mallinfo dlmallinfo(void);
927 #endif /* NO_MALLINFO */
930 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
932 independent_calloc is similar to calloc, but instead of returning a
933 single cleared space, it returns an array of pointers to n_elements
934 independent elements that can hold contents of size elem_size, each
935 of which starts out cleared, and can be independently freed,
936 realloc'ed etc. The elements are guaranteed to be adjacently
937 allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some
applications.
941 The "chunks" argument is optional (i.e., may be null, which is
942 probably the most typical usage). If it is null, the returned array
943 is itself dynamically allocated and should also be freed when it is
944 no longer needed. Otherwise, the chunks array must be of at least
n_elements in length. It is filled in with the pointers to the
chunks.
948 In either case, independent_calloc returns this pointer array, or
949 null if the allocation failed. If n_elements is zero and "chunks"
950 is null, it returns a chunk representing an array with zero elements
951 (which should be freed if not wanted).
953 Each element must be individually freed when it is no longer
954 needed. If you'd like to instead be able to free all at once, you
955 should instead use regular calloc and assign pointers into this
956 space to represent elements. (In this case though, you cannot
957 independently free elements.)
959 independent_calloc simplifies and speeds up implementations of many
960 kinds of pools. It may also be useful when constructing large data
961 structures that initially have a fixed number of fixed-sized nodes,
962 but the number is not known at compile time, and some of the nodes
963 may later need to be freed. For example:
965 struct Node { int item; struct Node* next; };
struct Node* build_list() {
  struct Node** pool;
  int n = read_number_of_nodes_needed();
  int i;
  if (n <= 0) return 0;
  pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
  if (pool == 0) die();
  // organize into a linked list...
  struct Node* first = pool[0];
  for (i = 0; i < n-1; ++i)
    pool[i]->next = pool[i+1];
  free(pool); // Can now free the array (or not, if it is needed later)
  return first;
}
981 void** dlindependent_calloc(size_t, size_t, void**);
984 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
986 independent_comalloc allocates, all at once, a set of n_elements
987 chunks with sizes indicated in the "sizes" array. It returns
988 an array of pointers to these elements, each of which can be
989 independently freed, realloc'ed etc. The elements are guaranteed to
990 be adjacently allocated (this is not guaranteed to occur with
991 multiple callocs or mallocs), which may also improve cache locality
992 in some applications.
994 The "chunks" argument is optional (i.e., may be null). If it is null
995 the returned array is itself dynamically allocated and should also
996 be freed when it is no longer needed. Otherwise, the chunks array
997 must be of at least n_elements in length. It is filled in with the
998 pointers to the chunks.
1000 In either case, independent_comalloc returns this pointer array, or
1001 null if the allocation failed. If n_elements is zero and chunks is
1002 null, it returns a chunk representing an array with zero elements
1003 (which should be freed if not wanted).
1005 Each element must be individually freed when it is no longer
1006 needed. If you'd like to instead be able to free all at once, you
1007 should instead use a single regular malloc, and assign pointers at
1008 particular offsets in the aggregate space. (In this case though, you
1009 cannot independently free elements.)
independent_comalloc differs from independent_calloc in that each
1012 element may have a different size, and also that it does not
1013 automatically clear elements.
1015 independent_comalloc can be used to speed up allocation in cases
1016 where several structs or objects must always be allocated at the
1017 same time. For example:
void send_message(char* msg) {
  int msglen = strlen(msg);
  size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
  void* chunks[3];
  if (independent_comalloc(3, sizes, chunks) == 0)
    die();
  struct Head* head = (struct Head*)(chunks[0]);
  char* body = (char*)(chunks[1]);
  struct Foot* foot = (struct Foot*)(chunks[2]);
  // ...
}
1034 In general though, independent_comalloc is worth using only for
1035 larger values of n_elements. For small values, you probably won't
1036 detect enough difference from series of malloc calls to bother.
1038 Overuse of independent_comalloc can increase overall memory usage,
1039 since it cannot reuse existing noncontiguous small chunks that
1040 might be available for some of the elements.
1042 void** dlindependent_comalloc(size_t, size_t*, void**);
1047 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1048 round up n to nearest pagesize.
1050 void* dlpvalloc(size_t);
1053 malloc_trim(size_t pad);
1055 If possible, gives memory back to the system (via negative arguments
1056 to sbrk) if there is unused memory at the `high' end of the malloc
1057 pool or in unused MMAP segments. You can call this after freeing
1058 large blocks of memory to potentially reduce the system-level memory
1059 requirements of a program. However, it cannot guarantee to reduce
1060 memory. Under some allocation patterns, some large free blocks of
1061 memory will be locked between two used chunks, so they cannot be
1062 given back to the system.
1064 The `pad' argument to malloc_trim represents the amount of free
1065 trailing space to leave untrimmed. If this argument is zero, only
1066 the minimum amount of memory to maintain internal data structures
1067 will be left. Non-zero arguments can be supplied to maintain enough
1068 trailing space to service future expected allocations without having
1069 to re-obtain memory from the system.
1071 Malloc_trim returns 1 if it actually released any memory, else 0.
1073 int dlmalloc_trim(size_t);
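For example, after freeing a large working set:

  int released = dlmalloc_trim(64*1024);  /* keep 64K of trailing slack */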
1077 Prints on stderr the amount of space obtained from the system (both
1078 via sbrk and mmap), the maximum amount (which may be more than
1079 current if malloc_trim and/or munmap got called), and the current
1080 number of bytes allocated via malloc (or realloc, etc) but not yet
1081 freed. Note that this is the number of bytes allocated, not the
1082 number requested. It will be larger than the number requested
1083 because of alignment and bookkeeping overhead. Because it includes
1084 alignment wastage as being in use, this figure may be greater than
1085 zero even when no user-level chunks are allocated.
1087 The reported current and maximum system memory can be inaccurate if
1088 a program makes other calls to system memory allocation functions
1089 (normally sbrk) outside of malloc.
1091 malloc_stats prints only the most commonly interesting statistics.
1092 More information can be obtained by calling mallinfo.
1094 void dlmalloc_stats(void);
1096 #endif /* ONLY_MSPACES */
1099 malloc_usable_size(void* p);
1101 Returns the number of bytes you can actually use in
1102 an allocated chunk, which may be more than you requested (although
1103 often not) due to alignment and minimum size constraints.
1104 You can use this many bytes without worrying about
1105 overwriting other allocated objects. This is not a particularly great
1106 programming practice. malloc_usable_size can be more useful in
1107 debugging and assertions, for example:
p = malloc(n);
assert(malloc_usable_size(p) >= 256);
1112 size_t dlmalloc_usable_size(void*);
1118 mspace is an opaque type representing an independent
1119 region of space that supports mspace_malloc, etc.
1121 typedef void* mspace;
1124 create_mspace creates and returns a new independent space with the
1125 given initial capacity, or, if 0, the default granularity size. It
1126 returns null if there is no system memory available to create the
1127 space. If argument locked is non-zero, the space uses a separate
1128 lock to control access. The capacity of the space will grow
1129 dynamically as needed to service mspace_malloc requests. You can
1130 control the sizes of incremental increases of this space by
1131 compiling with a different DEFAULT_GRANULARITY or dynamically
1132 setting with mallopt(M_GRANULARITY, value).
1134 mspace create_mspace(size_t capacity, int locked);
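For example, a sketch of a private, locked space:

  mspace ms = create_mspace(0, 1);     /* default capacity, with locking */
  void* p = mspace_malloc(ms, 128);
  ...
  mspace_free(ms, p);
  destroy_mspace(ms);                  /* returns all of its memory at once */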
1137 destroy_mspace destroys the given space, and attempts to return all
1138 of its memory back to the system, returning the total number of
1139 bytes freed. After destruction, the results of access to all memory
1140 used by the space become undefined.
1142 size_t destroy_mspace(mspace msp);
1145 create_mspace_with_base uses the memory supplied as the initial base
1146 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1147 space is used for bookkeeping, so the capacity must be at least this
1148 large. (Otherwise 0 is returned.) When this initial space is
1149 exhausted, additional memory will be obtained from the system.
1150 Destroying this space will deallocate all additionally allocated
1151 space (if possible) but not the initial base.
1153 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
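For example, carving an mspace out of a statically allocated buffer:

  static char buffer[1024*1024];
  mspace ms = create_mspace_with_base(buffer, sizeof(buffer), 0);
  void* p = mspace_malloc(ms, 100);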
1156 mspace_mmap_large_chunks controls whether requests for large chunks
1157 are allocated in their own mmapped regions, separate from others in
1158 this mspace. By default this is enabled, which reduces
1159 fragmentation. However, such chunks are not necessarily released to
1160 the system upon destroy_mspace. Disabling by setting to false may
1161 increase fragmentation, but avoids leakage when relying on
1162 destroy_mspace to release all memory allocated using this space.
1164 int mspace_mmap_large_chunks(mspace msp, int enable);
mspace_malloc behaves as malloc, but operates within
the given space.
1171 void* mspace_malloc(mspace msp, size_t bytes);
mspace_free behaves as free, but operates within
the given space.
1177 If compiled with FOOTERS==1, mspace_free is not actually needed.
1178 free may be called instead of mspace_free because freed chunks from
1179 any space are handled by their originating spaces.
1181 void mspace_free(mspace msp, void* mem);
mspace_realloc behaves as realloc, but operates within
the given space.
1187 If compiled with FOOTERS==1, mspace_realloc is not actually
1188 needed. realloc may be called instead of mspace_realloc because
realloced chunks from any space are handled by their originating
spaces.
1192 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
mspace_calloc behaves as calloc, but operates within
the given space.
1198 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
mspace_memalign behaves as memalign, but operates within
the given space.
1204 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1207 mspace_independent_calloc behaves as independent_calloc, but
1208 operates within the given space.
1210 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1211 size_t elem_size, void* chunks[]);
1214 mspace_independent_comalloc behaves as independent_comalloc, but
1215 operates within the given space.
1217 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1218 size_t sizes[], void* chunks[]);
1221 mspace_footprint() returns the number of bytes obtained from the
1222 system for this space.
1224 size_t mspace_footprint(mspace msp);
1227 mspace_max_footprint() returns the peak number of bytes obtained from the
1228 system for this space.
1230 size_t mspace_max_footprint(mspace msp);
mspace_mallinfo behaves as mallinfo, but reports properties of
the given space.
1238 struct mallinfo mspace_mallinfo(mspace msp);
1239 #endif /* NO_MALLINFO */
mspace_usable_size(void* p) behaves the same as malloc_usable_size;
1244 size_t mspace_usable_size(void* mem);
1247 mspace_malloc_stats behaves as malloc_stats, but reports
1248 properties of the given space.
1250 void mspace_malloc_stats(mspace msp);
1253 mspace_trim behaves as malloc_trim, but
1254 operates within the given space.
1256 int mspace_trim(mspace msp, size_t pad);
1259 An alias for mallopt.
1261 int mspace_mallopt(int, int);
1263 #endif /* MSPACES */
1266 }; /* end of extern "C" */
1267 #endif /* __cplusplus */
1270 ========================================================================
1271 To make a fully customizable malloc.h header file, cut everything
1272 above this line, put into file malloc.h, edit to suit, and #include it
1273 on the next line, as well as in programs that use this malloc.
1274 ========================================================================
1277 /* #include "malloc.h" */
1279 /*------------------------------ internal #includes ---------------------- */
1283 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1287 #include <stdio.h> /* for printing in malloc_stats */
1289 #ifndef LACKS_ERRNO_H
1290 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1291 #endif /* LACKS_ERRNO_H */
1293 #include <time.h> /* for magic initialization */
1294 #endif /* FOOTERS */
1295 #ifndef LACKS_STDLIB_H
1296 #include <stdlib.h> /* for abort() */
1297 #endif /* LACKS_STDLIB_H */
1299 #if ABORT_ON_ASSERT_FAILURE
1300 #define assert(x) if(!(x)) ABORT
1301 #else /* ABORT_ON_ASSERT_FAILURE */
1303 #endif /* ABORT_ON_ASSERT_FAILURE */
1310 #ifndef LACKS_STRING_H
1311 #include <string.h> /* for memset etc */
1312 #endif /* LACKS_STRING_H */
1314 #ifndef LACKS_STRINGS_H
1315 #include <strings.h> /* for ffs */
1316 #endif /* LACKS_STRINGS_H */
1317 #endif /* USE_BUILTIN_FFS */
1319 #ifndef LACKS_SYS_MMAN_H
1320 #include <sys/mman.h> /* for mmap */
1321 #endif /* LACKS_SYS_MMAN_H */
1322 #ifndef LACKS_FCNTL_H
1324 #endif /* LACKS_FCNTL_H */
1325 #endif /* HAVE_MMAP */
1326 #ifndef LACKS_UNISTD_H
1327 #include <unistd.h> /* for sbrk, sysconf */
1328 #else /* LACKS_UNISTD_H */
1329 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1330 extern void* sbrk(ptrdiff_t);
1331 #endif /* FreeBSD etc */
1332 #endif /* LACKS_UNISTD_H */
1334 /* Declarations for locking */
1337 #include <pthread.h>
1338 #if defined (__SVR4) && defined (__sun) /* solaris */
1340 #endif /* solaris */
1343 /* These are already defined on AMD64 builds */
1346 #endif /* __cplusplus */
1348 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1349 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1353 #endif /* __cplusplus */
1354 #endif /* _M_AMD64 */
1356 #pragma intrinsic (_InterlockedCompareExchange)
1357 #pragma intrinsic (_InterlockedExchange)
1359 /* --[ start GCC compatibility ]----------------------------------------------
1360 * Compatibility <intrin_x86.h> header for GCC -- GCC equivalents of intrinsic
1361 * Microsoft Visual C++ functions. Originally developed for the ReactOS
1362 * (<http://www.reactos.org/>) and TinyKrnl (<http://www.tinykrnl.org/>)
1365 * Copyright (c) 2006 KJK::Hyperion <hackbunny@reactos.com>
1367 * Permission is hereby granted, free of charge, to any person obtaining a
1368 * copy of this software and associated documentation files (the "Software"),
1369 * to deal in the Software without restriction, including without limitation
1370 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
1371 * and/or sell copies of the Software, and to permit persons to whom the
1372 * Software is furnished to do so, subject to the following conditions:
1374 * The above copyright notice and this permission notice shall be included in
1375 * all copies or substantial portions of the Software.
1377 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
1378 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
1379 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
1380 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
1381 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
1382 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
1383 * DEALINGS IN THE SOFTWARE.
1386 /*** Atomic operations ***/
1387 #if (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) > 40100
1388 #undef _ReadWriteBarrier
1389 #define _ReadWriteBarrier() __sync_synchronize()
static __inline__ __attribute__((always_inline)) long __sync_lock_test_and_set(volatile long * const Target, const long Value)
{ long res;
  __asm__ __volatile__("xchg%z0 %2, %0" : "=g" (*(Target)), "=r" (res) : "1" (Value));
  return res; }
static void __inline__ __attribute__((always_inline)) _MemoryBarrier(void)
{ __asm__ __volatile__("" : : : "memory"); }
#define _ReadWriteBarrier() _MemoryBarrier()
1403 /* BUGBUG: GCC only supports full barriers */
static __inline__ __attribute__((always_inline)) long _InterlockedExchange(volatile long * const Target, const long Value)
{
  /* NOTE: __sync_lock_test_and_set would be an acquire barrier, so we force a full barrier */
  _ReadWriteBarrier();
  return __sync_lock_test_and_set(Target, Value);
}
1410 /* --[ end GCC compatibility ]---------------------------------------------- */
1412 #define interlockedcompareexchange _InterlockedCompareExchange
1413 #define interlockedexchange _InterlockedExchange
1415 #endif /* USE_LOCKS */
1417 /* Declarations for bit scanning on win32 */
1418 #if defined(_MSC_VER) && _MSC_VER>=1300
1419 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1422 #endif /* __cplusplus */
1423 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1424 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1427 #endif /* __cplusplus */
1429 #define BitScanForward _BitScanForward
1430 #define BitScanReverse _BitScanReverse
1431 #pragma intrinsic(_BitScanForward)
1432 #pragma intrinsic(_BitScanReverse)
1433 #endif /* BitScanForward */
1434 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1437 #ifndef malloc_getpagesize
1438 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1439 # ifndef _SC_PAGE_SIZE
1440 # define _SC_PAGE_SIZE _SC_PAGESIZE
1443 # ifdef _SC_PAGE_SIZE
1444 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1446 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1447 extern size_t getpagesize();
1448 # define malloc_getpagesize getpagesize()
1450 # ifdef WIN32 /* use supplied emulation of getpagesize */
1451 # define malloc_getpagesize getpagesize()
1453 # ifndef LACKS_SYS_PARAM_H
1454 # include <sys/param.h>
1456 # ifdef EXEC_PAGESIZE
1457 # define malloc_getpagesize EXEC_PAGESIZE
1461 # define malloc_getpagesize NBPG
1463 # define malloc_getpagesize (NBPG * CLSIZE)
1467 # define malloc_getpagesize NBPC
1470 # define malloc_getpagesize PAGESIZE
1471 # else /* just guess */
1472 # define malloc_getpagesize ((size_t)4096U)
1485 /* ------------------- size_t and alignment properties -------------------- */
1487 /* The byte and bit size of a size_t */
1488 #define SIZE_T_SIZE (sizeof(size_t))
1489 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1491 /* Some constants coerced to size_t */
1492 /* Annoying but necessary to avoid errors on some platforms */
1493 #define SIZE_T_ZERO ((size_t)0)
1494 #define SIZE_T_ONE ((size_t)1)
1495 #define SIZE_T_TWO ((size_t)2)
1496 #define SIZE_T_FOUR ((size_t)4)
1497 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1498 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1499 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1500 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1502 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1503 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1505 /* True if address a has acceptable alignment */
1506 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1508 /* the number of bytes to offset an address to align it */
1509 #define align_offset(A)\
1510 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1511 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1513 /* -------------------------- MMAP preliminaries ------------------------- */
1516 If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
1517 checks to fail so the compiler optimizer can delete code rather than
1518 using so many "#if"s.
1522 /* MORECORE and MMAP must return MFAIL on failure */
1523 #define MFAIL ((void*)(MAX_SIZE_T))
1524 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1529 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1530 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1531 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1532 #define MAP_ANONYMOUS MAP_ANON
1533 #endif /* MAP_ANON */
1534 #ifdef MAP_ANONYMOUS
1535 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1536 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1537 #else /* MAP_ANONYMOUS */
1539 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1540 is unlikely to be needed, but is supplied just in case.
1542 #define MMAP_FLAGS (MAP_PRIVATE)
1543 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1544 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1545 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1546 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1547 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1548 #endif /* MAP_ANONYMOUS */
1550 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1554 /* Win32 MMAP via VirtualAlloc */
1555 static FORCEINLINE void* win32mmap(size_t size) {
1556 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1557 return (ptr != 0)? ptr: MFAIL;
1560 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1561 static FORCEINLINE void* win32direct_mmap(size_t size) {
1562 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1564 return (ptr != 0)? ptr: MFAIL;
1567 /* This function supports releasing coalesced segments */
1568 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1569 MEMORY_BASIC_INFORMATION minfo;
1570 char* cptr = (char*)ptr;
1572 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1574 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1575 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1577 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1579 cptr += minfo.RegionSize;
1580 size -= minfo.RegionSize;
1585 #define MMAP_DEFAULT(s) win32mmap(s)
1586 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1587 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1589 #endif /* HAVE_MMAP */
1593 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1595 #endif /* HAVE_MREMAP */
1599 * Define CALL_MORECORE
1603 #define CALL_MORECORE(S) MORECORE(S)
1604 #else /* MORECORE */
1605 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1606 #endif /* MORECORE */
1607 #else /* HAVE_MORECORE */
1608 #define CALL_MORECORE(S) MFAIL
1609 #endif /* HAVE_MORECORE */
1612 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1615 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1616 #define USE_MMAP_BIT (SIZE_T_ONE)
1619 #define CALL_MMAP(s) MMAP(s)
1621 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1624 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1626 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1629 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1630 #else /* DIRECT_MMAP */
1631 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1632 #endif /* DIRECT_MMAP */
1633 #else /* HAVE_MMAP */
1634 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1635 #define USE_MMAP_BIT (SIZE_T_ZERO)
1637 #define MMAP(s) MFAIL
1638 #define MUNMAP(a, s) (-1)
1639 #define DIRECT_MMAP(s) MFAIL
1640 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1641 #define CALL_MMAP(s) MMAP(s)
1642 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1643 #endif /* HAVE_MMAP */
1646 * Define CALL_MREMAP
1648 #if HAVE_MMAP && HAVE_MREMAP
1650 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1652 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1654 #else /* HAVE_MMAP && HAVE_MREMAP */
1655 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1656 #endif /* HAVE_MMAP && HAVE_MREMAP */
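/*
  The CALL_* macros above dispatch either to user-supplied MORECORE/MMAP
  routines or to the platform defaults. As an illustrative sketch only, a
  program could route all MORECORE traffic to a private static arena along
  the following lines (the arena size and function name are hypothetical,
  and MORECORE would have to be defined before the dispatch macros above):
*/
#if 0 /* illustrative sketch -- not compiled */
#define MORECORE my_static_morecore
static char my_arena[1 << 20];            /* 1MB of backing storage        */
static size_t my_arena_used = 0;
static void* my_static_morecore(long nb) { /* sbrk-like: grow by nb bytes  */
  if (nb >= 0 && my_arena_used + (size_t)nb <= sizeof(my_arena)) {
    char* brk = my_arena + my_arena_used; /* current "break" address       */
    my_arena_used += (size_t)nb;          /* advance it by nb bytes        */
    return brk;                           /* nb == 0 just reports the brk  */
  }
  return MFAIL;                           /* shrink requests and overflow  */
}
#endif /* illustrative sketch */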
1658 /* mstate bit set if contiguous morecore disabled or failed */
1659 #define USE_NONCONTIGUOUS_BIT (4U)
1661 /* segment bit set in create_mspace_with_base */
1662 #define EXTERN_BIT (8U)
1665 /* --------------------------- Lock preliminaries ------------------------ */
1668 When locks are defined, there is one global lock, plus
1669 one per-mspace lock.
1671 The global lock ensures that mparams.magic and other unique
1672 mparams values are initialized only once. It also protects
1673 sequences of calls to MORECORE. In many cases sys_alloc requires
1674 two calls that should not be interleaved with calls by other
1675 threads. This does not protect against direct calls to MORECORE
1676 by other threads not using this lock, so there is still code to
1677 cope as best we can with interference.
1679 Per-mspace locks surround calls to malloc, free, etc. To enable use
1680 in layered extensions, per-mspace locks are reentrant.
1682 Because lock-protected regions generally have bounded times, it is
1683 OK to use the supplied simple spinlocks in the custom versions for x86 and x64.
1686 If USE_LOCKS is > 1, the definitions of lock routines here are
1687 bypassed, in which case you will need to define at least
1688 INITIAL_LOCK, ACQUIRE_LOCK, RELEASE_LOCK and possibly TRY_LOCK
1689 (which is not used in this malloc, but commonly needed in extensions.)
1698 /* Custom pthread-style spin locks on x86 and x64 for gcc */
1699 struct pthread_mlock_t {
1700 volatile unsigned int l;
1701 volatile unsigned int c;
1702 volatile pthread_t threadid;
1704 #define MLOCK_T struct pthread_mlock_t
1705 #define CURRENT_THREAD pthread_self()
1706 #define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)
1707 #define ACQUIRE_LOCK(sl) pthread_acquire_lock(sl)
1708 #define RELEASE_LOCK(sl) pthread_release_lock(sl)
1709 #define TRY_LOCK(sl) pthread_try_lock(sl)
1710 #define SPINS_PER_YIELD 63
1712 static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1714 static FORCEINLINE int pthread_acquire_lock (MLOCK_T *sl) {
1716 volatile unsigned int* lp = &sl->l;
1719 if (sl->threadid == CURRENT_THREAD) {
1725 /* place args to cmpxchgl in locals to evade oddities in some gccs */
1729 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1731 : "r" (val), "m" (*(lp)), "0"(cmp)
1734 assert(!sl->threadid);
1736 sl->threadid = CURRENT_THREAD;
1739 if ((++spins & SPINS_PER_YIELD) == 0) {
1740 #if defined (__SVR4) && defined (__sun) /* solaris */
1743 #if defined(__linux__) || defined(__FreeBSD__) || defined(__APPLE__)
1745 #else /* no-op yield on unknown systems */
1747 #endif /* __linux__ || __FreeBSD__ || __APPLE__ */
1748 #endif /* solaris */
1754 static FORCEINLINE void pthread_release_lock (MLOCK_T *sl) {
1756 assert(sl->threadid == CURRENT_THREAD);
1759 volatile unsigned int* lp = &sl->l;
1762 __asm__ __volatile__ ("lock; xchgl %0, %1"
1764 : "m" (*(lp)), "0"(prev)
1769 static FORCEINLINE int pthread_try_lock (MLOCK_T *sl) {
1770 volatile unsigned int* lp = &sl->l;
1772 if (sl->threadid == CURRENT_THREAD) {
1781 __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
1783 : "r" (val), "m" (*(lp)), "0"(cmp)
1786 assert(!sl->threadid);
1788 sl->threadid = CURRENT_THREAD;
1797 /* Custom win32-style spin locks on x86 and x64 for MSC */
1798 struct win32_mlock_t
1801 volatile unsigned int c;
1802 volatile long threadid;
1805 #define MLOCK_T struct win32_mlock_t
1806 #define CURRENT_THREAD win32_getcurrentthreadid()
1807 #define INITIAL_LOCK(sl) (memset(sl, 0, sizeof(MLOCK_T)), 0)
1808 #define ACQUIRE_LOCK(sl) win32_acquire_lock(sl)
1809 #define RELEASE_LOCK(sl) win32_release_lock(sl)
1810 #define TRY_LOCK(sl) win32_try_lock(sl)
1811 #define SPINS_PER_YIELD 63
1813 static MLOCK_T malloc_global_mutex = { 0, 0, 0};
1815 static FORCEINLINE long win32_getcurrentthreadid(void) {
1817 #if defined(_M_IX86)
1818 long *threadstruct=(long *)__readfsdword(0x18);
1819 long threadid=threadstruct[0x24/sizeof(long)];
1821 #elif defined(_M_X64)
1823 return GetCurrentThreadId();
1825 return GetCurrentThreadId();
1828 return GetCurrentThreadId();
1832 static FORCEINLINE int win32_acquire_lock (MLOCK_T *sl) {
1836 if (sl->threadid == CURRENT_THREAD) {
1842 if (!interlockedexchange(&sl->l, 1)) {
1843 assert(!sl->threadid);
1844 sl->c=CURRENT_THREAD;
1845 sl->threadid = CURRENT_THREAD;
1850 if ((++spins & SPINS_PER_YIELD) == 0)
1855 static FORCEINLINE void win32_release_lock (MLOCK_T *sl) {
1856 assert(sl->threadid == CURRENT_THREAD);
1860 interlockedexchange (&sl->l, 0);
1864 static FORCEINLINE int win32_try_lock (MLOCK_T *sl) {
1866 if (sl->threadid == CURRENT_THREAD) {
1872 if (!interlockedexchange(&sl->l, 1)){
1873 assert(!sl->threadid);
1874 sl->threadid = CURRENT_THREAD;
1883 #else /* USE_SPIN_LOCKS */
1886 /* pthreads-based locks */
1888 #define MLOCK_T pthread_mutex_t
1889 #define CURRENT_THREAD pthread_self()
1890 #define INITIAL_LOCK(sl) pthread_init_lock(sl)
1891 #define ACQUIRE_LOCK(sl) pthread_mutex_lock(sl)
1892 #define RELEASE_LOCK(sl) pthread_mutex_unlock(sl)
1893 #define TRY_LOCK(sl) (!pthread_mutex_trylock(sl))
1895 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
1897 /* Cope with old-style linux recursive lock initialization by adding */
1898 /* skipped internal declaration from pthread.h */
1900 #ifndef PTHREAD_MUTEX_RECURSIVE
1901 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
1903 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
1904 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
1908 static int pthread_init_lock (MLOCK_T *sl) {
1909 pthread_mutexattr_t attr;
1910 if (pthread_mutexattr_init(&attr)) return 1;
1911 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
1912 if (pthread_mutex_init(sl, &attr)) return 1;
1913 if (pthread_mutexattr_destroy(&attr)) return 1;
1918 /* Win32 critical sections */
1919 #define MLOCK_T CRITICAL_SECTION
1920 #define CURRENT_THREAD GetCurrentThreadId()
1921 #define INITIAL_LOCK(s) (!InitializeCriticalSectionAndSpinCount((s), 0x80000000|4000))
1922 #define ACQUIRE_LOCK(s) (EnterCriticalSection(s), 0)
1923 #define RELEASE_LOCK(s) LeaveCriticalSection(s)
1924 #define TRY_LOCK(s) TryEnterCriticalSection(s)
1925 #define NEED_GLOBAL_LOCK_INIT
1927 static MLOCK_T malloc_global_mutex;
1928 static volatile long malloc_global_mutex_status;
1930 /* Use spin loop to initialize global lock */
1931 static void init_malloc_global_mutex() {
1933 long stat = malloc_global_mutex_status;
1936 /* transition to < 0 while initializing, then to > 0 */
1938 interlockedcompareexchange(&malloc_global_mutex_status, -1, 0) == 0) {
1939 InitializeCriticalSection(&malloc_global_mutex);
1940 interlockedexchange(&malloc_global_mutex_status,1);
1948 #endif /* USE_SPIN_LOCKS */
1949 #endif /* USE_LOCKS == 1 */
1951 /* ----------------------- User-defined locks ------------------------ */
1954 /* Define your own lock implementation here */
1955 /* #define INITIAL_LOCK(sl) ... */
1956 /* #define ACQUIRE_LOCK(sl) ... */
1957 /* #define RELEASE_LOCK(sl) ... */
1958 /* #define TRY_LOCK(sl) ... */
1959 /* static MLOCK_T malloc_global_mutex = ... */
1960 #endif /* USE_LOCKS > 1 */
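/*
  As an illustrative sketch of filling in the placeholders above (assuming a
  POSIX system and USE_LOCKS > 1), the lock layer could simply be built on
  recursive pthread mutexes, mirroring the built-in pthread-based definitions
  earlier in this file. The helper name below is hypothetical.
*/
#if 0 /* illustrative sketch -- not compiled */
#include <pthread.h>
#define MLOCK_T           pthread_mutex_t
#define ACQUIRE_LOCK(sl)  pthread_mutex_lock(sl)     /* returns 0 on success */
#define RELEASE_LOCK(sl)  pthread_mutex_unlock(sl)
#define TRY_LOCK(sl)      (!pthread_mutex_trylock(sl))
static int my_init_recursive_lock(pthread_mutex_t* sl) {
  pthread_mutexattr_t attr;             /* per-mspace locks must be reentrant */
  if (pthread_mutexattr_init(&attr)) return 1;
  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
  if (pthread_mutex_init(sl, &attr)) return 1;
  return pthread_mutexattr_destroy(&attr);           /* 0 on success */
}
#define INITIAL_LOCK(sl)  my_init_recursive_lock(sl)
static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
#endif /* illustrative sketch */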
1962 /* ----------------------- Lock-based state ------------------------ */
1965 #define USE_LOCK_BIT (2U)
1966 #else /* USE_LOCKS */
1967 #define USE_LOCK_BIT (0U)
1968 #define INITIAL_LOCK(l)
1969 #endif /* USE_LOCKS */
1972 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
1973 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
1974 #else /* USE_LOCKS */
1975 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1976 #define RELEASE_MALLOC_GLOBAL_LOCK()
1977 #endif /* USE_LOCKS */
1980 /* ----------------------- Chunk representations ------------------------ */
1983 (The following includes lightly edited explanations by Colin Plumb.)
1985 The malloc_chunk declaration below is misleading (but accurate and
1986 necessary). It declares a "view" into memory allowing access to
1987 necessary fields at known offsets from a given base.
1989 Chunks of memory are maintained using a `boundary tag' method as
1990 originally described by Knuth. (See the paper by Paul Wilson
1991 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1992 techniques.) Sizes of free chunks are stored both in the front of
1993 each chunk and at the end. This makes consolidating fragmented
1994 chunks into bigger chunks fast. The head fields also hold bits
1995 representing whether chunks are free or in use.
1997 Here are some pictures to make it clearer. They are "exploded" to
1998 show that the state of a chunk can be thought of as extending from
1999 the high 31 bits of the head field of its header through the
2000 prev_foot and PINUSE_BIT bit of the following chunk header.
2002 A chunk that's in use looks like:
2004 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2005 | Size of previous chunk (if P = 0) |
2006 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2007 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2008 | Size of this chunk 1| +-+
2009 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2015 +- size - sizeof(size_t) available payload bytes -+
2019 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2020 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2021 | Size of next chunk (may or may not be in use) | +-+
2022 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2024 And if it's free, it looks like this:
2027 | User payload (must be in use, or we would have merged!) |
2028 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2029 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2030 | Size of this chunk 0| +-+
2031 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2033 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2035 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2037 +- size - sizeof(struct chunk) unused bytes -+
2039 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2040 | Size of this chunk |
2041 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2042 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2043 | Size of next chunk (must be in use, or we would have merged)| +-+
2044 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2048 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2051 Note that since we always merge adjacent free chunks, the chunks
2052 adjacent to a free chunk must be in use.
2054 Given a pointer to a chunk (which can be derived trivially from the
2055 payload pointer) we can, in O(1) time, find out whether the adjacent
2056 chunks are free, and if so, unlink them from the lists that they
2057 are on and merge them with the current chunk.
2059 Chunks always begin on even word boundaries, so the mem portion
2060 (which is returned to the user) is also on an even word boundary, and
2061 thus at least double-word aligned.
2063 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2064 chunk size (which is always a multiple of two words), is an in-use
2065 bit for the *previous* chunk. If that bit is *clear*, then the
2066 word before the current chunk size contains the previous chunk
2067 size, and can be used to find the front of the previous chunk.
2068 The very first chunk allocated always has this bit set, preventing
2069 access to non-existent (or non-owned) memory. If pinuse is set for
2070 any given chunk, then you CANNOT determine the size of the
2071 previous chunk, and might even get a memory addressing fault when trying to do so.
2074 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2075 the chunk size redundantly records whether the current chunk is
2076 inuse. This redundancy enables usage checks within free and realloc,
2077 and reduces indirection when freeing and consolidating chunks.
2079 Each freshly allocated chunk must have both cinuse and pinuse set.
2080 That is, each allocated chunk borders either a previously allocated
2081 and still in-use chunk, or the base of its memory arena. This is
2082 ensured by making all allocations from the `lowest' part of any
2083 found chunk. Further, no free chunk physically borders another one,
2084 so each free chunk is known to be preceded and followed by either
2085 inuse chunks or the ends of memory.
2087 Note that the `foot' of the current chunk is actually represented
2088 as the prev_foot of the NEXT chunk. This makes it easier to
2089 deal with alignments etc but can be very confusing when trying
2090 to extend or adapt this code.
2092 The exceptions to all this are
2094 1. The special chunk `top' is the top-most available chunk (i.e.,
2095 the one bordering the end of available memory). It is treated
2096 specially. Top is never included in any bin, is used only if
2097 no other chunk is available, and is released back to the
2098 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2099 the top chunk is treated as larger (and thus less well
2100 fitting) than any other available chunk. The top chunk
2101 doesn't update its trailing size field since there is no next
2102 contiguous chunk that would have to index off it. However,
2103 space is still allocated for it (TOP_FOOT_SIZE) to enable
2104 separation or merging when space is extended.
2106 2. Chunks allocated via mmap, which have the lowest-order bit
2107 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
2108 PINUSE_BIT in their head fields. Because they are allocated
2109 one-by-one, each must carry its own prev_foot field, which is
2110 also used to hold the offset this chunk has within its mmapped
2111 region, which is needed to preserve alignment. Each mmapped
2112 chunk is trailed by the first two fields of a fake next-chunk
2113 for sake of usage checks.
2117 struct malloc_chunk {
2118 size_t prev_foot; /* Size of previous chunk (if free). */
2119 size_t head; /* Size and inuse bits. */
2120 struct malloc_chunk* fd; /* double links -- used only if free. */
2121 struct malloc_chunk* bk;
2124 typedef struct malloc_chunk mchunk;
2125 typedef struct malloc_chunk* mchunkptr;
2126 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
2127 typedef unsigned int bindex_t; /* Described below */
2128 typedef unsigned int binmap_t; /* Described below */
2129 typedef unsigned int flag_t; /* The type of various bit flag sets */
2131 /* ------------------- Chunks sizes and alignments ----------------------- */
2133 #define MCHUNK_SIZE (sizeof(mchunk))
2136 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2138 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2139 #endif /* FOOTERS */
2141 /* MMapped chunks need a second word of overhead ... */
2142 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2143 /* ... and additional padding for fake next-chunk at foot */
2144 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2146 /* The smallest size we can malloc is an aligned minimal chunk */
2147 #define MIN_CHUNK_SIZE\
2148 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2150 /* conversion from malloc headers to user pointers, and back */
2151 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2152 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2153 /* chunk associated with aligned address A */
2154 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2156 /* Bounds on request (not chunk) sizes. */
2157 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2158 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2160 /* pad request bytes into a usable size */
2161 #define pad_request(req) \
2162 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2164 /* pad request, checking for minimum (but not maximum) */
2165 #define request2size(req) \
2166 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
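/*
  A worked example of the request padding above (an illustrative sketch,
  assuming 4-byte size_t, the default 8-byte alignment, and FOOTERS off,
  so CHUNK_OVERHEAD is 4 and MIN_CHUNK_SIZE is 16):
*/
#if 0 /* illustrative example -- not compiled */
#include <assert.h>
static void request2size_example(void) {
  assert(request2size(1)   == 16);   /* below MIN_REQUEST: minimum chunk  */
  assert(request2size(12)  == 16);   /* 12 + 4 overhead rounds up to 16   */
  assert(request2size(13)  == 24);   /* 13 + 4 overhead rounds up to 24   */
  assert(request2size(100) == 104);  /* 100 + 4 overhead rounds up to 104 */
}
#endif /* illustrative example */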
2169 /* ------------------ Operations on head and foot fields ----------------- */
2172 The head field of a chunk is or'ed with PINUSE_BIT when the previous
2173 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
2174 use. If the chunk was obtained with mmap, the prev_foot field has
2175 IS_MMAPPED_BIT set and also holds the offset of the chunk within its
2176 mmapped region.
2178 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2181 #define PINUSE_BIT (SIZE_T_ONE)
2182 #define CINUSE_BIT (SIZE_T_TWO)
2183 #define FLAG4_BIT (SIZE_T_FOUR)
2184 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2185 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2187 /* Head value for fenceposts */
2188 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2190 /* extraction of fields from head words */
2191 #define cinuse(p) ((p)->head & CINUSE_BIT)
2192 #define pinuse(p) ((p)->head & PINUSE_BIT)
2193 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2195 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2196 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
2198 /* Treat space at ptr +/- offset as a chunk */
2199 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2200 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2202 /* Ptr to next or previous physical malloc_chunk. */
2203 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2204 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2206 /* extract next chunk's pinuse bit */
2207 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2209 /* Get/set size at footer */
2210 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2211 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2213 /* Set size, pinuse bit, and foot */
2214 #define set_size_and_pinuse_of_free_chunk(p, s)\
2215 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2217 /* Set size, pinuse bit, foot, and clear next pinuse */
2218 #define set_free_with_pinuse(p, s, n)\
2219 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2221 #define is_mmapped(p)\
2222 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
2224 /* Get the internal overhead associated with chunk p */
2225 #define overhead_for(p)\
2226 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2228 /* Return true if malloced space is not necessarily cleared */
2230 #define calloc_must_clear(p) (!is_mmapped(p))
2231 #else /* MMAP_CLEARS */
2232 #define calloc_must_clear(p) (1)
2233 #endif /* MMAP_CLEARS */
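/*
  An illustrative sketch (not used by this malloc itself) of boundary-tag
  navigation with the macros above: the head word always reaches the next
  physical chunk, and prev_foot reaches the previous one exactly when
  pinuse(p) is clear, i.e. when the previous in-heap chunk is free.
*/
#if 0 /* illustrative sketch -- not compiled */
#include <assert.h>
static void neighbor_example(mchunkptr p) {
  mchunkptr nxt = next_chunk(p);          /* p + chunksize(p)               */
  if (!is_mmapped(p) && !pinuse(p)) {     /* previous chunk is free, so ... */
    mchunkptr prv = prev_chunk(p);        /* ... its trailing size is valid */
    assert(next_chunk(prv) == p);         /* sizes agree front and back     */
  }
  if (!cinuse(nxt)) {
    /* the following chunk is free: a coalescing candidate when p is freed */
  }
  (void)nxt;
}
#endif /* illustrative sketch */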
2235 /* ---------------------- Overlaid data structures ----------------------- */
2238 When chunks are not in use, they are treated as nodes of either
2241 "Small" chunks are stored in circular doubly-linked lists, and look
2244 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2245 | Size of previous chunk |
2246 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2247 `head:' | Size of chunk, in bytes |P|
2248 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2249 | Forward pointer to next chunk in list |
2250 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2251 | Back pointer to previous chunk in list |
2252 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2253 | Unused space (may be 0 bytes long) .
2256 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2257 `foot:' | Size of chunk, in bytes |
2258 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2260 Larger chunks are kept in a form of bitwise digital trees (aka
2261 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2262 free chunks greater than 256 bytes, their size doesn't impose any
2263 constraints on user chunk sizes. Each node looks like:
2265 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2266 | Size of previous chunk |
2267 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2268 `head:' | Size of chunk, in bytes |P|
2269 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2270 | Forward pointer to next chunk of same size |
2271 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2272 | Back pointer to previous chunk of same size |
2273 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2274 | Pointer to left child (child[0]) |
2275 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2276 | Pointer to right child (child[1]) |
2277 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2278 | Pointer to parent |
2279 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2280 | bin index of this chunk |
2281 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2284 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2285 `foot:' | Size of chunk, in bytes |
2286 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2288 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2289 of the same size are arranged in a circularly-linked list, with only
2290 the oldest chunk (the next to be used, in our FIFO ordering)
2291 actually in the tree. (Tree members are distinguished by a non-null
2292 parent pointer.) If a chunk with the same size as an existing node
2293 is inserted, it is linked off the existing node using pointers that
2294 work in the same way as fd/bk pointers of small chunks.
2296 Each tree contains a power of 2 sized range of chunk sizes (the
2297 smallest is 0x100 <= x < 0x180), which is divided in half at each
2298 tree level, with the chunks in the smaller half of the range (0x100
2299 <= x < 0x140 for the top node) in the left subtree and the larger
2300 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2301 done by inspecting individual bits.
2303 Using these rules, each node's left subtree contains all smaller
2304 sizes than its right subtree. However, the node at the root of each
2305 subtree has no particular ordering relationship to either. (The
2306 dividing line between the subtree sizes is based on trie relation.)
2307 If we remove the last chunk of a given size from the interior of the
2308 tree, we need to replace it with a leaf node. The tree ordering
2309 rules permit a node to be replaced by any leaf below it.
2311 The smallest chunk in a tree (a common operation in a best-fit
2312 allocator) can be found by walking a path to the leftmost leaf in
2313 the tree. Unlike a usual binary tree, where we follow left child
2314 pointers until we reach a null, here we follow the right child
2315 pointer any time the left one is null, until we reach a leaf with
2316 both child pointers null. The smallest chunk in the tree will be
2317 somewhere along that path.
2319 The worst case number of steps to add, find, or remove a node is
2320 bounded by the number of bits differentiating chunks within
2321 bins. Under current bin calculations, this ranges from 6 up to 21
2322 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2323 is of course much better.
2326 struct malloc_tree_chunk {
2327 /* The first four fields must be compatible with malloc_chunk */
2330 struct malloc_tree_chunk* fd;
2331 struct malloc_tree_chunk* bk;
2333 struct malloc_tree_chunk* child[2];
2334 struct malloc_tree_chunk* parent;
2338 typedef struct malloc_tree_chunk tchunk;
2339 typedef struct malloc_tree_chunk* tchunkptr;
2340 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2342 /* A little helper macro for trees */
2343 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
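/*
  An illustrative sketch (not part of this malloc) of the leftmost walk
  described above: follow left children when present, otherwise right
  children, and keep the best size seen, since the smallest chunk in the
  tree lies somewhere along that path.
*/
#if 0 /* illustrative sketch -- not compiled */
static tchunkptr smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;                       /* track the least size on the path */
  }
  return best;
}
#endif /* illustrative sketch */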
2345 /* ----------------------------- Segments -------------------------------- */
2348 Each malloc space may include non-contiguous segments, held in a
2349 list headed by an embedded malloc_segment record representing the
2350 top-most space. Segments also include flags holding properties of
2351 the space. Large chunks that are directly allocated by mmap are not
2352 included in this list. They are instead independently created and
2353 destroyed without otherwise keeping track of them.
2355 Segment management mainly comes into play for spaces allocated by
2356 MMAP. Any call to MMAP might or might not return memory that is
2357 adjacent to an existing segment. MORECORE normally contiguously
2358 extends the current space, so this space is almost always adjacent,
2359 which is simpler and faster to deal with. (This is why MORECORE is
2360 used preferentially to MMAP when both are available -- see
2361 sys_alloc.) When allocating using MMAP, we don't use any of the
2362 hinting mechanisms (inconsistently) supported in various
2363 implementations of unix mmap, or distinguish reserving from
2364 committing memory. Instead, we just ask for space, and exploit
2365 contiguity when we get it. It is probably possible to do
2366 better than this on some systems, but no general scheme seems
2367 to be significantly better.
2369 Management entails a simpler variant of the consolidation scheme
2370 used for chunks to reduce fragmentation -- new adjacent memory is
2371 normally prepended or appended to an existing segment. However,
2372 there are limitations compared to chunk consolidation that mostly
2373 reflect the fact that segment processing is relatively infrequent
2374 (occurring only when getting memory from system) and that we
2375 don't expect to have huge numbers of segments:
2377 * Segments are not indexed, so traversal requires linear scans. (It
2378 would be possible to index these, but is not worth the extra
2379 overhead and complexity for most programs on most platforms.)
2380 * New segments are only appended to old ones when holding top-most
2381 memory; if they cannot be prepended to others, they are held in separate segments.
2384 Except for the top-most segment of an mstate, each segment record
2385 is kept at the tail of its segment. Segments are added by pushing
2386 segment records onto the list headed by &mstate.seg for the containing mstate.
2389 Segment flags control allocation/merge/deallocation policies:
2390 * If EXTERN_BIT set, then we did not allocate this segment,
2391 and so should not try to deallocate or merge with others.
2392 (This currently holds only for the initial segment passed
2393 into create_mspace_with_base.)
2394 * If IS_MMAPPED_BIT set, the segment may be merged with
2395 other surrounding mmapped segments and trimmed/de-allocated using munmap.
2397 * If neither bit is set, then the segment was obtained using
2398 MORECORE so can be merged with surrounding MORECORE'd segments
2399 and deallocated/trimmed using MORECORE with negative arguments.
2402 struct malloc_segment {
2403 char* base; /* base address */
2404 size_t size; /* allocated size */
2405 struct malloc_segment* next; /* ptr to next segment */
2406 flag_t sflags; /* mmap and extern flag */
2409 #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
2410 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2412 typedef struct malloc_segment msegment;
2413 typedef struct malloc_segment* msegmentptr;
2415 /* ---------------------------- malloc_state ----------------------------- */
2418 A malloc_state holds all of the bookkeeping for a space.
2419 The main fields are:
2422 The topmost chunk of the currently active segment. Its size is
2423 cached in topsize. The actual size of topmost space is
2424 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2425 fenceposts and segment records if necessary when getting more
2426 space from the system. The size at which to autotrim top is
2427 cached from mparams in trim_check, except that it is disabled if
2430 Designated victim (dv)
2431 This is the preferred chunk for servicing small requests that
2432 don't have exact fits. It is normally the chunk split off most
2433 recently to service another small request. Its size is cached in
2434 dvsize. The link fields of this chunk are not maintained since it
2435 is not kept in a bin.
2438 An array of bin headers for free chunks. These bins hold chunks
2439 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2440 chunks of all the same size, spaced 8 bytes apart. To simplify
2441 use in double-linked lists, each bin header acts as a malloc_chunk
2442 pointing to the real first node, if it exists (else pointing to
2443 itself). This avoids special-casing for headers. But to avoid
2444 waste, we allocate only the fd/bk pointers of bins, and then use
2445 repositioning tricks to treat these as the fields of a chunk.
2448 Treebins are pointers to the roots of trees holding a range of
2449 sizes. There are 2 equally spaced treebins for each power of two
2450 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything larger.
2454 There is one bit map for small bins ("smallmap") and one for
2455 treebins ("treemap). Each bin sets its bit when non-empty, and
2456 clears the bit when empty. Bit operations are then used to avoid
2457 bin-by-bin searching -- nearly all "search" is done without ever
2458 looking at bins that won't be selected. The bit maps
2459 conservatively use 32 bits per map word, even on a 64-bit system.
2460 For a good description of some of the bit-based techniques used
2461 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2462 supplement at http://hackersdelight.org/). Many of these are
2463 intended to reduce the branchiness of paths through malloc etc, as
2464 well as to reduce the number of memory locations read or written.
2467 A list of segments headed by an embedded malloc_segment record
2468 representing the initial space.
2470 Address check support
2471 The least_addr field is the least address ever obtained from
2472 MORECORE or MMAP. Attempted frees and reallocs of any address less
2473 than this are trapped (unless INSECURE is defined).
2476 A cross-check field that should always hold the same value as mparams.magic.
2479 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2482 Each space keeps track of current and maximum system memory
2483 obtained via MORECORE or MMAP.
2486 Fields holding the amount of unused topmost memory that should trigger
2487 trimming, and a counter to force periodic scanning to release unused
2488 non-topmost segments.
2491 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2492 around every public call using this mspace.
2495 A void* pointer and a size_t field that can be used to help implement
2496 extensions to this malloc.
2499 /* Bin types, widths and sizes */
2500 #define NSMALLBINS (32U)
2501 #define NTREEBINS (32U)
2502 #define SMALLBIN_SHIFT (3U)
2503 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2504 #define TREEBIN_SHIFT (8U)
2505 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2506 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2507 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2509 struct malloc_state {
2518 size_t release_checks;
2520 mchunkptr smallbins[(NSMALLBINS+1)*2];
2521 tbinptr treebins[NTREEBINS];
2523 size_t max_footprint;
2526 MLOCK_T mutex; /* locate lock among fields that rarely change */
2527 #endif /* USE_LOCKS */
2529 void* extp; /* Unused but available for extensions */
2533 typedef struct malloc_state* mstate;
2535 /* ------------- Global malloc_state and malloc_params ------------------- */
2538 malloc_params holds global properties, including those that can be
2539 dynamically set using mallopt. There is a single instance, mparams,
2540 initialized in init_mparams. Note that the non-zeroness of "magic"
2541 also serves as an initialization flag.
2544 struct malloc_params {
2545 volatile size_t magic;
2548 size_t mmap_threshold;
2549 size_t trim_threshold;
2550 flag_t default_mflags;
2553 static struct malloc_params mparams;
2555 /* Ensure mparams initialized */
2556 #define ensure_initialization() ((void)(mparams.magic != 0 || init_mparams()))
2560 /* The global malloc_state used for all non-"mspace" calls */
2561 static struct malloc_state _gm_;
2563 #define is_global(M) ((M) == &_gm_)
2565 #endif /* !ONLY_MSPACES */
2567 #define is_initialized(M) ((M)->top != 0)
2569 /* -------------------------- system alloc setup ------------------------- */
2571 /* Operations on mflags */
2573 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2574 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2575 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2577 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2578 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2579 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2581 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2582 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2584 #define set_lock(M,L)\
2585 ((M)->mflags = (L)?\
2586 ((M)->mflags | USE_LOCK_BIT) :\
2587 ((M)->mflags & ~USE_LOCK_BIT))
2589 /* page-align a size */
2590 #define page_align(S)\
2591 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2593 /* granularity-align a size */
2594 #define granularity_align(S)\
2595 (((S) + (mparams.granularity - SIZE_T_ONE))\
2596 & ~(mparams.granularity - SIZE_T_ONE))
2599 /* For mmap, use granularity alignment on windows, else page-align */
2601 #define mmap_align(S) granularity_align(S)
2603 #define mmap_align(S) page_align(S)
2606 /* For sys_alloc, enough padding to ensure a request can be malloc'ed on success */
2607 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2609 #define is_page_aligned(S)\
2610 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2611 #define is_granularity_aligned(S)\
2612 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2614 /* True if segment S holds address A */
2615 #define segment_holds(S, A)\
2616 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2618 /* Return segment holding given address */
2619 static msegmentptr segment_holding(mstate m, char* addr) {
2620 msegmentptr sp = &m->seg;
2622 if (addr >= sp->base && addr < sp->base + sp->size)
2624 if ((sp = sp->next) == 0)
2629 /* Return true if segment contains a segment link */
2630 static int has_segment_link(mstate m, msegmentptr ss) {
2631 msegmentptr sp = &m->seg;
2633 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2635 if ((sp = sp->next) == 0)
2640 #ifndef MORECORE_CANNOT_TRIM
2641 #define should_trim(M,s) ((s) > (M)->trim_check)
2642 #else /* MORECORE_CANNOT_TRIM */
2643 #define should_trim(M,s) (0)
2644 #endif /* MORECORE_CANNOT_TRIM */
2647 TOP_FOOT_SIZE is padding at the end of a segment, including space
2648 that may be needed to place segment records and fenceposts when new
2649 noncontiguous segments are added.
2651 #define TOP_FOOT_SIZE\
2652 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2655 /* ------------------------------- Hooks -------------------------------- */
2658 PREACTION should be defined to return 0 on success, and nonzero on
2659 failure. If you are not using locking, you can redefine these to do anything you like.
2665 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2666 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2667 #else /* USE_LOCKS */
2670 #define PREACTION(M) (0)
2671 #endif /* PREACTION */
2674 #define POSTACTION(M)
2675 #endif /* POSTACTION */
2677 #endif /* USE_LOCKS */
2680 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2681 USAGE_ERROR_ACTION is triggered on detected bad frees and
2682 reallocs. The argument p is an address that might have triggered the
2683 fault. It is ignored by the two predefined actions, but might be
2684 useful in custom actions that try to help diagnose errors.
2687 #if PROCEED_ON_ERROR
2689 /* A count of the number of corruption errors causing resets */
2690 int malloc_corruption_error_count;
2692 /* default corruption action */
2693 static void reset_on_error(mstate m);
2695 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2696 #define USAGE_ERROR_ACTION(m, p)
2698 #else /* PROCEED_ON_ERROR */
2700 #ifndef CORRUPTION_ERROR_ACTION
2701 #define CORRUPTION_ERROR_ACTION(m) ABORT
2702 #endif /* CORRUPTION_ERROR_ACTION */
2704 #ifndef USAGE_ERROR_ACTION
2705 #define USAGE_ERROR_ACTION(m,p) ABORT
2706 #endif /* USAGE_ERROR_ACTION */
2708 #endif /* PROCEED_ON_ERROR */
2710 /* -------------------------- Debugging setup ---------------------------- */
2714 #define check_free_chunk(M,P)
2715 #define check_inuse_chunk(M,P)
2716 #define check_malloced_chunk(M,P,N)
2717 #define check_mmapped_chunk(M,P)
2718 #define check_malloc_state(M)
2719 #define check_top_chunk(M,P)
2722 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2723 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2724 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2725 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2726 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2727 #define check_malloc_state(M) do_check_malloc_state(M)
2729 static void do_check_any_chunk(mstate m, mchunkptr p);
2730 static void do_check_top_chunk(mstate m, mchunkptr p);
2731 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2732 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2733 static void do_check_free_chunk(mstate m, mchunkptr p);
2734 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2735 static void do_check_tree(mstate m, tchunkptr t);
2736 static void do_check_treebin(mstate m, bindex_t i);
2737 static void do_check_smallbin(mstate m, bindex_t i);
2738 static void do_check_malloc_state(mstate m);
2739 static int bin_find(mstate m, mchunkptr x);
2740 static size_t traverse_and_check(mstate m);
2743 /* ---------------------------- Indexing Bins ---------------------------- */
2745 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2746 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2747 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2748 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2750 /* addressing by index. See above about smallbin repositioning */
2751 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2752 #define treebin_at(M,i) (&((M)->treebins[i]))
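/*
  A small worked example of the indexing macros above (an illustrative
  sketch only, using the default SMALLBIN_SHIFT of 3 and TREEBIN_SHIFT of
  8): small bins are spaced 8 bytes apart, and chunks of MIN_LARGE_SIZE
  (256) bytes or more go to the treebins instead.
*/
#if 0 /* illustrative example -- not compiled */
#include <assert.h>
static void bin_index_example(void) {
  assert(small_index(16) == 2 && small_index2size(2) == 16);
  assert(small_index(24) == 3 && small_index2size(3) == 24);
  assert(is_small(255));             /* 255 >> 3 == 31  < NSMALLBINS      */
  assert(!is_small(256));            /* 256 >> 3 == 32, not a small size  */
}
#endif /* illustrative example */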
2754 /* assign tree index for size S to variable I. Use x86 asm if possible */
2755 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2756 #define compute_tree_index(S, I)\
2758 unsigned int X = S >> TREEBIN_SHIFT;\
2761 else if (X > 0xFFFF)\
2765 __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "rm" (X));\
2766 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2770 #elif defined (__INTEL_COMPILER)
2771 #define compute_tree_index(S, I)\
2773 size_t X = S >> TREEBIN_SHIFT;\
2776 else if (X > 0xFFFF)\
2779 unsigned int K = _bit_scan_reverse (X); \
2780 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2784 #elif defined(_MSC_VER) && _MSC_VER>=1300
2785 #define compute_tree_index(S, I)\
2787 size_t X = S >> TREEBIN_SHIFT;\
2790 else if (X > 0xFFFF)\
2794 _BitScanReverse((DWORD *) &K, X);\
2795 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2800 #define compute_tree_index(S, I)\
2802 size_t X = S >> TREEBIN_SHIFT;\
2805 else if (X > 0xFFFF)\
2808 unsigned int Y = (unsigned int)X;\
2809 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2810 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2812 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2813 K = 14 - N + ((Y <<= K) >> 15);\
2814 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2819 /* Bit representing maximum resolved size in a treebin at i */
2820 #define bit_for_tree_index(i) \
2821 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2823 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2824 #define leftshift_for_tree_index(i) \
2825 ((i == NTREEBINS-1)? 0 : \
2826 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2828 /* The size of the smallest chunk held in bin with index i */
2829 #define minsize_for_tree_index(i) \
2830 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2831 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2834 /* ------------------------ Operations on bin maps ----------------------- */
2836 /* bit corresponding to given index */
2837 #define idx2bit(i) ((binmap_t)(1) << (i))
2839 /* Mark/Clear bits with given index */
2840 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2841 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2842 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2844 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2845 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2846 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2848 /* isolate the least set bit of a bitmap */
2849 #define least_bit(x) ((x) & -(x))
2851 /* mask with all bits to left of least bit of x on */
2852 #define left_bits(x) ((x<<1) | -(x<<1))
2854 /* mask with all bits to left of or equal to least bit of x on */
2855 #define same_or_left_bits(x) ((x) | -(x))
2857 /* index corresponding to given bit. Use x86 asm if possible */
2859 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2860 #define compute_bit2idx(X, I)\
2863 __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "rm" (X));\
2867 #elif defined (__INTEL_COMPILER)
2868 #define compute_bit2idx(X, I)\
2871 J = _bit_scan_forward (X); \
2875 #elif defined(_MSC_VER) && _MSC_VER>=1300
2876 #define compute_bit2idx(X, I)\
2879 _BitScanForward((DWORD *) &J, X);\
2883 #elif USE_BUILTIN_FFS
2884 #define compute_bit2idx(X, I) I = ffs(X)-1
2887 #define compute_bit2idx(X, I)\
2889 unsigned int Y = X - 1;\
2890 unsigned int K = Y >> (16-4) & 16;\
2891 unsigned int N = K; Y >>= K;\
2892 N += K = Y >> (8-3) & 8; Y >>= K;\
2893 N += K = Y >> (4-2) & 4; Y >>= K;\
2894 N += K = Y >> (2-1) & 2; Y >>= K;\
2895 N += K = Y >> (1-0) & 1; Y >>= K;\
2896 I = (bindex_t)(N + Y);\
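/*
  An illustrative sketch (not part of this malloc) of how the bin maps and
  the bit macros above combine: to find the first non-empty small bin at or
  above index i, mask away lower bins, isolate the least remaining set bit,
  and convert it back to an index -- no bin headers are touched at all.
*/
#if 0 /* illustrative sketch -- not compiled */
static int first_nonempty_smallbin(mstate m, bindex_t i, bindex_t* result) {
  binmap_t at_or_above = idx2bit(i) | left_bits(idx2bit(i));
  binmap_t candidates  = m->smallmap & at_or_above;
  if (candidates != 0) {
    binmap_t b = least_bit(candidates);   /* lowest qualifying bin's bit  */
    bindex_t j;
    compute_bit2idx(b, j);                /* bit -> bin index             */
    *result = j;
    return 1;
  }
  return 0;                               /* all bins at or above i empty */
}
#endif /* illustrative sketch */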
2901 /* ----------------------- Runtime Check Support ------------------------- */
2904 For security, the main invariant is that malloc/free/etc never
2905 writes to a static address other than malloc_state, unless static
2906 malloc_state itself has been corrupted, which cannot occur via
2907 malloc (because of these checks). In essence this means that we
2908 believe all pointers, sizes, maps etc held in malloc_state, but
2909 check all of those linked or offsetted from other embedded data
2910 structures. These checks are interspersed with main code in a way
2911 that tends to minimize their run-time cost.
2913 When FOOTERS is defined, in addition to range checking, we also
2914 verify footer fields of inuse chunks, which can be used to guarantee
2915 that the mstate controlling malloc/free is intact. This is a
2916 streamlined version of the approach described by William Robertson
2917 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2918 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2919 of an inuse chunk holds the xor of its mstate and a random seed,
2920 that is checked upon calls to free() and realloc(). This is
2921 (probabilistically) unguessable from outside the program, but can be
2922 computed by any code successfully malloc'ing any chunk, so does not
2923 itself provide protection against code that has already broken
2924 security through some other means. Unlike Robertson et al, we
2925 always dynamically check addresses of all offset chunks (previous,
2926 next, etc). This turns out to be cheaper than relying on hashes.
2930 /* Check if address a is at least as high as any from MORECORE or MMAP */
2931 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2932 /* Check if address of next chunk n is higher than base chunk p */
2933 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2934 /* Check if p has its cinuse bit on */
2935 #define ok_cinuse(p) cinuse(p)
2936 /* Check if p has its pinuse bit on */
2937 #define ok_pinuse(p) pinuse(p)
2939 #else /* !INSECURE */
2940 #define ok_address(M, a) (1)
2941 #define ok_next(b, n) (1)
2942 #define ok_cinuse(p) (1)
2943 #define ok_pinuse(p) (1)
2944 #endif /* !INSECURE */
2946 #if (FOOTERS && !INSECURE)
2947 /* Check if (alleged) mstate m has expected magic field */
2948 #define ok_magic(M) ((M)->magic == mparams.magic)
2949 #else /* (FOOTERS && !INSECURE) */
2950 #define ok_magic(M) (1)
2951 #endif /* (FOOTERS && !INSECURE) */
2954 /* In gcc, use __builtin_expect to minimize impact of checks */
2956 #if defined(__GNUC__) && __GNUC__ >= 3
2957 #define RTCHECK(e) __builtin_expect(e, 1)
2959 #define RTCHECK(e) (e)
2961 #else /* !INSECURE */
2962 #define RTCHECK(e) (1)
2963 #endif /* !INSECURE */
2965 /* macros to set up inuse chunks with or without footers */
2969 #define mark_inuse_foot(M,p,s)
2971 /* Set cinuse bit and pinuse bit of next chunk */
2972 #define set_inuse(M,p,s)\
2973 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2974 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2976 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2977 #define set_inuse_and_pinuse(M,p,s)\
2978 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2979 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2981 /* Set size, cinuse and pinuse bit of this chunk */
2982 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2983 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2987 /* Set foot of inuse chunk to be xor of mstate and seed */
2988 #define mark_inuse_foot(M,p,s)\
2989 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2991 #define get_mstate_for(p)\
2992 ((mstate)(((mchunkptr)((char*)(p) +\
2993 (chunksize(p))))->prev_foot ^ mparams.magic))
2995 #define set_inuse(M,p,s)\
2996 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2997 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2998 mark_inuse_foot(M,p,s))
3000 #define set_inuse_and_pinuse(M,p,s)\
3001 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3002 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3003 mark_inuse_foot(M,p,s))
3005 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3006 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3007 mark_inuse_foot(M, p, s))
3009 #endif /* !FOOTERS */
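/*
  An illustrative sketch (not part of this malloc proper) of how the
  FOOTERS machinery above is meant to be used at the top of free-like
  routines: recover the owning mstate from the encoded footer and reject
  the chunk if that mstate's magic does not check out.
*/
#if 0 /* illustrative sketch -- not compiled */
static void footer_check_example(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);      /* xor with mparams.magic undoes the  */
  if (!ok_magic(fm)) {                /* encoding; a stray or overwritten   */
    USAGE_ERROR_ACTION(fm, p);        /* pointer rarely survives both steps */
    return;
  }
  /* ... proceed with the usual free logic using fm ... */
}
#endif /* illustrative sketch */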
3011 /* ---------------------------- setting mparams -------------------------- */
3013 /* Initialize mparams */
3014 static int init_mparams(void) {
3015 #ifdef NEED_GLOBAL_LOCK_INIT
3016 if (malloc_global_mutex_status <= 0)
3017 init_malloc_global_mutex();
3020 ACQUIRE_MALLOC_GLOBAL_LOCK();
3021 if (mparams.magic == 0) {
3027 psize = malloc_getpagesize;
3028 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3031 SYSTEM_INFO system_info;
3032 GetSystemInfo(&system_info);
3033 psize = system_info.dwPageSize;
3034 gsize = ((DEFAULT_GRANULARITY != 0)?
3035 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3039 /* Sanity-check configuration:
3040 size_t must be unsigned and as wide as pointer type.
3041 ints must be at least 4 bytes.
3042 alignment must be at least 8.
3043 Alignment, min chunk size, and page size must all be powers of 2.
3045 if ((sizeof(size_t) != sizeof(char*)) ||
3046 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
3047 (sizeof(int) < 4) ||
3048 (MALLOC_ALIGNMENT < (size_t)8U) ||
3049 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3050 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3051 ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3052 ((psize & (psize-SIZE_T_ONE)) != 0))
3055 mparams.granularity = gsize;
3056 mparams.page_size = psize;
3057 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3058 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3059 #if MORECORE_CONTIGUOUS
3060 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3061 #else /* MORECORE_CONTIGUOUS */
3062 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3063 #endif /* MORECORE_CONTIGUOUS */
3066 /* Set up lock for main malloc area */
3067 gm->mflags = mparams.default_mflags;
3068 INITIAL_LOCK(&gm->mutex);
3071 #if (FOOTERS && !INSECURE)
3075 unsigned char buf[sizeof(size_t)];
3076 /* Try to use /dev/urandom, else fall back on using time */
3077 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3078 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3079 magic = *((size_t *) buf);
3083 #endif /* USE_DEV_RANDOM */
3085 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3087 magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3089 magic |= (size_t)8U; /* ensure nonzero */
3090 magic &= ~(size_t)7U; /* improve chances of fault for bad values */
3092 #else /* (FOOTERS && !INSECURE) */
3093 magic = (size_t)0x58585858U;
3094 #endif /* (FOOTERS && !INSECURE) */
3096 mparams.magic = magic;
3099 RELEASE_MALLOC_GLOBAL_LOCK();
3103 /* support for mallopt */
3104 static int change_mparam(int param_number, int value) {
3105 size_t val = (value == -1)? MAX_SIZE_T : (size_t)value;
3106 ensure_initialization();
3107 switch(param_number) {
3108 case M_TRIM_THRESHOLD:
3109 mparams.trim_threshold = val;
3112 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3113 mparams.granularity = val;
3118 case M_MMAP_THRESHOLD:
3119 mparams.mmap_threshold = val;
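/*
  As an illustrative usage sketch (not compiled here), change_mparam backs
  the public mallopt entry point (dlmallopt when a name prefix is in use).
  A value of -1 maps to MAX_SIZE_T, so the first call below effectively
  disables automatic trimming, while the second raises the mmap threshold
  to 1MB:
*/
#if 0 /* illustrative usage sketch -- not compiled */
static void tuning_example(void) {
  mallopt(M_TRIM_THRESHOLD, -1);           /* never trim back to the system */
  mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  /* mmap only requests >= 1MB     */
}
#endif /* illustrative usage sketch */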
3127 /* ------------------------- Debugging Support --------------------------- */
3129 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3130 static void do_check_any_chunk(mstate m, mchunkptr p) {
3131 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3132 assert(ok_address(m, p));
3135 /* Check properties of top chunk */
3136 static void do_check_top_chunk(mstate m, mchunkptr p) {
3137 msegmentptr sp = segment_holding(m, (char*)p);
3138 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3140 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3141 assert(ok_address(m, p));
3142 assert(sz == m->topsize);
3144 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3146 assert(!pinuse(chunk_plus_offset(p, sz)));
3149 /* Check properties of (inuse) mmapped chunks */
3150 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3151 size_t sz = chunksize(p);
3152 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
3153 assert(is_mmapped(p));
3154 assert(use_mmap(m));
3155 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3156 assert(ok_address(m, p));
3157 assert(!is_small(sz));
3158 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3159 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3160 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3163 /* Check properties of inuse chunks */
3164 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3165 do_check_any_chunk(m, p);
3167 assert(next_pinuse(p));
3168 /* If not pinuse and not mmapped, previous chunk has OK offset */
3169 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3171 do_check_mmapped_chunk(m, p);
3174 /* Check properties of free chunks */
3175 static void do_check_free_chunk(mstate m, mchunkptr p) {
3176 size_t sz = chunksize(p);
3177 mchunkptr next = chunk_plus_offset(p, sz);
3178 do_check_any_chunk(m, p);
3180 assert(!next_pinuse(p));
3181 assert (!is_mmapped(p));
3182 if (p != m->dv && p != m->top) {
3183 if (sz >= MIN_CHUNK_SIZE) {
3184 assert((sz & CHUNK_ALIGN_MASK) == 0);
3185 assert(is_aligned(chunk2mem(p)));
3186 assert(next->prev_foot == sz);
3188 assert (next == m->top || cinuse(next));
3189 assert(p->fd->bk == p);
3190 assert(p->bk->fd == p);
3192 else /* markers are always of size SIZE_T_SIZE */
3193 assert(sz == SIZE_T_SIZE);
3197 /* Check properties of malloced chunks at the point they are malloced */
3198 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3200 mchunkptr p = mem2chunk(mem);
3201 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
3202 do_check_inuse_chunk(m, p);
3203 assert((sz & CHUNK_ALIGN_MASK) == 0);
3204 assert(sz >= MIN_CHUNK_SIZE);
3206 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3207 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3211 /* Check a tree and its subtrees. */
3212 static void do_check_tree(mstate m, tchunkptr t) {
3215 bindex_t tindex = t->index;
3216 size_t tsize = chunksize(t);
3218 compute_tree_index(tsize, idx);
3219 assert(tindex == idx);
3220 assert(tsize >= MIN_LARGE_SIZE);
3221 assert(tsize >= minsize_for_tree_index(idx));
3222 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3224 do { /* traverse through chain of same-sized nodes */
3225 do_check_any_chunk(m, ((mchunkptr)u));
3226 assert(u->index == tindex);
3227 assert(chunksize(u) == tsize);
3229 assert(!next_pinuse(u));
3230 assert(u->fd->bk == u);
3231 assert(u->bk->fd == u);
3232 if (u->parent == 0) {
3233 assert(u->child[0] == 0);
3234 assert(u->child[1] == 0);
3237 assert(head == 0); /* only one node on chain has parent */
3239 assert(u->parent != u);
3240 assert (u->parent->child[0] == u ||
3241 u->parent->child[1] == u ||
3242 *((tbinptr*)(u->parent)) == u);
3243 if (u->child[0] != 0) {
3244 assert(u->child[0]->parent == u);
3245 assert(u->child[0] != u);
3246 do_check_tree(m, u->child[0]);
3248 if (u->child[1] != 0) {
3249 assert(u->child[1]->parent == u);
3250 assert(u->child[1] != u);
3251 do_check_tree(m, u->child[1]);
3253 if (u->child[0] != 0 && u->child[1] != 0) {
3254 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3262 /* Check all the chunks in a treebin. */
3263 static void do_check_treebin(mstate m, bindex_t i) {
3264 tbinptr* tb = treebin_at(m, i);
3266 int empty = (m->treemap & (1U << i)) == 0;
3270 do_check_tree(m, t);
3273 /* Check all the chunks in a smallbin. */
3274 static void do_check_smallbin(mstate m, bindex_t i) {
3275 sbinptr b = smallbin_at(m, i);
3276 mchunkptr p = b->bk;
3277 unsigned int empty = (m->smallmap & (1U << i)) == 0;
3281 for (; p != b; p = p->bk) {
3282 size_t size = chunksize(p);
3284 /* each chunk claims to be free */
3285 do_check_free_chunk(m, p);
3286 /* chunk belongs in bin */
3287 assert(small_index(size) == i);
3288 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3289 /* chunk is followed by an inuse chunk */
3291 if (q->head != FENCEPOST_HEAD)
3292 do_check_inuse_chunk(m, q);
3297 /* Find x in a bin. Used in other check functions. */
3298 static int bin_find(mstate m, mchunkptr x) {
3299 size_t size = chunksize(x);
3300 if (is_small(size)) {
3301 bindex_t sidx = small_index(size);
3302 sbinptr b = smallbin_at(m, sidx);
3303 if (smallmap_is_marked(m, sidx)) {
3308 } while ((p = p->fd) != b);
3313 compute_tree_index(size, tidx);
3314 if (treemap_is_marked(m, tidx)) {
3315 tchunkptr t = *treebin_at(m, tidx);
3316 size_t sizebits = size << leftshift_for_tree_index(tidx);
3317 while (t != 0 && chunksize(t) != size) {
3318 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3324 if (u == (tchunkptr)x)
3326 } while ((u = u->fd) != t);
3333 /* Traverse each chunk and check it; return total */
3334 static size_t traverse_and_check(mstate m) {
3336 if (is_initialized(m)) {
3337 msegmentptr s = &m->seg;
3338 sum += m->topsize + TOP_FOOT_SIZE;
3340 mchunkptr q = align_as_chunk(s->base);
3341 mchunkptr lastq = 0;
3343 while (segment_holds(s, q) &&
3344 q != m->top && q->head != FENCEPOST_HEAD) {
3345 sum += chunksize(q);
3347 assert(!bin_find(m, q));
3348 do_check_inuse_chunk(m, q);
3351 assert(q == m->dv || bin_find(m, q));
3352 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
3353 do_check_free_chunk(m, q);
3364 /* Check all properties of malloc_state. */
3365 static void do_check_malloc_state(mstate m) {
3369 for (i = 0; i < NSMALLBINS; ++i)
3370 do_check_smallbin(m, i);
3371 for (i = 0; i < NTREEBINS; ++i)
3372 do_check_treebin(m, i);
3374 if (m->dvsize != 0) { /* check dv chunk */
3375 do_check_any_chunk(m, m->dv);
3376 assert(m->dvsize == chunksize(m->dv));
3377 assert(m->dvsize >= MIN_CHUNK_SIZE);
3378 assert(bin_find(m, m->dv) == 0);
3381 if (m->top != 0) { /* check top chunk */
3382 do_check_top_chunk(m, m->top);
3383 /*assert(m->topsize == chunksize(m->top)); redundant */
3384 assert(m->topsize > 0);
3385 assert(bin_find(m, m->top) == 0);
3388 total = traverse_and_check(m);
3389 assert(total <= m->footprint);
3390 assert(m->footprint <= m->max_footprint);
3394 /* ----------------------------- statistics ------------------------------ */
3397 static struct mallinfo internal_mallinfo(mstate m) {
3398 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3399 ensure_initialization();
3400 if (!PREACTION(m)) {
3401 check_malloc_state(m);
3402 if (is_initialized(m)) {
3403 size_t nfree = SIZE_T_ONE; /* top always free */
3404 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3406 msegmentptr s = &m->seg;
3408 mchunkptr q = align_as_chunk(s->base);
3409 while (segment_holds(s, q) &&
3410 q != m->top && q->head != FENCEPOST_HEAD) {
3411 size_t sz = chunksize(q);
3424 nm.hblkhd = m->footprint - sum;
3425 nm.usmblks = m->max_footprint;
3426 nm.uordblks = m->footprint - mfree;
3427 nm.fordblks = mfree;
3428 nm.keepcost = m->topsize;
3435 #endif /* !NO_MALLINFO */
3437 static void internal_malloc_stats(mstate m) {
3438 ensure_initialization();
3439 if (!PREACTION(m)) {
3443 check_malloc_state(m);
3444 if (is_initialized(m)) {
3445 msegmentptr s = &m->seg;
3446 maxfp = m->max_footprint;
3448 used = fp - (m->topsize + TOP_FOOT_SIZE);
3451 mchunkptr q = align_as_chunk(s->base);
3452 while (segment_holds(s, q) &&
3453 q != m->top && q->head != FENCEPOST_HEAD) {
3455 used -= chunksize(q);
3462 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3463 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3464 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3470 /* ----------------------- Operations on smallbins ----------------------- */
3473 Various forms of linking and unlinking are defined as macros. Even
3474 the ones for trees, which are very long but have very short typical
3475 paths. This is ugly but reduces reliance on inlining support of compilers.
3479 /* Link a free chunk into a smallbin */
3480 #define insert_small_chunk(M, P, S) {\
3481 bindex_t I = small_index(S);\
3482 mchunkptr B = smallbin_at(M, I);\
3484 assert(S >= MIN_CHUNK_SIZE);\
3485 if (!smallmap_is_marked(M, I))\
3486 mark_smallmap(M, I);\
3487 else if (RTCHECK(ok_address(M, B->fd)))\
3490 CORRUPTION_ERROR_ACTION(M);\
3498 /* Unlink a chunk from a smallbin */
3499 #define unlink_small_chunk(M, P, S) {\
3500 mchunkptr F = P->fd;\
3501 mchunkptr B = P->bk;\
3502 bindex_t I = small_index(S);\
3505 assert(chunksize(P) == small_index2size(I));\
3507 clear_smallmap(M, I);\
3508 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3509 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3514 CORRUPTION_ERROR_ACTION(M);\
3518 /* Unlink the first chunk from a smallbin */
3519 #define unlink_first_small_chunk(M, B, P, I) {\
3520 mchunkptr F = P->fd;\
3523 assert(chunksize(P) == small_index2size(I));\
3525 clear_smallmap(M, I);\
3526 else if (RTCHECK(ok_address(M, F))) {\
3531 CORRUPTION_ERROR_ACTION(M);\
3537 /* Replace dv node, binning the old one */
3538 /* Used only when dvsize known to be small */
3539 #define replace_dv(M, P, S) {\
3540 size_t DVS = M->dvsize;\
3542 mchunkptr DV = M->dv;\
3543 assert(is_small(DVS));\
3544 insert_small_chunk(M, DV, DVS);\
3550 /* ------------------------- Operations on trees ------------------------- */
3552 /* Insert chunk into tree */
3553 #define insert_large_chunk(M, X, S) {\
3556 compute_tree_index(S, I);\
3557 H = treebin_at(M, I);\
3559 X->child[0] = X->child[1] = 0;\
3560 if (!treemap_is_marked(M, I)) {\
3561 mark_treemap(M, I);\
3563 X->parent = (tchunkptr)H;\
3568 size_t K = S << leftshift_for_tree_index(I);\
3570 if (chunksize(T) != S) {\
3571 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3575 else if (RTCHECK(ok_address(M, C))) {\
3582 CORRUPTION_ERROR_ACTION(M);\
3587 tchunkptr F = T->fd;\
3588 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3596 CORRUPTION_ERROR_ACTION(M);\
3607 1. If x is a chained node, unlink it from its same-sized fd/bk links
3608 and choose its bk node as its replacement.
3609 2. If x was the last node of its size, but not a leaf node, it must
3610 be replaced with a leaf node (not merely one with an open left or
3611 right), to make sure that lefts and rights of descendants
3612 correspond properly to bit masks. We use the rightmost descendant
3613 of x. We could use any other leaf, but this is easy to locate and
3614 tends to counteract removal of leftmosts elsewhere, and so keeps
3615 paths shorter than minimally guaranteed. This doesn't loop much
3616 because on average a node in a tree is near the bottom.
3617 3. If x is the base of a chain (i.e., has parent links) relink
3618 x's parent and children to x's replacement (or null if none).
3621 #define unlink_large_chunk(M, X) {\
3622 tchunkptr XP = X->parent;\
3625 tchunkptr F = X->fd;\
3627 if (RTCHECK(ok_address(M, F))) {\
3632 CORRUPTION_ERROR_ACTION(M);\
3637 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3638 ((R = *(RP = &(X->child[0]))) != 0)) {\
3640 while ((*(CP = &(R->child[1])) != 0) ||\
3641 (*(CP = &(R->child[0])) != 0)) {\
3644 if (RTCHECK(ok_address(M, RP)))\
3647 CORRUPTION_ERROR_ACTION(M);\
3652 tbinptr* H = treebin_at(M, X->index);\
3654 if ((*H = R) == 0) \
3655 clear_treemap(M, X->index);\
3657 else if (RTCHECK(ok_address(M, XP))) {\
3658 if (XP->child[0] == X) \
3664 CORRUPTION_ERROR_ACTION(M);\
3666 if (RTCHECK(ok_address(M, R))) {\
3669 if ((C0 = X->child[0]) != 0) {\
3670 if (RTCHECK(ok_address(M, C0))) {\
3675 CORRUPTION_ERROR_ACTION(M);\
3677 if ((C1 = X->child[1]) != 0) {\
3678 if (RTCHECK(ok_address(M, C1))) {\
3683 CORRUPTION_ERROR_ACTION(M);\
3687 CORRUPTION_ERROR_ACTION(M);\
3692 /* Relays to large vs small bin operations */
3694 #define insert_chunk(M, P, S)\
3695 if (is_small(S)) insert_small_chunk(M, P, S)\
3696 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3698 #define unlink_chunk(M, P, S)\
3699 if (is_small(S)) unlink_small_chunk(M, P, S)\
3700 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3703 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3706 #define internal_malloc(m, b) mspace_malloc(m, b)
3707 #define internal_free(m, mem) mspace_free(m,mem);
3708 #else /* ONLY_MSPACES */
3710 #define internal_malloc(m, b)\
3711 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3712 #define internal_free(m, mem)\
3713 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3715 #define internal_malloc(m, b) dlmalloc(b)
3716 #define internal_free(m, mem) dlfree(mem)
3717 #endif /* MSPACES */
3718 #endif /* ONLY_MSPACES */
3720 /* ----------------------- Direct-mmapping chunks ----------------------- */
3723 Directly mmapped chunks are set up with an offset to the start of
3724 the mmapped region stored in the prev_foot field of the chunk. This
3725 allows reconstruction of the required argument to MUNMAP when freed,
3726 and also allows adjustment of the returned chunk to meet alignment
3727 requirements (especially in memalign). There is also enough space
3728 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3729 the PINUSE bit so frees can be checked.
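  As an illustrative sketch (mirroring mmap_resize below and the mmapped
  branch of free), given a directly mmapped chunk p:

      size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;   distance back to the mmap base
      char*  base   = (char*)p - offset;                address originally returned by mmap
      size_t len    = chunksize(p) + offset + MMAP_FOOT_PAD;

  and CALL_MUNMAP(base, len) releases the entire region in one call.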
3732 /* Malloc using mmap */
3733 static void* mmap_alloc(mstate m, size_t nb) {
3734 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3735 if (mmsize > nb) { /* Check for wrap around 0 */
3736 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3738 size_t offset = align_offset(chunk2mem(mm));
3739 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3740 mchunkptr p = (mchunkptr)(mm + offset);
3741 p->prev_foot = offset | IS_MMAPPED_BIT;
3742 (p)->head = (psize|CINUSE_BIT);
3743 mark_inuse_foot(m, p, psize);
3744 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3745 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3747 if (mm < m->least_addr)
3749 if ((m->footprint += mmsize) > m->max_footprint)
3750 m->max_footprint = m->footprint;
3751 assert(is_aligned(chunk2mem(p)));
3752 check_mmapped_chunk(m, p);
3753 return chunk2mem(p);
3759 /* Realloc using mmap */
3760 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3761 size_t oldsize = chunksize(oldp);
3762 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3764 /* Keep old chunk if big enough but not too big */
3765 if (oldsize >= nb + SIZE_T_SIZE &&
3766 (oldsize - nb) <= (mparams.granularity << 1))
3769 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3770 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3771 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3772 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3773 oldmmsize, newmmsize, 1);
3775 mchunkptr newp = (mchunkptr)(cp + offset);
3776 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3777 newp->head = (psize|CINUSE_BIT);
3778 mark_inuse_foot(m, newp, psize);
3779 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3780 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3782 if (cp < m->least_addr)
3784 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3785 m->max_footprint = m->footprint;
3786 check_mmapped_chunk(m, newp);
3793 /* -------------------------- mspace management -------------------------- */
3795 /* Initialize top chunk and its size */
3796 static void init_top(mstate m, mchunkptr p, size_t psize) {
3797 /* Ensure alignment */
3798 size_t offset = align_offset(chunk2mem(p));
3799 p = (mchunkptr)((char*)p + offset);
3804 p->head = psize | PINUSE_BIT;
3805 /* set size of fake trailing chunk holding overhead space only once */
3806 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3807 m->trim_check = mparams.trim_threshold; /* reset on each update */
3810 /* Initialize bins for a new mstate that is otherwise zeroed out */
3811 static void init_bins(mstate m) {
3812 /* Establish circular links for smallbins */
3814 for (i = 0; i < NSMALLBINS; ++i) {
3815 sbinptr bin = smallbin_at(m,i);
3816 bin->fd = bin->bk = bin;
3820 #if PROCEED_ON_ERROR
3822 /* default corruption action */
3823 static void reset_on_error(mstate m) {
3825 ++malloc_corruption_error_count;
3826 /* Reinitialize fields to forget about all memory */
3827 m->smallbins = m->treebins = 0;
3828 m->dvsize = m->topsize = 0;
3833 for (i = 0; i < NTREEBINS; ++i)
3834 *treebin_at(m, i) = 0;
3837 #endif /* PROCEED_ON_ERROR */
3839 /* Allocate a chunk at the start of the new base and consolidate the
     remainder with the first chunk of the old (successor) base. */
3840 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3842 mchunkptr p = align_as_chunk(newbase);
3843 mchunkptr oldfirst = align_as_chunk(oldbase);
3844 size_t psize = (char*)oldfirst - (char*)p;
3845 mchunkptr q = chunk_plus_offset(p, nb);
3846 size_t qsize = psize - nb;
3847 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3849 assert((char*)oldfirst > (char*)q);
3850 assert(pinuse(oldfirst));
3851 assert(qsize >= MIN_CHUNK_SIZE);
3853 /* consolidate remainder with first chunk of old base */
3854 if (oldfirst == m->top) {
3855 size_t tsize = m->topsize += qsize;
3857 q->head = tsize | PINUSE_BIT;
3858 check_top_chunk(m, q);
3860 else if (oldfirst == m->dv) {
3861 size_t dsize = m->dvsize += qsize;
3863 set_size_and_pinuse_of_free_chunk(q, dsize);
3866 if (!cinuse(oldfirst)) {
3867 size_t nsize = chunksize(oldfirst);
3868 unlink_chunk(m, oldfirst, nsize);
3869 oldfirst = chunk_plus_offset(oldfirst, nsize);
3872 set_free_with_pinuse(q, qsize, oldfirst);
3873 insert_chunk(m, q, qsize);
3874 check_free_chunk(m, q);
3877 check_malloced_chunk(m, chunk2mem(p), nb);
3878 return chunk2mem(p);
3881 /* Add a segment to hold a new noncontiguous region */
3882 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3883 /* Determine locations and sizes of segment, fenceposts, old top */
3884 char* old_top = (char*)m->top;
3885 msegmentptr oldsp = segment_holding(m, old_top);
3886 char* old_end = oldsp->base + oldsp->size;
3887 size_t ssize = pad_request(sizeof(struct malloc_segment));
3888 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3889 size_t offset = align_offset(chunk2mem(rawsp));
3890 char* asp = rawsp + offset;
3891 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3892 mchunkptr sp = (mchunkptr)csp;
3893 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3894 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3895 mchunkptr p = tnext;
3898 /* reset top to new space */
3899 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3901 /* Set up segment record */
3902 assert(is_aligned(ss));
3903 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3904 *ss = m->seg; /* Push current record */
3905 m->seg.base = tbase;
3906 m->seg.size = tsize;
3907 m->seg.sflags = mmapped;
3910 /* Insert trailing fenceposts */
3912 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3913 p->head = FENCEPOST_HEAD;
3915 if ((char*)(&(nextp->head)) < old_end)
3920 assert(nfences >= 2);
3922 /* Insert the rest of old top into a bin as an ordinary free chunk */
3923 if (csp != old_top) {
3924 mchunkptr q = (mchunkptr)old_top;
3925 size_t psize = csp - old_top;
3926 mchunkptr tn = chunk_plus_offset(q, psize);
3927 set_free_with_pinuse(q, psize, tn);
3928 insert_chunk(m, q, psize);
3931 check_top_chunk(m, m->top);
3934 /* -------------------------- System allocation -------------------------- */
3936 /* Get memory from system using MORECORE or MMAP */
3937 static void* sys_alloc(mstate m, size_t nb) {
3938 char* tbase = CMFAIL;
3940 flag_t mmap_flag = 0;
3942 ensure_initialization();
3944 /* Directly map large chunks */
3945 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3946 void* mem = mmap_alloc(m, nb);
3952 Try getting memory in any of three ways (in most-preferred to
3953 least-preferred order):
3954 1. A call to MORECORE that can normally contiguously extend memory.
3955 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3956 main space is mmapped or a previous contiguous call failed)
3957 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3958 Note that under the default settings, if MORECORE is unable to
3959 fulfill a request, and HAVE_MMAP is true, then mmap is
3960 used as a noncontiguous system allocator. This is a useful backup
3961 strategy for systems with holes in address spaces -- in this case
3962 sbrk cannot contiguously expand the heap, but mmap may be able to
3964 3. A call to MORECORE that cannot usually contiguously extend memory.
3965 (disabled if not HAVE_MORECORE)
3967 In all cases, we need to request enough bytes from system to ensure
3968 we can malloc nb bytes upon success, so pad with enough space for
3969 top_foot, plus alignment-pad to make sure we don't lose bytes if
3970 not on boundary, and round this up to a granularity unit.
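  For example (illustrative numbers only): with mparams.granularity set to
  64*1024 and nb == 100000, asize = granularity_align(nb + SYS_ALLOC_PADDING)
  comes out as 131072, i.e. the padded request rounded up to the next
  multiple of the granularity (assuming SYS_ALLOC_PADDING is well under the
  remaining ~31K of rounding slack).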
3973 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3975 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3977 ACQUIRE_MALLOC_GLOBAL_LOCK();
3979 if (ss == 0) { /* First time through or recovery */
3980 char* base = (char*)CALL_MORECORE(0);
3981 if (base != CMFAIL) {
3982 asize = granularity_align(nb + SYS_ALLOC_PADDING);
3983 /* Adjust to end on a page boundary */
3984 if (!is_page_aligned(base))
3985 asize += (page_align((size_t)base) - (size_t)base);
3986 /* Can't call MORECORE if size is negative when treated as signed */
3987 if (asize < HALF_MAX_SIZE_T &&
3988 (br = (char*)(CALL_MORECORE(asize))) == base) {
3995 /* Subtract out existing available top space from MORECORE request. */
3996 asize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
3997 /* Use mem here only if it did contiguously extend old space */
3998 if (asize < HALF_MAX_SIZE_T &&
3999 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
4005 if (tbase == CMFAIL) { /* Cope with partial failure */
4006 if (br != CMFAIL) { /* Try to use/extend the space we did get */
4007 if (asize < HALF_MAX_SIZE_T &&
4008 asize < nb + SYS_ALLOC_PADDING) {
4009 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - asize);
4010 if (esize < HALF_MAX_SIZE_T) {
4011 char* end = (char*)CALL_MORECORE(esize);
4014 else { /* Can't use; try to release */
4015 (void) CALL_MORECORE(-asize);
4021 if (br != CMFAIL) { /* Use the space we did get */
4026 disable_contiguous(m); /* Don't try contiguous path in the future */
4029 RELEASE_MALLOC_GLOBAL_LOCK();
4032 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
4033 size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
4034 if (rsize > nb) { /* Fail if wraps around zero */
4035 char* mp = (char*)(CALL_MMAP(rsize));
4039 mmap_flag = IS_MMAPPED_BIT;
4044 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4045 size_t asize = granularity_align(nb + SYS_ALLOC_PADDING);
4046 if (asize < HALF_MAX_SIZE_T) {
4049 ACQUIRE_MALLOC_GLOBAL_LOCK();
4050 br = (char*)(CALL_MORECORE(asize));
4051 end = (char*)(CALL_MORECORE(0));
4052 RELEASE_MALLOC_GLOBAL_LOCK();
4053 if (br != CMFAIL && end != CMFAIL && br < end) {
4054 size_t ssize = end - br;
4055 if (ssize > nb + TOP_FOOT_SIZE) {
4063 if (tbase != CMFAIL) {
4065 if ((m->footprint += tsize) > m->max_footprint)
4066 m->max_footprint = m->footprint;
4068 if (!is_initialized(m)) { /* first-time initialization */
4069 m->seg.base = m->least_addr = tbase;
4070 m->seg.size = tsize;
4071 m->seg.sflags = mmap_flag;
4072 m->magic = mparams.magic;
4073 m->release_checks = MAX_RELEASE_CHECK_RATE;
4077 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4081 /* Offset top by embedded malloc_state */
4082 mchunkptr mn = next_chunk(mem2chunk(m));
4083 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4088 /* Try to merge with an existing segment */
4089 msegmentptr sp = &m->seg;
4090 /* Only consider most recent segment if traversal suppressed */
4091 while (sp != 0 && tbase != sp->base + sp->size)
4092 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4094 !is_extern_segment(sp) &&
4095 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
4096 segment_holds(sp, m->top)) { /* append */
4098 init_top(m, m->top, m->topsize + tsize);
4101 if (tbase < m->least_addr)
4102 m->least_addr = tbase;
4104 while (sp != 0 && sp->base != tbase + tsize)
4105 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4107 !is_extern_segment(sp) &&
4108 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
4109 char* oldbase = sp->base;
4112 return prepend_alloc(m, tbase, oldbase, nb);
4115 add_segment(m, tbase, tsize, mmap_flag);
4119 if (nb < m->topsize) { /* Allocate from new or extended top space */
4120 size_t rsize = m->topsize -= nb;
4121 mchunkptr p = m->top;
4122 mchunkptr r = m->top = chunk_plus_offset(p, nb);
4123 r->head = rsize | PINUSE_BIT;
4124 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4125 check_top_chunk(m, m->top);
4126 check_malloced_chunk(m, chunk2mem(p), nb);
4127 return chunk2mem(p);
4131 MALLOC_FAILURE_ACTION;
4135 /* ----------------------- system deallocation -------------------------- */
4137 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4138 static size_t release_unused_segments(mstate m) {
4139 size_t released = 0;
4141 msegmentptr pred = &m->seg;
4142 msegmentptr sp = pred->next;
4144 char* base = sp->base;
4145 size_t size = sp->size;
4146 msegmentptr next = sp->next;
4148 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4149 mchunkptr p = align_as_chunk(base);
4150 size_t psize = chunksize(p);
4151 /* Can unmap if first chunk holds entire segment and not pinned */
4152 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4153 tchunkptr tp = (tchunkptr)p;
4154 assert(segment_holds(sp, (char*)sp));
4160 unlink_large_chunk(m, tp);
4162 if (CALL_MUNMAP(base, size) == 0) {
4164 m->footprint -= size;
4165 /* unlink obsoleted record */
4169 else { /* back out if cannot unmap */
4170 insert_large_chunk(m, tp, psize);
4174 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4179 /* Reset check counter */
4180 m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
4181 nsegs : MAX_RELEASE_CHECK_RATE);
4185 static int sys_trim(mstate m, size_t pad) {
4186 size_t released = 0;
4187 ensure_initialization();
4188 if (pad < MAX_REQUEST && is_initialized(m)) {
4189 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4191 if (m->topsize > pad) {
4192 /* Shrink top space in granularity-size units, keeping at least one */
4193 size_t unit = mparams.granularity;
4194 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4196 msegmentptr sp = segment_holding(m, (char*)m->top);
4198 if (!is_extern_segment(sp)) {
4199 if (is_mmapped_segment(sp)) {
4201 sp->size >= extra &&
4202 !has_segment_link(m, sp)) { /* can't shrink if pinned */
4203 size_t newsize = sp->size - extra;
4204 /* Prefer mremap, fall back to munmap */
4205 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4206 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4211 else if (HAVE_MORECORE) {
4212 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4213 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4214 ACQUIRE_MALLOC_GLOBAL_LOCK();
4216 /* Make sure end of memory is where we last set it. */
4217 char* old_br = (char*)(CALL_MORECORE(0));
4218 if (old_br == sp->base + sp->size) {
4219 char* rel_br = (char*)(CALL_MORECORE(-extra));
4220 char* new_br = (char*)(CALL_MORECORE(0));
4221 if (rel_br != CMFAIL && new_br < old_br)
4222 released = old_br - new_br;
4225 RELEASE_MALLOC_GLOBAL_LOCK();
4229 if (released != 0) {
4230 sp->size -= released;
4231 m->footprint -= released;
4232 init_top(m, m->top, m->topsize - released);
4233 check_top_chunk(m, m->top);
4237 /* Unmap any unused mmapped segments */
4239 released += release_unused_segments(m);
4241 /* On failure, disable autotrim to avoid repeated failed future calls */
4242 if (released == 0 && m->topsize > m->trim_check)
4243 m->trim_check = MAX_SIZE_T;
4246 return (released != 0)? 1 : 0;
4250 /* ---------------------------- malloc support --------------------------- */
4252 /* allocate a large request from the best fitting chunk in a treebin */
4253 static void* tmalloc_large(mstate m, size_t nb) {
4255 size_t rsize = -nb; /* Unsigned negation */
4258 compute_tree_index(nb, idx);
4259 if ((t = *treebin_at(m, idx)) != 0) {
4260 /* Traverse tree for this bin looking for node with size == nb */
4261 size_t sizebits = nb << leftshift_for_tree_index(idx);
4262 tchunkptr rst = 0; /* The deepest untaken right subtree */
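      /* At each level the current top bit of sizebits picks the left or
         right child (mirroring compute_tree_index), so the walk follows
         nb's remaining size bits; rst records the nearest right subtree
         passed over, whose sizes all exceed nb, in case no exact fit is
         found. */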
4265 size_t trem = chunksize(t) - nb;
4268 if ((rsize = trem) == 0)
4272 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4273 if (rt != 0 && rt != t)
4276 t = rst; /* set t to least subtree holding sizes > nb */
4282 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4283 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4284 if (leftbits != 0) {
4286 binmap_t leastbit = least_bit(leftbits);
4287 compute_bit2idx(leastbit, i);
4288 t = *treebin_at(m, i);
4292 while (t != 0) { /* find smallest of tree or subtree */
4293 size_t trem = chunksize(t) - nb;
4298 t = leftmost_child(t);
4301 /* If dv is a better fit, return 0 so malloc will use it */
4302 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4303 if (RTCHECK(ok_address(m, v))) { /* split */
4304 mchunkptr r = chunk_plus_offset(v, nb);
4305 assert(chunksize(v) == rsize + nb);
4306 if (RTCHECK(ok_next(v, r))) {
4307 unlink_large_chunk(m, v);
4308 if (rsize < MIN_CHUNK_SIZE)
4309 set_inuse_and_pinuse(m, v, (rsize + nb));
4311 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4312 set_size_and_pinuse_of_free_chunk(r, rsize);
4313 insert_chunk(m, r, rsize);
4315 return chunk2mem(v);
4318 CORRUPTION_ERROR_ACTION(m);
4323 /* allocate a small request from the best fitting chunk in a treebin */
4324 static void* tmalloc_small(mstate m, size_t nb) {
4328 binmap_t leastbit = least_bit(m->treemap);
4329 compute_bit2idx(leastbit, i);
4330 v = t = *treebin_at(m, i);
4331 rsize = chunksize(t) - nb;
4333 while ((t = leftmost_child(t)) != 0) {
4334 size_t trem = chunksize(t) - nb;
4341 if (RTCHECK(ok_address(m, v))) {
4342 mchunkptr r = chunk_plus_offset(v, nb);
4343 assert(chunksize(v) == rsize + nb);
4344 if (RTCHECK(ok_next(v, r))) {
4345 unlink_large_chunk(m, v);
4346 if (rsize < MIN_CHUNK_SIZE)
4347 set_inuse_and_pinuse(m, v, (rsize + nb));
4349 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4350 set_size_and_pinuse_of_free_chunk(r, rsize);
4351 replace_dv(m, r, rsize);
4353 return chunk2mem(v);
4357 CORRUPTION_ERROR_ACTION(m);
4361 /* --------------------------- realloc support --------------------------- */
4363 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
4364 if (bytes >= MAX_REQUEST) {
4365 MALLOC_FAILURE_ACTION;
4368 if (!PREACTION(m)) {
4369 mchunkptr oldp = mem2chunk(oldmem);
4370 size_t oldsize = chunksize(oldp);
4371 mchunkptr next = chunk_plus_offset(oldp, oldsize);
4375 /* Try to either shrink or extend into top. Else malloc-copy-free */
4377 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
4378 ok_next(oldp, next) && ok_pinuse(next))) {
4379 size_t nb = request2size(bytes);
4380 if (is_mmapped(oldp))
4381 newp = mmap_resize(m, oldp, nb);
4382 else if (oldsize >= nb) { /* already big enough */
4383 size_t rsize = oldsize - nb;
4385 if (rsize >= MIN_CHUNK_SIZE) {
4386 mchunkptr remainder = chunk_plus_offset(newp, nb);
4387 set_inuse(m, newp, nb);
4388 set_inuse(m, remainder, rsize);
4389 extra = chunk2mem(remainder);
4392 else if (next == m->top && oldsize + m->topsize > nb) {
4393 /* Expand into top */
4394 size_t newsize = oldsize + m->topsize;
4395 size_t newtopsize = newsize - nb;
4396 mchunkptr newtop = chunk_plus_offset(oldp, nb);
4397 set_inuse(m, oldp, nb);
4398 newtop->head = newtopsize |PINUSE_BIT;
4400 m->topsize = newtopsize;
4405 USAGE_ERROR_ACTION(m, oldmem);
4414 internal_free(m, extra);
4416 check_inuse_chunk(m, newp);
4417 return chunk2mem(newp);
4420 void* newmem = internal_malloc(m, bytes);
4422 size_t oc = oldsize - overhead_for(oldp);
4423 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4424 internal_free(m, oldmem);
4432 /* --------------------------- memalign support -------------------------- */
4434 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4435 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
4436 return internal_malloc(m, bytes);
4437 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4438 alignment = MIN_CHUNK_SIZE;
4439 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4440 size_t a = MALLOC_ALIGNMENT << 1;
4441 while (a < alignment) a <<= 1;
4445 if (bytes >= MAX_REQUEST - alignment) {
4446 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4447 MALLOC_FAILURE_ACTION;
4451 size_t nb = request2size(bytes);
4452 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4453 char* mem = (char*)internal_malloc(m, req);
4457 mchunkptr p = mem2chunk(mem);
4459 if (PREACTION(m)) return 0;
4460 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4462 Find an aligned spot inside chunk. Since we need to give
4463 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4464 the first calculation places us at a spot with less than
4465 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4466 We've allocated enough total room so that this is always
4469 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4473 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4475 mchunkptr newp = (mchunkptr)pos;
4476 size_t leadsize = pos - (char*)(p);
4477 size_t newsize = chunksize(p) - leadsize;
4479 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4480 newp->prev_foot = p->prev_foot + leadsize;
4481 newp->head = (newsize|CINUSE_BIT);
4483 else { /* Otherwise, give back leader, use the rest */
4484 set_inuse(m, newp, newsize);
4485 set_inuse(m, p, leadsize);
4486 leader = chunk2mem(p);
4491 /* Give back spare room at the end */
4492 if (!is_mmapped(p)) {
4493 size_t size = chunksize(p);
4494 if (size > nb + MIN_CHUNK_SIZE) {
4495 size_t remainder_size = size - nb;
4496 mchunkptr remainder = chunk_plus_offset(p, nb);
4497 set_inuse(m, p, nb);
4498 set_inuse(m, remainder, remainder_size);
4499 trailer = chunk2mem(remainder);
4503 assert (chunksize(p) >= nb);
4504 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4505 check_inuse_chunk(m, p);
4508 internal_free(m, leader);
4511 internal_free(m, trailer);
4513 return chunk2mem(p);
4519 /* ------------------------ comalloc/coalloc support --------------------- */
4521 static void** ialloc(mstate m,
4527 This provides common support for independent_X routines, handling
4528 all of the combinations that can result.
4531      opts bit 0 set if all elements are same size (using sizes[0])
4532      opts bit 1 set if elements should be zeroed
4535 size_t element_size; /* chunksize of each element, if all same */
4536 size_t contents_size; /* total size of elements */
4537 size_t array_size; /* request size of pointer array */
4538 void* mem; /* malloced aggregate space */
4539 mchunkptr p; /* corresponding chunk */
4540 size_t remainder_size; /* remaining bytes while splitting */
4541 void** marray; /* either "chunks" or malloced ptr array */
4542 mchunkptr array_chunk; /* chunk for malloced ptr array */
4543 flag_t was_enabled; /* to disable mmap */
4547 ensure_initialization();
4548 /* compute array length, if needed */
4550 if (n_elements == 0)
4551 return chunks; /* nothing to do */
4556 /* if empty req, must still return chunk representing empty array */
4557 if (n_elements == 0)
4558 return (void**)internal_malloc(m, 0);
4560 array_size = request2size(n_elements * (sizeof(void*)));
4563 /* compute total element size */
4564 if (opts & 0x1) { /* all-same-size */
4565 element_size = request2size(*sizes);
4566 contents_size = n_elements * element_size;
4568 else { /* add up all the sizes */
4571 for (i = 0; i != n_elements; ++i)
4572 contents_size += request2size(sizes[i]);
4575 size = contents_size + array_size;
4578 Allocate the aggregate chunk. First disable direct-mmapping so
4579 malloc won't use it, since we would not be able to later
4580 free/realloc space internal to a segregated mmap region.
4582 was_enabled = use_mmap(m);
4584 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4590 if (PREACTION(m)) return 0;
4592 remainder_size = chunksize(p);
4594 assert(!is_mmapped(p));
4596 if (opts & 0x2) { /* optionally clear the elements */
4597 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4600 /* If not provided, allocate the pointer array as final part of chunk */
4602 size_t array_chunk_size;
4603 array_chunk = chunk_plus_offset(p, contents_size);
4604 array_chunk_size = remainder_size - contents_size;
4605 marray = (void**) (chunk2mem(array_chunk));
4606 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4607 remainder_size = contents_size;
4610 /* split out elements */
4611 for (i = 0; ; ++i) {
4612 marray[i] = chunk2mem(p);
4613 if (i != n_elements-1) {
4614 if (element_size != 0)
4615 size = element_size;
4617 size = request2size(sizes[i]);
4618 remainder_size -= size;
4619 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4620 p = chunk_plus_offset(p, size);
4622 else { /* the final element absorbs any overallocation slop */
4623 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4629 if (marray != chunks) {
4630 /* final element must have exactly exhausted chunk */
4631 if (element_size != 0) {
4632 assert(remainder_size == element_size);
4635 assert(remainder_size == request2size(sizes[i]));
4637 check_inuse_chunk(m, mem2chunk(marray));
4639 for (i = 0; i != n_elements; ++i)
4640 check_inuse_chunk(m, mem2chunk(marray[i]));
4649 /* -------------------------- public routines ---------------------------- */
4653 void* dlmalloc(size_t bytes) {
4656 If a small request (< 256 bytes minus per-chunk overhead):
4657 1. If one exists, use a remainderless chunk in associated smallbin.
4658 (Remainderless means that there are too few excess bytes to
4659 represent as a chunk.)
4660 2. If it is big enough, use the dv chunk, which is normally the
4661 chunk adjacent to the one used for the most recent small request.
4662 3. If one exists, split the smallest available chunk in a bin,
4663 saving remainder in dv.
4664 4. If it is big enough, use the top chunk.
4665 5. If available, get memory from system and use it
4666 Otherwise, for a large request:
4667 1. Find the smallest available binned chunk that fits, and use it
4668 if it is better fitting than dv chunk, splitting if necessary.
4669 2. If better fitting than any binned chunk, use the dv chunk.
4670 3. If it is big enough, use the top chunk.
4671 4. If request size >= mmap threshold, try to directly mmap this chunk.
4672 5. If available, get memory from system and use it
4674 The ugly goto's here ensure that postaction occurs along all paths.
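    For example (illustrative, assuming the default one-word CHUNK_OVERHEAD
    and 8-byte alignment): a 13-byte request is padded to nb == 24 and
    served from the matching smallbin, while malloc(0) is simply bumped up
    to MIN_CHUNK_SIZE.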
4678 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4681 if (!PREACTION(gm)) {
4684 if (bytes <= MAX_SMALL_REQUEST) {
4687 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4688 idx = small_index(nb);
4689 smallbits = gm->smallmap >> idx;
4691 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4693 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4694 b = smallbin_at(gm, idx);
4696 assert(chunksize(p) == small_index2size(idx));
4697 unlink_first_small_chunk(gm, b, p, idx);
4698 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4700 check_malloced_chunk(gm, mem, nb);
4704 else if (nb > gm->dvsize) {
4705 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4709 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4710 binmap_t leastbit = least_bit(leftbits);
4711 compute_bit2idx(leastbit, i);
4712 b = smallbin_at(gm, i);
4714 assert(chunksize(p) == small_index2size(i));
4715 unlink_first_small_chunk(gm, b, p, i);
4716 rsize = small_index2size(i) - nb;
4717 /* Fit here cannot be remainderless if 4byte sizes */
4718 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4719 set_inuse_and_pinuse(gm, p, small_index2size(i));
4721 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4722 r = chunk_plus_offset(p, nb);
4723 set_size_and_pinuse_of_free_chunk(r, rsize);
4724 replace_dv(gm, r, rsize);
4727 check_malloced_chunk(gm, mem, nb);
4731 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4732 check_malloced_chunk(gm, mem, nb);
4737 else if (bytes >= MAX_REQUEST)
4738 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4740 nb = pad_request(bytes);
4741 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4742 check_malloced_chunk(gm, mem, nb);
4747 if (nb <= gm->dvsize) {
4748 size_t rsize = gm->dvsize - nb;
4749 mchunkptr p = gm->dv;
4750 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4751 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4753 set_size_and_pinuse_of_free_chunk(r, rsize);
4754 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4756 else { /* exhaust dv */
4757 size_t dvs = gm->dvsize;
4760 set_inuse_and_pinuse(gm, p, dvs);
4763 check_malloced_chunk(gm, mem, nb);
4767 else if (nb < gm->topsize) { /* Split top */
4768 size_t rsize = gm->topsize -= nb;
4769 mchunkptr p = gm->top;
4770 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4771 r->head = rsize | PINUSE_BIT;
4772 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4774 check_top_chunk(gm, gm->top);
4775 check_malloced_chunk(gm, mem, nb);
4779 mem = sys_alloc(gm, nb);
4789 void dlfree(void* mem) {
4791 Consolidate freed chunks with preceding or succeeding bordering
4792 free chunks, if they exist, and then place in a bin. Intermixed
4793 with special cases for top, dv, mmapped chunks, and usage errors.
4797 mchunkptr p = mem2chunk(mem);
4799 mstate fm = get_mstate_for(p);
4800 if (!ok_magic(fm)) {
4801 USAGE_ERROR_ACTION(fm, p);
4806 #endif /* FOOTERS */
4807 if (!PREACTION(fm)) {
4808 check_inuse_chunk(fm, p);
4809 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4810 size_t psize = chunksize(p);
4811 mchunkptr next = chunk_plus_offset(p, psize);
4813 size_t prevsize = p->prev_foot;
4814 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4815 prevsize &= ~IS_MMAPPED_BIT;
4816 psize += prevsize + MMAP_FOOT_PAD;
4817 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4818 fm->footprint -= psize;
4822 mchunkptr prev = chunk_minus_offset(p, prevsize);
4825 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4827 unlink_chunk(fm, p, prevsize);
4829 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4831 set_free_with_pinuse(p, psize, next);
4840 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4841 if (!cinuse(next)) { /* consolidate forward */
4842 if (next == fm->top) {
4843 size_t tsize = fm->topsize += psize;
4845 p->head = tsize | PINUSE_BIT;
4850 if (should_trim(fm, tsize))
4854 else if (next == fm->dv) {
4855 size_t dsize = fm->dvsize += psize;
4857 set_size_and_pinuse_of_free_chunk(p, dsize);
4861 size_t nsize = chunksize(next);
4863 unlink_chunk(fm, next, nsize);
4864 set_size_and_pinuse_of_free_chunk(p, psize);
4872 set_free_with_pinuse(p, psize, next);
4874 if (is_small(psize)) {
4875 insert_small_chunk(fm, p, psize);
4876 check_free_chunk(fm, p);
4879 tchunkptr tp = (tchunkptr)p;
4880 insert_large_chunk(fm, tp, psize);
4881 check_free_chunk(fm, p);
4882 if (--fm->release_checks == 0)
4883 release_unused_segments(fm);
4889 USAGE_ERROR_ACTION(fm, p);
4896 #endif /* FOOTERS */
4899 void* dlcalloc(size_t n_elements, size_t elem_size) {
4902 if (n_elements != 0) {
4903 req = n_elements * elem_size;
4904 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4905 (req / n_elements != elem_size))
4906 req = MAX_SIZE_T; /* force downstream failure on overflow */
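    /* The (n_elements | elem_size) & ~0xffff test above skips the costly
       division whenever both factors fit in 16 bits, since their product
       then cannot overflow a size_t of at least 32 bits. */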
4908 mem = dlmalloc(req);
4909 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4910 memset(mem, 0, req);
4914 void* dlrealloc(void* oldmem, size_t bytes) {
4916 return dlmalloc(bytes);
4917 #ifdef REALLOC_ZERO_BYTES_FREES
4922 #endif /* REALLOC_ZERO_BYTES_FREES */
4927 mstate m = get_mstate_for(mem2chunk(oldmem));
4929 USAGE_ERROR_ACTION(m, oldmem);
4932 #endif /* FOOTERS */
4933 return internal_realloc(m, oldmem, bytes);
4937 void* dlmemalign(size_t alignment, size_t bytes) {
4938 return internal_memalign(gm, alignment, bytes);
4941 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4943 size_t sz = elem_size; /* serves as 1-element array */
4944 return ialloc(gm, n_elements, &sz, 3, chunks);
4947 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4949 return ialloc(gm, n_elements, sizes, 0, chunks);
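/* Illustrative use of the two routines above (a sketch; error checks omitted):

     struct node { int v; struct node* next; };
     void** pool = dlindependent_calloc(100, sizeof(struct node), 0);
     // pool[0]..pool[99] point to adjacent, zeroed elements; each may be
     // freed individually with dlfree, and because the chunks argument was
     // null the returned pool array must eventually be freed as well.

     size_t sizes[3] = { sizeof(int), 80, 24 };
     void*  chunks[3];
     dlindependent_comalloc(3, sizes, chunks);
     // chunks[] was caller-supplied, so only the three elements are ever
     // passed to dlfree; the array itself lives on the caller's stack.
*/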
4952 void* dlvalloc(size_t bytes) {
4954 ensure_initialization();
4955 pagesz = mparams.page_size;
4956 return dlmemalign(pagesz, bytes);
4959 void* dlpvalloc(size_t bytes) {
4961 ensure_initialization();
4962 pagesz = mparams.page_size;
4963 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
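/* e.g. with 4096-byte pages, dlvalloc(100) returns a page-aligned block
   holding at least 100 usable bytes, while dlpvalloc(100) first rounds the
   request itself up to 4096, so an entire page is usable. */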
4966 int dlmalloc_trim(size_t pad) {
4967 ensure_initialization();
4969 if (!PREACTION(gm)) {
4970 result = sys_trim(gm, pad);
4976 size_t dlmalloc_footprint(void) {
4977 return gm->footprint;
4980 size_t dlmalloc_max_footprint(void) {
4981 return gm->max_footprint;
4985 struct mallinfo dlmallinfo(void) {
4986 return internal_mallinfo(gm);
4988 #endif /* NO_MALLINFO */
4990 void dlmalloc_stats() {
4991 internal_malloc_stats(gm);
4994 int dlmallopt(int param_number, int value) {
4995 return change_mparam(param_number, value);
4998 #endif /* !ONLY_MSPACES */
5000 size_t dlmalloc_usable_size(void* mem) {
5002 mchunkptr p = mem2chunk(mem);
5004 return chunksize(p) - overhead_for(p);
5009 /* ----------------------------- user mspaces ---------------------------- */
5013 static mstate init_user_mstate(char* tbase, size_t tsize) {
5014 size_t msize = pad_request(sizeof(struct malloc_state));
5016 mchunkptr msp = align_as_chunk(tbase);
5017 mstate m = (mstate)(chunk2mem(msp));
5018 memset(m, 0, msize);
5019 INITIAL_LOCK(&m->mutex);
5020 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
5021 m->seg.base = m->least_addr = tbase;
5022 m->seg.size = m->footprint = m->max_footprint = tsize;
5023 m->magic = mparams.magic;
5024 m->release_checks = MAX_RELEASE_CHECK_RATE;
5025 m->mflags = mparams.default_mflags;
5028 disable_contiguous(m);
5030 mn = next_chunk(mem2chunk(m));
5031 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5032 check_top_chunk(m, m->top);
5036 mspace create_mspace(size_t capacity, int locked) {
5039 ensure_initialization();
5040 msize = pad_request(sizeof(struct malloc_state));
5041 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5042 size_t rs = ((capacity == 0)? mparams.granularity :
5043 (capacity + TOP_FOOT_SIZE + msize));
5044 size_t tsize = granularity_align(rs);
5045 char* tbase = (char*)(CALL_MMAP(tsize));
5046 if (tbase != CMFAIL) {
5047 m = init_user_mstate(tbase, tsize);
5048 m->seg.sflags = IS_MMAPPED_BIT;
5049 set_lock(m, locked);
5055 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5058 ensure_initialization();
5059 msize = pad_request(sizeof(struct malloc_state));
5060 if (capacity > msize + TOP_FOOT_SIZE &&
5061 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5062 m = init_user_mstate((char*)base, capacity);
5063 m->seg.sflags = EXTERN_BIT;
5064 set_lock(m, locked);
5069 int mspace_mmap_large_chunks(mspace msp, int enable) {
5071 mstate ms = (mstate)msp;
5072 if (!PREACTION(ms)) {
5084 size_t destroy_mspace(mspace msp) {
5086 mstate ms = (mstate)msp;
5088 msegmentptr sp = &ms->seg;
5090 char* base = sp->base;
5091 size_t size = sp->size;
5092 flag_t flag = sp->sflags;
5094 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
5095 CALL_MUNMAP(base, size) == 0)
5100 USAGE_ERROR_ACTION(ms,ms);
5106 mspace versions of routines are near-clones of the global
5107 versions. This is not so nice but better than the alternatives.
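  An illustrative usage sketch of the mspace routines in this section:

      mspace ms = create_mspace(0, 1);     create with default capacity, locked
      void*  p  = mspace_malloc(ms, 128);
      mspace_free(ms, p);
      destroy_mspace(ms);                  releases all memory the space obtained

  With create_mspace_with_base the caller supplies the initial region
  (tagged EXTERN_BIT above), so destroy_mspace does not unmap it; the caller
  remains responsible for releasing that memory.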
5111 void* mspace_malloc(mspace msp, size_t bytes) {
5112 mstate ms = (mstate)msp;
5113 if (!ok_magic(ms)) {
5114 USAGE_ERROR_ACTION(ms,ms);
5117 if (!PREACTION(ms)) {
5120 if (bytes <= MAX_SMALL_REQUEST) {
5123 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5124 idx = small_index(nb);
5125 smallbits = ms->smallmap >> idx;
5127 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5129 idx += ~smallbits & 1; /* Uses next bin if idx empty */
5130 b = smallbin_at(ms, idx);
5132 assert(chunksize(p) == small_index2size(idx));
5133 unlink_first_small_chunk(ms, b, p, idx);
5134 set_inuse_and_pinuse(ms, p, small_index2size(idx));
5136 check_malloced_chunk(ms, mem, nb);
5140 else if (nb > ms->dvsize) {
5141 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5145 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5146 binmap_t leastbit = least_bit(leftbits);
5147 compute_bit2idx(leastbit, i);
5148 b = smallbin_at(ms, i);
5150 assert(chunksize(p) == small_index2size(i));
5151 unlink_first_small_chunk(ms, b, p, i);
5152 rsize = small_index2size(i) - nb;
5153 /* Fit here cannot be remainderless if 4byte sizes */
5154 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5155 set_inuse_and_pinuse(ms, p, small_index2size(i));
5157 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5158 r = chunk_plus_offset(p, nb);
5159 set_size_and_pinuse_of_free_chunk(r, rsize);
5160 replace_dv(ms, r, rsize);
5163 check_malloced_chunk(ms, mem, nb);
5167 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5168 check_malloced_chunk(ms, mem, nb);
5173 else if (bytes >= MAX_REQUEST)
5174 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5176 nb = pad_request(bytes);
5177 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5178 check_malloced_chunk(ms, mem, nb);
5183 if (nb <= ms->dvsize) {
5184 size_t rsize = ms->dvsize - nb;
5185 mchunkptr p = ms->dv;
5186 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5187 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5189 set_size_and_pinuse_of_free_chunk(r, rsize);
5190 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5192 else { /* exhaust dv */
5193 size_t dvs = ms->dvsize;
5196 set_inuse_and_pinuse(ms, p, dvs);
5199 check_malloced_chunk(ms, mem, nb);
5203 else if (nb < ms->topsize) { /* Split top */
5204 size_t rsize = ms->topsize -= nb;
5205 mchunkptr p = ms->top;
5206 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5207 r->head = rsize | PINUSE_BIT;
5208 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5210 check_top_chunk(ms, ms->top);
5211 check_malloced_chunk(ms, mem, nb);
5215 mem = sys_alloc(ms, nb);
5225 void mspace_free(mspace msp, void* mem) {
5227 mchunkptr p = mem2chunk(mem);
5229 mstate fm = get_mstate_for(p);
5231 mstate fm = (mstate)msp;
5232 #endif /* FOOTERS */
5233 if (!ok_magic(fm)) {
5234 USAGE_ERROR_ACTION(fm, p);
5237 if (!PREACTION(fm)) {
5238 check_inuse_chunk(fm, p);
5239 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
5240 size_t psize = chunksize(p);
5241 mchunkptr next = chunk_plus_offset(p, psize);
5243 size_t prevsize = p->prev_foot;
5244 if ((prevsize & IS_MMAPPED_BIT) != 0) {
5245 prevsize &= ~IS_MMAPPED_BIT;
5246 psize += prevsize + MMAP_FOOT_PAD;
5247 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5248 fm->footprint -= psize;
5252 mchunkptr prev = chunk_minus_offset(p, prevsize);
5255 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5257 unlink_chunk(fm, p, prevsize);
5259 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5261 set_free_with_pinuse(p, psize, next);
5270 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5271 if (!cinuse(next)) { /* consolidate forward */
5272 if (next == fm->top) {
5273 size_t tsize = fm->topsize += psize;
5275 p->head = tsize | PINUSE_BIT;
5280 if (should_trim(fm, tsize))
5284 else if (next == fm->dv) {
5285 size_t dsize = fm->dvsize += psize;
5287 set_size_and_pinuse_of_free_chunk(p, dsize);
5291 size_t nsize = chunksize(next);
5293 unlink_chunk(fm, next, nsize);
5294 set_size_and_pinuse_of_free_chunk(p, psize);
5302 set_free_with_pinuse(p, psize, next);
5304 if (is_small(psize)) {
5305 insert_small_chunk(fm, p, psize);
5306 check_free_chunk(fm, p);
5309 tchunkptr tp = (tchunkptr)p;
5310 insert_large_chunk(fm, tp, psize);
5311 check_free_chunk(fm, p);
5312 if (--fm->release_checks == 0)
5313 release_unused_segments(fm);
5319 USAGE_ERROR_ACTION(fm, p);
5326 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5329 mstate ms = (mstate)msp;
5330 if (!ok_magic(ms)) {
5331 USAGE_ERROR_ACTION(ms,ms);
5334 if (n_elements != 0) {
5335 req = n_elements * elem_size;
5336 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5337 (req / n_elements != elem_size))
5338 req = MAX_SIZE_T; /* force downstream failure on overflow */
5340 mem = internal_malloc(ms, req);
5341 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5342 memset(mem, 0, req);
5346 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5348 return mspace_malloc(msp, bytes);
5349 #ifdef REALLOC_ZERO_BYTES_FREES
5351 mspace_free(msp, oldmem);
5354 #endif /* REALLOC_ZERO_BYTES_FREES */
5357 mchunkptr p = mem2chunk(oldmem);
5358 mstate ms = get_mstate_for(p);
5360 mstate ms = (mstate)msp;
5361 #endif /* FOOTERS */
5362 if (!ok_magic(ms)) {
5363 USAGE_ERROR_ACTION(ms,ms);
5366 return internal_realloc(ms, oldmem, bytes);
5370 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5371 mstate ms = (mstate)msp;
5372 if (!ok_magic(ms)) {
5373 USAGE_ERROR_ACTION(ms,ms);
5376 return internal_memalign(ms, alignment, bytes);
5379 void** mspace_independent_calloc(mspace msp, size_t n_elements,
5380 size_t elem_size, void* chunks[]) {
5381 size_t sz = elem_size; /* serves as 1-element array */
5382 mstate ms = (mstate)msp;
5383 if (!ok_magic(ms)) {
5384 USAGE_ERROR_ACTION(ms,ms);
5387 return ialloc(ms, n_elements, &sz, 3, chunks);
5390 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5391 size_t sizes[], void* chunks[]) {
5392 mstate ms = (mstate)msp;
5393 if (!ok_magic(ms)) {
5394 USAGE_ERROR_ACTION(ms,ms);
5397 return ialloc(ms, n_elements, sizes, 0, chunks);
5400 int mspace_trim(mspace msp, size_t pad) {
5402 mstate ms = (mstate)msp;
5404 if (!PREACTION(ms)) {
5405 result = sys_trim(ms, pad);
5410 USAGE_ERROR_ACTION(ms,ms);
5415 void mspace_malloc_stats(mspace msp) {
5416 mstate ms = (mstate)msp;
5418 internal_malloc_stats(ms);
5421 USAGE_ERROR_ACTION(ms,ms);
5425 size_t mspace_footprint(mspace msp) {
5427 mstate ms = (mstate)msp;
5429 result = ms->footprint;
5432 USAGE_ERROR_ACTION(ms,ms);
5438 size_t mspace_max_footprint(mspace msp) {
5440 mstate ms = (mstate)msp;
5442 result = ms->max_footprint;
5445 USAGE_ERROR_ACTION(ms,ms);
5452 struct mallinfo mspace_mallinfo(mspace msp) {
5453 mstate ms = (mstate)msp;
5454 if (!ok_magic(ms)) {
5455 USAGE_ERROR_ACTION(ms,ms);
5457 return internal_mallinfo(ms);
5459 #endif /* NO_MALLINFO */
5461 size_t mspace_usable_size(void* mem) {
5463 mchunkptr p = mem2chunk(mem);
5465 return chunksize(p) - overhead_for(p);
5470 int mspace_mallopt(int param_number, int value) {
5471 return change_mparam(param_number, value);
5474 #endif /* MSPACES */
5476 /* -------------------- Alternative MORECORE functions ------------------- */
5479 Guidelines for creating a custom version of MORECORE:
5481 * For best performance, MORECORE should allocate in multiples of pagesize.
5482 * MORECORE may allocate more memory than requested. (Or even less,
5483 but this will usually result in a malloc failure.)
5484 * MORECORE must not allocate memory when given argument zero, but
5485 instead return one past the end address of memory from the previous nonzero call.
5487 * For best performance, consecutive calls to MORECORE with positive
5488 arguments should return increasing addresses, indicating that
5489 space has been contiguously extended.
5490 * Even though consecutive calls to MORECORE need not return contiguous
5491 addresses, it must be OK for malloc'ed chunks to span multiple
5492 regions in those cases where they do happen to be contiguous.
5493 * MORECORE need not handle negative arguments -- it may instead
5494 just return MFAIL when given negative arguments.
5495 Negative arguments are always multiples of pagesize. MORECORE
5496 must not misinterpret negative args as large positive unsigned
5497 args. You can suppress all such calls from even occurring by defining
5498 MORECORE_CANNOT_TRIM.
5500 As an example alternative MORECORE, here is a custom allocator
5501 kindly contributed for pre-OSX macOS. It uses virtually but not
5502 necessarily physically contiguous non-paged memory (locked in,
5503 present and won't get swapped out). You can use it by uncommenting
5504 this section, adding some #includes, and setting up the appropriate defines:
5507 #define MORECORE osMoreCore
5509 There is also a shutdown routine that should somehow be called for
5510 cleanup upon program exit.
5512 #define MAX_POOL_ENTRIES 100
5513 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5514 static int next_os_pool;
5515 void *our_os_pools[MAX_POOL_ENTRIES];
5517 void *osMoreCore(int size)
5520 static void *sbrk_top = 0;
5524 if (size < MINIMUM_MORECORE_SIZE)
5525 size = MINIMUM_MORECORE_SIZE;
5526 if (CurrentExecutionLevel() == kTaskLevel)
5527 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5530 return (void *) MFAIL;
5532 // save ptrs so they can be freed during cleanup
5533 our_os_pools[next_os_pool] = ptr;
5535 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5536 sbrk_top = (char *) ptr + size;
5541 // we don't currently support shrink behavior
5542 return (void *) MFAIL;
5550 // cleanup any allocated memory pools
5551 // called as last thing before shutting down driver
5553 void osCleanupMem(void)
5557 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5560 PoolDeallocate(*ptr);
/* -----------------------------------------------------------------------
History:
    V2.8.4 (not yet released)
      * Add mspace_mmap_large_chunks; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc, courtesy of Wolfram Gloger.
      * Reentrant spin locks, courtesy of Earl Chew and others
      * Win32 improvements, courtesy of Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
      * Use trees for large bins
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.
    V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'
    V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu
    V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)

    V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/