Ruby 2.7.2p137 (2020-10-01 revision 5445e0435260b449decf2ac16f9d09bae3cafe72)
dlmalloc.c
1 /*
2  This is a version (aka dlmalloc) of malloc/free/realloc written by
3  Doug Lea and released to the public domain, as explained at
4  http://creativecommons.org/licenses/publicdomain. Send questions,
5  comments, complaints, performance data, etc to dl@cs.oswego.edu
6 
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
8 
9  Note: There may be an updated version of this malloc obtainable at
10  ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11  Check before installing!
12 
13 * Quickstart
14 
15  This library is all in one file to simplify the most common usage:
16  ftp it, compile it (-O3), and link it into another program. All of
17  the compile-time options default to reasonable values for use on
18  most platforms. You might later want to step through various
19  compile-time and dynamic tuning options.
20 
21  For convenience, an include file for code using this malloc is at:
22  ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23  You don't really need this .h file unless you call functions not
24  defined in your system include files. The .h file contains only the
25  excerpts from this file needed for using this malloc on ANSI C/C++
26  systems, so long as you haven't changed compile-time options about
27  naming and tuning parameters. If you do, then you can create your
28  own malloc.h that does include all settings by cutting at the point
29  indicated below. Note that you may already by default be using a C
30  library containing a malloc that is based on some version of this
31  malloc (for example in linux). You might still want to use the one
32  in this file to customize settings or to avoid overheads associated
33  with library versions.
34 
35 * Vital statistics:
36 
37  Supported pointer/size_t representation: 4 or 8 bytes
38  size_t MUST be an unsigned type of the same width as
39  pointers. (If you are using an ancient system that declares
40  size_t as a signed type, or need it to be a different width
41  than pointers, you can use a previous release of this malloc
42  (e.g. 2.7.2) supporting these.)
43 
44  Alignment: 8 bytes (default)
45  This suffices for nearly all current machines and C compilers.
46  However, you can define MALLOC_ALIGNMENT to be wider than this
47  if necessary (up to 128 bytes), at the expense of using more space.
48 
49  Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
50  8 or 16 bytes (if 8-byte sizes)
51  Each malloced chunk has a hidden word of overhead holding size
52  and status information, and additional cross-check word
53  if FOOTERS is defined.
54 
55  Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56  8-byte ptrs: 32 bytes (including overhead)
57 
58  Even a request for zero bytes (i.e., malloc(0)) returns a
59  pointer to something of the minimum allocatable size.
60  The maximum overhead wastage (i.e., the number of extra bytes
61  allocated beyond what was requested in malloc) is less than or equal
62  to the minimum size, except for requests >= mmap_threshold that
63  are serviced via mmap(), where the worst case wastage is about
64  32 bytes plus the remainder from a system page (the minimal
65  mmap unit); typically 4096 or 8192 bytes.
66 
67  Security: static-safe; optionally more or less
68  The "security" of malloc refers to the ability of malicious
69  code to accentuate the effects of errors (for example, freeing
70  space that is not currently malloc'ed or overwriting past the
71  ends of chunks) in code that calls malloc. This malloc
72  guarantees not to modify any memory locations below the base of
73  heap, i.e., static variables, even in the presence of usage
74  errors. The routines additionally detect most improper frees
75  and reallocs. All this holds as long as the static bookkeeping
76  for malloc itself is not corrupted by some other means. This
77  is only one aspect of security -- these checks do not, and
78  cannot, detect all possible programming errors.
79 
80  If FOOTERS is defined nonzero, then each allocated chunk
81  carries an additional check word to verify that it was malloced
82  from its space. These check words are the same within each
83  execution of a program using malloc, but differ across
84  executions, so externally crafted fake chunks cannot be
85  freed. This improves security by rejecting frees/reallocs that
86  could corrupt heap memory, in addition to the checks preventing
87  writes to statics that are always on. This may further improve
88  security at the expense of time and space overhead. (Note that
89  FOOTERS may also be worth using with MSPACES.)
90 
91  By default detected errors cause the program to abort (calling
92  "abort()"). You can override this to instead proceed past
93  errors by defining PROCEED_ON_ERROR. In this case, a bad free
94  has no effect, and a malloc that encounters a bad address
95  caused by user overwrites will ignore the bad address by
96  dropping pointers and indices to all known memory. This may
97  be appropriate for programs that should continue if at all
98  possible in the face of programming errors, although they may
99  run out of memory because dropped memory is never reclaimed.
100 
101  If you don't like either of these options, you can define
102  CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103  else. And if you are sure that your program using malloc has
104  no errors or vulnerabilities, you can define INSECURE to 1,
105  which might (or might not) provide a small performance improvement.
106 
107  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108  When USE_LOCKS is defined, each public call to malloc, free,
109  etc is surrounded with either a pthread mutex or a win32
110  spinlock (depending on WIN32). This is not especially fast, and
111  can be a major bottleneck. It is designed only to provide
112  minimal protection in concurrent environments, and to provide a
113  basis for extensions. If you are using malloc in a concurrent
114  program, consider instead using ptmalloc, which is derived from
115  a version of this malloc. (See http://www.malloc.de).
116 
117  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118  This malloc can use unix sbrk or any emulation (invoked using
119  the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120  (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121  memory. On most unix systems, it tends to work best if both
122  MORECORE and MMAP are enabled. On Win32, it uses emulations
123  based on VirtualAlloc. It also uses common C library functions
124  like memset.
125 
126  Compliance: I believe it is compliant with the Single Unix Specification
127  (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128  others as well.
129 
130 * Overview of algorithms
131 
132  This is not the fastest, most space-conserving, most portable, or
133  most tunable malloc ever written. However it is among the fastest
134  while also being among the most space-conserving, portable and
135  tunable. Consistent balance across these factors results in a good
136  general-purpose allocator for malloc-intensive programs.
137 
138  In most ways, this malloc is a best-fit allocator. Generally, it
139  chooses the best-fitting existing chunk for a request, with ties
140  broken in approximately least-recently-used order. (This strategy
141  normally maintains low fragmentation.) However, for requests less
142  than 256 bytes, it deviates from best-fit when there is not an
143  exactly fitting available chunk by preferring to use space adjacent
144  to that used for the previous small request, as well as by breaking
145  ties in approximately most-recently-used order. (These enhance
146  locality of series of small allocations.) And for very large requests
147  (>= 256Kb by default), it relies on system memory mapping
148  facilities, if supported. (This helps avoid carrying around and
149  possibly fragmenting memory used only for large chunks.)
150 
151  All operations (except malloc_stats and mallinfo) have execution
152  times that are bounded by a constant factor of the number of bits in
153  a size_t, not counting any clearing in calloc or copying in realloc,
154  or actions surrounding MORECORE and MMAP that have times
155  proportional to the number of non-contiguous regions returned by
156  system allocation routines, which is often just 1.
157 
158  The implementation is not very modular and seriously overuses
159  macros. Perhaps someday all C compilers will do as good a job
160  inlining modular code as can now be done by brute-force expansion,
161  but for now, enough of them seem not to.
162 
163  Some compilers issue a lot of warnings about code that is
164  dead/unreachable only on some platforms, and also about intentional
165  uses of negation on unsigned types. All known cases of each can be
166  ignored.
167 
168  For a longer but out of date high-level description, see
169  http://gee.cs.oswego.edu/dl/html/malloc.html
170 
171 * MSPACES
172  If MSPACES is defined, then in addition to malloc, free, etc.,
173  this file also defines mspace_malloc, mspace_free, etc. These
174  are versions of malloc routines that take an "mspace" argument
175  obtained using create_mspace, to control all internal bookkeeping.
176  If ONLY_MSPACES is defined, only these versions are compiled.
177  So if you would like to use this allocator for only some allocations,
178  and your system malloc for others, you can compile with
179  ONLY_MSPACES and then do something like...
180  static mspace mymspace = create_mspace(0,0); // for example
181  #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
182 
183  (Note: If you only need one instance of an mspace, you can instead
184  use "USE_DL_PREFIX" to relabel the global malloc.)
185 
186  You can similarly create thread-local allocators by storing
187  mspaces as thread-locals. For example:
188  static __thread mspace tlms = 0;
189  void* tlmalloc(size_t bytes) {
190    if (tlms == 0) tlms = create_mspace(0, 0);
191    return mspace_malloc(tlms, bytes);
192  }
193  void tlfree(void* mem) { mspace_free(tlms, mem); }
194 
195  Unless FOOTERS is defined, each mspace is completely independent.
196  You cannot allocate from one and free to another (although
197  conformance is only weakly checked, so usage errors are not always
198  caught). If FOOTERS is defined, then each chunk carries around a tag
199  indicating its originating mspace, and frees are directed to their
200  originating spaces.
201 
202  ------------------------- Compile-time options ---------------------------
203 
204 Be careful in setting #define values for numerical constants of type
205 size_t. On some systems, literal values are not automatically extended
206 to size_t precision unless they are explicitly cast.
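
  As an editorial illustration (the value is an example, not a
  recommendation), an override that keeps the arithmetic in size_t
  would read:

    #define DEFAULT_MMAP_THRESHOLD ((size_t)512U * (size_t)1024U)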
207 
208 WIN32 default: defined if _WIN32 defined
209  Defining WIN32 sets up defaults for MS environment and compilers.
210  Otherwise defaults are for unix.
211 
212 MALLOC_ALIGNMENT default: (size_t)8
213  Controls the minimum alignment for malloc'ed chunks. It must be a
214  power of two and at least 8, even on machines for which smaller
215  alignments would suffice. It may be defined as larger than this
216  though. Note however that code and data structures are optimized for
217  the case of 8-byte alignment.
218 
219 MSPACES default: 0 (false)
220  If true, compile in support for independent allocation spaces.
221  This is only supported if HAVE_MMAP is true.
222 
223 ONLY_MSPACES default: 0 (false)
224  If true, only compile in mspace versions, not regular versions.
225 
226 USE_LOCKS default: 0 (false)
227  Causes each call to each public routine to be surrounded with
228  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229  overridden on a per-mspace basis for mspace versions.)
230 
231 FOOTERS default: 0
232  If true, provide extra checking and dispatching by placing
233  information in the footers of allocated chunks. This adds
234  space and time overhead.
235 
236 INSECURE default: 0
237  If true, omit checks for usage errors and heap space overwrites.
238 
239 USE_DL_PREFIX default: NOT defined
240  Causes compiler to prefix all public routines with the string 'dl'.
241  This can be useful when you only want to use this malloc in one part
242  of a program, using your regular system malloc elsewhere.
243 
244 ABORT default: defined as abort()
245  Defines how to abort on failed checks. On most systems, a failed
246  check cannot die with an "assert" or even print an informative
247  message, because the underlying print routines in turn call malloc,
248  which will fail again. Generally, the best policy is to simply call
249  abort(). It's not very useful to do more than this because many
250  errors due to overwriting will show up as address faults (null, odd
251  addresses etc) rather than malloc-triggered checks, so will also
252  abort. Also, most compilers know that abort() does not return, so
253  can better optimize code conditionally calling it.
254 
255 PROCEED_ON_ERROR default: defined as 0 (false)
256  Controls whether detected bad addresses cause them to be bypassed
257  rather than aborting. If set, detected bad arguments to free and
258  realloc are ignored. And all bookkeeping information is zeroed out
259  upon a detected overwrite of freed heap space, thus losing the
260  ability to ever return it from malloc again, but enabling the
261  application to proceed. If PROCEED_ON_ERROR is defined, the
262  static variable malloc_corruption_error_count is compiled in
263  and can be examined to see if errors have occurred. This option
264  generates slower code than the default abort policy.
265 
266 DEBUG default: NOT defined
267  The DEBUG setting is mainly intended for people trying to modify
268  this code or diagnose problems when porting to new platforms.
269  However, it may also be able to better isolate user errors than just
270  using runtime checks. The assertions in the check routines spell
271  out in more detail the assumptions and invariants underlying the
272  algorithms. The checking is fairly extensive, and will slow down
273  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274  set will attempt to check every non-mmapped allocated and free chunk
275  in the course of computing the summaries.
276 
277 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
278  Debugging assertion failures can be nearly impossible if your
279  version of the assert macro causes malloc to be called, which will
280  lead to a cascade of further failures, blowing the runtime stack.
281  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282  which will usually make debugging easier.
283 
284 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
285  The action to take before "return 0" when malloc fails to
286  return memory because none is available.
287 
288 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
289  True if this system supports sbrk or an emulation of it.
290 
291 MORECORE default: sbrk
292  The name of the sbrk-style system routine to call to obtain more
293  memory. See below for guidance on writing custom MORECORE
294  functions. The type of the argument to sbrk/MORECORE varies across
295  systems. It cannot be size_t, because it supports negative
296  arguments, so it is normally the signed type of the same width as
297  size_t (sometimes declared as "intptr_t"). It doesn't much matter
298  though. Internally, we only call it with arguments less than half
299  the max value of a size_t, which should work across all reasonable
300  possibilities, although sometimes generating compiler warnings. See
301  near the end of this file for guidelines for creating a custom
302  version of MORECORE.
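
  A hedged editorial sketch (not this file's own guidelines) of the
  simplest possible custom MORECORE: carving space from a fixed static
  arena and refusing to shrink. The names my_arena and my_morecore and
  the 1MB size are illustrative only.

    #include <stddef.h>                 // for ptrdiff_t and size_t
    static char my_arena[1 << 20];      // illustrative backing store
    static size_t my_brk = 0;           // bytes handed out so far

    void* my_morecore(ptrdiff_t increment) {
      void* old = my_arena + my_brk;
      if (increment < 0 ||
          (size_t)increment > sizeof(my_arena) - my_brk)
        return (void*)(-1);             // sbrk-style failure value (MFAIL)
      my_brk += (size_t)increment;
      return old;                       // consecutive calls return contiguous space
    }
    // compile with e.g.
    //   -DMORECORE=my_morecore -DMORECORE_CANNOT_TRIM -DHAVE_MMAP=0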
303 
304 MORECORE_CONTIGUOUS default: 1 (true)
305  If true, take advantage of fact that consecutive calls to MORECORE
306  with positive arguments always return contiguous increasing
307  addresses. This is true of unix sbrk. It does not hurt too much to
308  set it true anyway, since malloc copes with non-contiguities.
309  Setting it false when definitely non-contiguous saves time
310  and possibly wasted space it would take to discover this though.
311 
312 MORECORE_CANNOT_TRIM default: NOT defined
313  True if MORECORE cannot release space back to the system when given
314  negative arguments. This is generally necessary only if you are
315  using a hand-crafted MORECORE function that cannot handle negative
316  arguments.
317 
318 HAVE_MMAP default: 1 (true)
319  True if this system supports mmap or an emulation of it. If so, and
320  HAVE_MORECORE is not true, MMAP is used for all system
321  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322  primarily used to directly allocate very large blocks. It is also
323  used as a backup strategy in cases where MORECORE fails to provide
324  space from system. Note: A single call to MUNMAP is assumed to be
325  able to unmap memory that may have been allocated using multiple calls
326  to MMAP, so long as they are adjacent.
327 
328 HAVE_MREMAP default: 1 on linux, else 0
329  If true, realloc() uses mremap() to re-allocate large blocks and
330  extend or shrink allocation spaces.
331 
332 MMAP_CLEARS default: 1 on unix
333  True if mmap clears memory so calloc doesn't need to. This is true
334  for standard unix mmap using /dev/zero.
335 
336 USE_BUILTIN_FFS default: 0 (i.e., not used)
337  Causes malloc to use the builtin ffs() function to compute indices.
338  Some compilers may recognize and intrinsify ffs to be faster than the
339  supplied C version. Also, the case of x86 using gcc is special-cased
340  to an asm instruction, so is already as fast as it can be, and so
341  this setting has no effect. (On most x86s, the asm version is only
342  slightly faster than the C version.)
343 
344 malloc_getpagesize default: derive from system includes, or 4096.
345  The system page size. To the extent possible, this malloc manages
346  memory from the system in page-size units. This may be (and
347  usually is) a function rather than a constant. This is ignored
348  if WIN32, where page size is determined using getSystemInfo during
349  initialization.
350 
351 USE_DEV_RANDOM default: 0 (i.e., not used)
352  Causes malloc to use /dev/random to initialize secure magic seed for
353  stamping footers. Otherwise, the current time is used.
354 
355 NO_MALLINFO default: 0
356  If defined, don't compile "mallinfo". This can be a simple way
357  of dealing with mismatches between system declarations and
358  those in this file.
359 
360 MALLINFO_FIELD_TYPE default: size_t
361  The type of the fields in the mallinfo struct. This was originally
362  defined as "int" in SVID etc, but is more usefully defined as
363  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364 
365 REALLOC_ZERO_BYTES_FREES default: not defined
366  This should be set if a call to realloc with zero bytes should
367  be the same as a call to free. Some people think it should. Otherwise,
368  since this malloc returns a unique pointer for malloc(0),
369  realloc(p, 0) does so as well.
370 
371 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
373 LACKS_STDLIB_H default: NOT defined unless on WIN32
374  Define these if your system does not have these header files.
375  You might need to manually insert some of the declarations they provide.
376 
377 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
378  system_info.dwAllocationGranularity in WIN32,
379  otherwise 64K.
380  Also settable using mallopt(M_GRANULARITY, x)
381  The unit for allocating and deallocating memory from the system. On
382  most systems with contiguous MORECORE, there is no reason to
383  make this more than a page. However, systems with MMAP tend to
384  either require or encourage larger granularities. You can increase
385  this value to prevent system allocation functions from being called so
386  often, especially if they are slow. The value must be at least one
387  page and must be a power of two. Setting to 0 causes initialization
388  to either page size or win32 region size. (Note: In previous
389  versions of malloc, the equivalent of this option was called
390  "TOP_PAD")
391 
392 DEFAULT_TRIM_THRESHOLD default: 2MB
393  Also settable using mallopt(M_TRIM_THRESHOLD, x)
394  The maximum amount of unused top-most memory to keep before
395  releasing via malloc_trim in free(). Automatic trimming is mainly
396  useful in long-lived programs using contiguous MORECORE. Because
397  trimming via sbrk can be slow on some systems, and can sometimes be
398  wasteful (in cases where programs immediately afterward allocate
399  more large chunks) the value should be high enough so that your
400  overall system performance would improve by releasing this much
401  memory. As a rough guide, you might set to a value close to the
402  average size of a process (program) running on your system.
403  Releasing this much memory would allow such a process to run in
404  memory. Generally, it is worth tuning trim thresholds when a
405  program undergoes phases where several large chunks are allocated
406  and released in ways that can reuse each other's storage, perhaps
407  mixed with phases where there are no such chunks at all. The trim
408  value must be greater than page size to have any useful effect. To
409  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410  some people use of mallocing a huge space and then freeing it at
411  program startup, in an attempt to reserve system memory, doesn't
412  have the intended effect under automatic trimming, since that memory
413  will immediately be returned to the system.
414 
415 DEFAULT_MMAP_THRESHOLD default: 256K
416  Also settable using mallopt(M_MMAP_THRESHOLD, x)
417  The request size threshold for using MMAP to directly service a
418  request. Requests of at least this size that cannot be allocated
419  using already-existing space will be serviced via mmap. (If enough
420  normal freed space already exists it is used instead.) Using mmap
421  segregates relatively large chunks of memory so that they can be
422  individually obtained and released from the host system. A request
423  serviced through mmap is never reused by any other request (at least
424  not directly; the system may just so happen to remap successive
425  requests to the same locations). Segregating space in this way has
426  the benefits that: Mmapped space can always be individually released
427  back to the system, which helps keep the system level memory demands
428  of a long-lived program low. Also, mapped memory doesn't become
429  `locked' between other chunks, as can happen with normally allocated
430  chunks, which means that even trimming via malloc_trim would not
431  release them. However, it has the disadvantage that the space
432  cannot be reclaimed, consolidated, and then used to service later
433  requests, as happens with normal chunks. The advantages of mmap
434  nearly always outweigh disadvantages for "large" chunks, but the
435  value of "large" may vary across systems. The default is an
436  empirically derived value that works well in most systems. You can
437  disable mmap by setting to MAX_SIZE_T.
438 
439 */
440 
441 #ifndef WIN32
442 #ifdef _WIN32
443 #define WIN32 1
444 #endif /* _WIN32 */
445 #endif /* WIN32 */
446 #ifdef WIN32
447 #define WIN32_LEAN_AND_MEAN
448 #include <windows.h>
449 #define HAVE_MMAP 1
450 #define HAVE_MORECORE 0
451 #define LACKS_UNISTD_H
452 #define LACKS_SYS_PARAM_H
453 #define LACKS_SYS_MMAN_H
454 #define LACKS_STRING_H
455 #define LACKS_STRINGS_H
456 #define LACKS_SYS_TYPES_H
457 #define LACKS_ERRNO_H
458 #define MALLOC_FAILURE_ACTION
459 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
460 #endif /* WIN32 */
461 
462 #ifdef __OS2__
463 #define INCL_DOS
464 #include <os2.h>
465 #define HAVE_MMAP 1
466 #define HAVE_MORECORE 0
467 #define LACKS_SYS_MMAN_H
468 #endif /* __OS2__ */
469 
470 #if defined(DARWIN) || defined(_DARWIN)
471 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
472 #ifndef HAVE_MORECORE
473 #define HAVE_MORECORE 0
474 #define HAVE_MMAP 1
475 #endif /* HAVE_MORECORE */
476 #endif /* DARWIN */
477 
478 #ifndef LACKS_SYS_TYPES_H
479 #include <sys/types.h> /* For size_t */
480 #endif /* LACKS_SYS_TYPES_H */
481 
482 /* The maximum possible size_t value has all bits set */
483 #define MAX_SIZE_T (~(size_t)0)
484 
485 #ifndef ONLY_MSPACES
486 #define ONLY_MSPACES 0
487 #endif /* ONLY_MSPACES */
488 #ifndef MSPACES
489 #if ONLY_MSPACES
490 #define MSPACES 1
491 #else /* ONLY_MSPACES */
492 #define MSPACES 0
493 #endif /* ONLY_MSPACES */
494 #endif /* MSPACES */
495 #ifndef MALLOC_ALIGNMENT
496 #define MALLOC_ALIGNMENT ((size_t)8U)
497 #endif /* MALLOC_ALIGNMENT */
498 #ifndef FOOTERS
499 #define FOOTERS 0
500 #endif /* FOOTERS */
501 #ifndef ABORT
502 #define ABORT abort()
503 #endif /* ABORT */
504 #ifndef ABORT_ON_ASSERT_FAILURE
505 #define ABORT_ON_ASSERT_FAILURE 1
506 #endif /* ABORT_ON_ASSERT_FAILURE */
507 #ifndef PROCEED_ON_ERROR
508 #define PROCEED_ON_ERROR 0
509 #endif /* PROCEED_ON_ERROR */
510 #ifndef USE_LOCKS
511 #define USE_LOCKS 0
512 #endif /* USE_LOCKS */
513 #ifndef INSECURE
514 #define INSECURE 0
515 #endif /* INSECURE */
516 #ifndef HAVE_MMAP
517 #define HAVE_MMAP 1
518 #endif /* HAVE_MMAP */
519 #ifndef MMAP_CLEARS
520 #define MMAP_CLEARS 1
521 #endif /* MMAP_CLEARS */
522 #ifndef HAVE_MREMAP
523 #ifdef linux
524 #define HAVE_MREMAP 1
525 #else /* linux */
526 #define HAVE_MREMAP 0
527 #endif /* linux */
528 #endif /* HAVE_MREMAP */
529 #ifndef MALLOC_FAILURE_ACTION
530 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
531 #endif /* MALLOC_FAILURE_ACTION */
532 #ifndef HAVE_MORECORE
533 #if ONLY_MSPACES
534 #define HAVE_MORECORE 0
535 #else /* ONLY_MSPACES */
536 #define HAVE_MORECORE 1
537 #endif /* ONLY_MSPACES */
538 #endif /* HAVE_MORECORE */
539 #if !HAVE_MORECORE
540 #define MORECORE_CONTIGUOUS 0
541 #else /* !HAVE_MORECORE */
542 #ifndef MORECORE
543 #define MORECORE sbrk
544 #endif /* MORECORE */
545 #ifndef MORECORE_CONTIGUOUS
546 #define MORECORE_CONTIGUOUS 1
547 #endif /* MORECORE_CONTIGUOUS */
548 #endif /* HAVE_MORECORE */
549 #ifndef DEFAULT_GRANULARITY
550 #if MORECORE_CONTIGUOUS
551 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
552 #else /* MORECORE_CONTIGUOUS */
553 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
554 #endif /* MORECORE_CONTIGUOUS */
555 #endif /* DEFAULT_GRANULARITY */
556 #ifndef DEFAULT_TRIM_THRESHOLD
557 #ifndef MORECORE_CANNOT_TRIM
558 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
559 #else /* MORECORE_CANNOT_TRIM */
560 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
561 #endif /* MORECORE_CANNOT_TRIM */
562 #endif /* DEFAULT_TRIM_THRESHOLD */
563 #ifndef DEFAULT_MMAP_THRESHOLD
564 #if HAVE_MMAP
565 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
566 #else /* HAVE_MMAP */
567 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
568 #endif /* HAVE_MMAP */
569 #endif /* DEFAULT_MMAP_THRESHOLD */
570 #ifndef USE_BUILTIN_FFS
571 #define USE_BUILTIN_FFS 0
572 #endif /* USE_BUILTIN_FFS */
573 #ifndef USE_DEV_RANDOM
574 #define USE_DEV_RANDOM 0
575 #endif /* USE_DEV_RANDOM */
576 #ifndef NO_MALLINFO
577 #define NO_MALLINFO 0
578 #endif /* NO_MALLINFO */
579 #ifndef MALLINFO_FIELD_TYPE
580 #define MALLINFO_FIELD_TYPE size_t
581 #endif /* MALLINFO_FIELD_TYPE */
582 
583 /*
584  mallopt tuning options. SVID/XPG defines four standard parameter
585  numbers for mallopt, normally defined in malloc.h. None of these
586  are used in this malloc, so setting them has no effect. But this
587  malloc does support the following options.
588 */
589 
590 #define M_TRIM_THRESHOLD (-1)
591 #define M_GRANULARITY (-2)
592 #define M_MMAP_THRESHOLD (-3)
593 
594 /* ------------------------ Mallinfo declarations ------------------------ */
595 
596 #if !NO_MALLINFO
597 /*
598  This version of malloc supports the standard SVID/XPG mallinfo
599  routine that returns a struct containing usage properties and
600  statistics. It should work on any system that has a
601  /usr/include/malloc.h defining struct mallinfo. The main
602  declaration needed is the mallinfo struct that is returned (by-copy)
603  by mallinfo(). The mallinfo struct contains a bunch of fields that
604  are not even meaningful in this version of malloc. These fields
605  are instead filled by mallinfo() with other numbers that might be of
606  interest.
607 
608  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
609  /usr/include/malloc.h file that includes a declaration of struct
610  mallinfo. If so, it is included; else a compliant version is
611  declared below. These must be precisely the same for mallinfo() to
612  work. The original SVID version of this struct, defined on most
613  systems with mallinfo, declares all fields as ints. But some others
614  define them as unsigned long. If your system defines the fields using a
615  type of different width than listed here, you MUST #include your
616  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
617 */
618 
619 /* #define HAVE_USR_INCLUDE_MALLOC_H */
620 
621 #ifdef HAVE_USR_INCLUDE_MALLOC_H
622 #include "/usr/include/malloc.h"
623 #else /* HAVE_USR_INCLUDE_MALLOC_H */
624 
625 /* HP-UX's stdlib.h redefines mallinfo unless _STRUCT_MALLINFO is defined */
626 #define _STRUCT_MALLINFO
627 
628 struct mallinfo {
629   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
630   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
631   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
632   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
633   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
634   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
635   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
636   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
637   MALLINFO_FIELD_TYPE fordblks; /* total free space */
638   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
639 };
640 
641 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
642 #endif /* NO_MALLINFO */
643 
644 #ifdef __cplusplus
645 extern "C" {
646 #endif /* __cplusplus */
647 
648 #if !ONLY_MSPACES
649 
650 /* ------------------- Declarations of public routines ------------------- */
651 
652 #ifndef USE_DL_PREFIX
653 #define dlcalloc calloc
654 #define dlfree free
655 #define dlmalloc malloc
656 #define dlmemalign memalign
657 #define dlrealloc realloc
658 #define dlvalloc valloc
659 #define dlpvalloc pvalloc
660 #define dlmallinfo mallinfo
661 #define dlmallopt mallopt
662 #define dlmalloc_trim malloc_trim
663 #define dlmalloc_stats malloc_stats
664 #define dlmalloc_usable_size malloc_usable_size
665 #define dlmalloc_footprint malloc_footprint
666 #define dlmalloc_max_footprint malloc_max_footprint
667 #define dlindependent_calloc independent_calloc
668 #define dlindependent_comalloc independent_comalloc
669 #endif /* USE_DL_PREFIX */
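
/*
  A hedged usage sketch (not part of the original sources): when compiled
  with -DUSE_DL_PREFIX, the renaming above is skipped, so the dl-prefixed
  routines coexist with the system allocator. The prototypes would normally
  come from the malloc-2.8.3.h header mentioned at the top of this file.

    #include <stdlib.h>                // system malloc/free
    void* dlmalloc(size_t);            // provided by this file
    void  dlfree(void*);

    void demo(void) {
      void* a = malloc(32);            // system heap
      void* b = dlmalloc(32);          // dlmalloc-managed heap
      dlfree(b);
      free(a);
    }
*/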
670 
671 
672 /*
673  malloc(size_t n)
674  Returns a pointer to a newly allocated chunk of at least n bytes, or
675  null if no space is available, in which case errno is set to ENOMEM
676  on ANSI C systems.
677 
678  If n is zero, malloc returns a minimum-sized chunk. (The minimum
679  size is 16 bytes on most 32-bit systems, and 32 bytes on 64-bit
680  systems.) Note that size_t is an unsigned type, so calls with
681  arguments that would be negative if signed are interpreted as
682  requests for huge amounts of space, which will often fail. The
683  maximum supported value of n differs across systems, but is in all
684  cases less than the maximum representable value of a size_t.
685 */
686 void* dlmalloc(size_t);
687 
688 /*
689  free(void* p)
690  Releases the chunk of memory pointed to by p, that had been previously
691  allocated using malloc or a related routine such as realloc.
692  It has no effect if p is null. If p was not malloced or already
693  freed, free(p) will by default cause the current program to abort.
694 */
695 void dlfree(void*);
696 
697 /*
698  calloc(size_t n_elements, size_t element_size);
699  Returns a pointer to n_elements * element_size bytes, with all locations
700  set to zero.
701 */
702 void* dlcalloc(size_t, size_t);
703 
704 /*
705  realloc(void* p, size_t n)
706  Returns a pointer to a chunk of size n that contains the same data
707  as does chunk p up to the minimum of (n, p's size) bytes, or null
708  if no space is available.
709 
710  The returned pointer may or may not be the same as p. The algorithm
711  prefers extending p in most cases when possible, otherwise it
712  employs the equivalent of a malloc-copy-free sequence.
713 
714  If p is null, realloc is equivalent to malloc.
715 
716  If space is not available, realloc returns null, errno is set (if on
717  ANSI) and p is NOT freed.
718 
719  If n is for fewer bytes than already held by p, the newly unused
720  space is lopped off and freed if possible. realloc with a size
721  argument of zero (re)allocates a minimum-sized chunk.
722 
723  The old unix realloc convention of allowing the last-free'd chunk
724  to be used as an argument to realloc is not supported.
725 */
726 
727 void* dlrealloc(void*, size_t);
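
/*
  A minimal sketch (not from the original sources) of the safe-realloc
  pattern implied by the note above: because p is NOT freed when realloc
  fails, keep the old pointer until the new one is known to be valid. The
  helper name grow_buffer is illustrative.

    #include <stdlib.h>
    int grow_buffer(char** buf, size_t newsize) {
      char* tmp = realloc(*buf, newsize);   // or dlrealloc if USE_DL_PREFIX
      if (tmp == 0)
        return -1;                          // *buf is still valid (and still owned)
      *buf = tmp;
      return 0;
    }
*/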
728 
729 /*
730  memalign(size_t alignment, size_t n);
731  Returns a pointer to a newly allocated chunk of n bytes, aligned
732  in accord with the alignment argument.
733 
734  The alignment argument should be a power of two. If the argument is
735  not a power of two, the nearest greater power is used.
736  8-byte alignment is guaranteed by normal malloc calls, so don't
737  bother calling memalign with an argument of 8 or less.
738 
739  Overreliance on memalign is a sure way to fragment space.
740 */
741 void* dlmemalign(size_t, size_t);
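
/*
  Illustrative sketch (not from the original sources): requesting a chunk
  aligned more strictly than the default 8 bytes, e.g. to a 64-byte cache
  line; 64 is a power of two, as required above. Assumes the memalign
  prototype is visible (for example via this file's companion header).

    #include <assert.h>
    #include <stdlib.h>
    void aligned_demo(void) {
      void* p = memalign(64, 1000);         // or dlmemalign(64, 1000)
      if (p != 0) {
        assert(((size_t)p & 63) == 0);      // start address is 64-byte aligned
        free(p);
      }
    }
*/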
742 
743 /*
744  valloc(size_t n);
745  Equivalent to memalign(pagesize, n), where pagesize is the page
746  size of the system. If the pagesize is unknown, 4096 is used.
747 */
748 void* dlvalloc(size_t);
749 
750 /*
751  mallopt(int parameter_number, int parameter_value)
752  Sets tunable parameters. The format is to provide a
753  (parameter-number, parameter-value) pair. mallopt then sets the
754  corresponding parameter to the argument value if it can (i.e., so
755  long as the value is meaningful), and returns 1 if successful else
756  0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
757  normally defined in malloc.h. None of these are used in this malloc,
758  so setting them has no effect. But this malloc also supports other
759  options in mallopt. See below for details. Briefly, supported
760  parameters are as follows (listed defaults are for "typical"
761  configurations).
762 
763  Symbol             param #  default          allowed param values
764  M_TRIM_THRESHOLD     -1     2*1024*1024      any   (MAX_SIZE_T disables)
765  M_GRANULARITY        -2     page size        any power of 2 >= page size
766  M_MMAP_THRESHOLD     -3     256*1024         any   (or 0 if no MMAP support)
767 */
768 int dlmallopt(int, int);
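
/*
  Illustrative tuning sketch (the values are examples only, not
  recommendations); each call returns 1 on success, 0 otherwise:

    mallopt(M_TRIM_THRESHOLD, 1024*1024);   // trim when >= 1MB is unused at top
    mallopt(M_MMAP_THRESHOLD, 512*1024);    // mmap requests of 512K and larger
    mallopt(M_GRANULARITY,    128*1024);    // 128K system allocation unit
*/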
769 
770 /*
771  malloc_footprint();
772  Returns the number of bytes obtained from the system. The total
773  number of bytes allocated by malloc, realloc etc., is less than this
774  value. Unlike mallinfo, this function returns only a precomputed
775  result, so can be called frequently to monitor memory consumption.
776  Even if locks are otherwise defined, this function does not use them,
777  so results might not be up to date.
778 */
779 size_t dlmalloc_footprint(void);
780 
781 /*
782  malloc_max_footprint();
783  Returns the maximum number of bytes obtained from the system. This
784  value will be greater than current footprint if deallocated space
785  has been reclaimed by the system. The peak number of bytes allocated
786  by malloc, realloc etc., is less than this value. Unlike mallinfo,
787  this function returns only a precomputed result, so can be called
788  frequently to monitor memory consumption. Even if locks are
789  otherwise defined, this function does not use them, so results might
790  not be up to date.
791 */
792 size_t dlmalloc_max_footprint(void);
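
/*
  Illustrative monitoring sketch (not from the original sources); the
  function name log_footprint is an assumption for the example:

    #include <stdio.h>
    void log_footprint(void) {
      fprintf(stderr, "heap: %lu bytes from system (peak %lu)\n",
              (unsigned long)malloc_footprint(),       // or dlmalloc_footprint()
              (unsigned long)malloc_max_footprint());
    }
*/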
793 
794 #if !NO_MALLINFO
795 /*
796  mallinfo()
797  Returns (by copy) a struct containing various summary statistics:
798 
799  arena: current total non-mmapped bytes allocated from system
800  ordblks: the number of free chunks
801  smblks: always zero.
802  hblks: current number of mmapped regions
803  hblkhd: total bytes held in mmapped regions
804  usmblks: the maximum total allocated space. This will be greater
805  than current total if trimming has occurred.
806  fsmblks: always zero
807  uordblks: current total allocated space (normal or mmapped)
808  fordblks: total free space
809  keepcost: the maximum number of bytes that could ideally be released
810  back to system via malloc_trim. ("ideally" means that
811  it ignores page restrictions etc.)
812 
813  Because these fields are ints, but internal bookkeeping may
814  be kept as longs, the reported values may wrap around zero and
815  thus be inaccurate.
816 */
817 struct mallinfo dlmallinfo(void);
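
/*
  Illustrative sketch (not from the original sources) of reading the summary
  statistics described above:

    #include <stdio.h>
    void report_heap(void) {
      struct mallinfo mi = mallinfo();      // or dlmallinfo()
      fprintf(stderr, "allocated: %lu\n", (unsigned long)mi.uordblks);
      fprintf(stderr, "free:      %lu\n", (unsigned long)mi.fordblks);
      fprintf(stderr, "mmapped:   %lu\n", (unsigned long)mi.hblkhd);
    }
*/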
818 #endif /* NO_MALLINFO */
819 
820 /*
821  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
822 
823  independent_calloc is similar to calloc, but instead of returning a
824  single cleared space, it returns an array of pointers to n_elements
825  independent elements that can hold contents of size elem_size, each
826  of which starts out cleared, and can be independently freed,
827  realloc'ed etc. The elements are guaranteed to be adjacently
828  allocated (this is not guaranteed to occur with multiple callocs or
829  mallocs), which may also improve cache locality in some
830  applications.
831 
832  The "chunks" argument is optional (i.e., may be null, which is
833  probably the most typical usage). If it is null, the returned array
834  is itself dynamically allocated and should also be freed when it is
835  no longer needed. Otherwise, the chunks array must be of at least
836  n_elements in length. It is filled in with the pointers to the
837  chunks.
838 
839  In either case, independent_calloc returns this pointer array, or
840  null if the allocation failed. If n_elements is zero and "chunks"
841  is null, it returns a chunk representing an array with zero elements
842  (which should be freed if not wanted).
843 
844  Each element must be individually freed when it is no longer
845  needed. If you'd like to instead be able to free all at once, you
846  should instead use regular calloc and assign pointers into this
847  space to represent elements. (In this case though, you cannot
848  independently free elements.)
849 
850  independent_calloc simplifies and speeds up implementations of many
851  kinds of pools. It may also be useful when constructing large data
852  structures that initially have a fixed number of fixed-sized nodes,
853  but the number is not known at compile time, and some of the nodes
854  may later need to be freed. For example:
855 
856  struct Node { int item; struct Node* next; };
857 
858  struct Node* build_list() {
859    struct Node** pool;
860    int i, n = read_number_of_nodes_needed();
861    if (n <= 0) return 0;
862    pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
863    if (pool == 0) die();
864    // organize into a linked list...
865    struct Node* first = pool[0];
866    for (i = 0; i < n-1; ++i)
867      pool[i]->next = pool[i+1];
868    free(pool); // Can now free the array (or not, if it is needed later)
869    return first;
870  }
871 */
872 void** dlindependent_calloc(size_t, size_t, void**);
873 
874 /*
875  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
876 
877  independent_comalloc allocates, all at once, a set of n_elements
878  chunks with sizes indicated in the "sizes" array. It returns
879  an array of pointers to these elements, each of which can be
880  independently freed, realloc'ed etc. The elements are guaranteed to
881  be adjacently allocated (this is not guaranteed to occur with
882  multiple callocs or mallocs), which may also improve cache locality
883  in some applications.
884 
885  The "chunks" argument is optional (i.e., may be null). If it is null
886  the returned array is itself dynamically allocated and should also
887  be freed when it is no longer needed. Otherwise, the chunks array
888  must be of at least n_elements in length. It is filled in with the
889  pointers to the chunks.
890 
891  In either case, independent_comalloc returns this pointer array, or
892  null if the allocation failed. If n_elements is zero and chunks is
893  null, it returns a chunk representing an array with zero elements
894  (which should be freed if not wanted).
895 
896  Each element must be individually freed when it is no longer
897  needed. If you'd like to instead be able to free all at once, you
898  should instead use a single regular malloc, and assign pointers at
899  particular offsets in the aggregate space. (In this case though, you
900  cannot independently free elements.)
901 
902  independent_comalloc differs from independent_calloc in that each
903  element may have a different size, and also that it does not
904  automatically clear elements.
905 
906  independent_comalloc can be used to speed up allocation in cases
907  where several structs or objects must always be allocated at the
908  same time. For example:
909 
910  struct Head { ... };
911  struct Foot { ... };
912 
913  void send_message(char* msg) {
914    int msglen = strlen(msg);
915    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
916    void* chunks[3];
917    if (independent_comalloc(3, sizes, chunks) == 0)
918      die();
919    struct Head* head = (struct Head*)(chunks[0]);
920    char* body = (char*)(chunks[1]);
921    struct Foot* foot = (struct Foot*)(chunks[2]);
922    // ...
923  }
924 
925  In general though, independent_comalloc is worth using only for
926  larger values of n_elements. For small values, you probably won't
927  detect enough difference from a series of malloc calls to bother.
928 
929  Overuse of independent_comalloc can increase overall memory usage,
930  since it cannot reuse existing noncontiguous small chunks that
931  might be available for some of the elements.
932 */
933 void** dlindependent_comalloc(size_t, size_t*, void**);
934 
935 
936 /*
937  pvalloc(size_t n);
938  Equivalent to valloc(minimum-page-that-holds(n)), that is,
939  rounding n up to the nearest pagesize.
940  */
941 void* dlpvalloc(size_t);
942 
943 /*
944  malloc_trim(size_t pad);
945 
946  If possible, gives memory back to the system (via negative arguments
947  to sbrk) if there is unused memory at the `high' end of the malloc
948  pool or in unused MMAP segments. You can call this after freeing
949  large blocks of memory to potentially reduce the system-level memory
950  requirements of a program. However, it cannot guarantee to reduce
951  memory. Under some allocation patterns, some large free blocks of
952  memory will be locked between two used chunks, so they cannot be
953  given back to the system.
954 
955  The `pad' argument to malloc_trim represents the amount of free
956  trailing space to leave untrimmed. If this argument is zero, only
957  the minimum amount of memory to maintain internal data structures
958  will be left. Non-zero arguments can be supplied to maintain enough
959  trailing space to service future expected allocations without having
960  to re-obtain memory from the system.
961 
962  Malloc_trim returns 1 if it actually released any memory, else 0.
963 */
964 int dlmalloc_trim(size_t);
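
/*
  Illustrative sketch (not from the original sources): after freeing a large
  working set, hand unused top-of-heap space back to the system, keeping a
  64K pad (an arbitrary example value) for near-term reuse.

    #include <stdlib.h>
    void release_phase(void* big_working_set) {
      free(big_working_set);                // give back a large prior allocation
      (void)malloc_trim(64 * 1024);         // or dlmalloc_trim(); returns 1 if memory was released
    }
*/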
965 
966 /*
967  malloc_usable_size(void* p);
968 
969  Returns the number of bytes you can actually use in
970  an allocated chunk, which may be more than you requested (although
971  often not) due to alignment and minimum size constraints.
972  You can use this many bytes without worrying about
973  overwriting other allocated objects. This is not a particularly great
974  programming practice. malloc_usable_size can be more useful in
975  debugging and assertions, for example:
976 
977  p = malloc(n);
978  assert(malloc_usable_size(p) >= 256);
979 */
980 size_t dlmalloc_usable_size(void*);
981 
982 /*
983  malloc_stats();
984  Prints on stderr the amount of space obtained from the system (both
985  via sbrk and mmap), the maximum amount (which may be more than
986  current if malloc_trim and/or munmap got called), and the current
987  number of bytes allocated via malloc (or realloc, etc) but not yet
988  freed. Note that this is the number of bytes allocated, not the
989  number requested. It will be larger than the number requested
990  because of alignment and bookkeeping overhead. Because it includes
991  alignment wastage as being in use, this figure may be greater than
992  zero even when no user-level chunks are allocated.
993 
994  The reported current and maximum system memory can be inaccurate if
995  a program makes other calls to system memory allocation functions
996  (normally sbrk) outside of malloc.
997 
998  malloc_stats prints only the most commonly interesting statistics.
999  More information can be obtained by calling mallinfo.
1000 */
1001 void dlmalloc_stats(void);
1002 
1003 #endif /* ONLY_MSPACES */
1004 
1005 #if MSPACES
1006 
1007 /*
1008  mspace is an opaque type representing an independent
1009  region of space that supports mspace_malloc, etc.
1010 */
1011 typedef void* mspace;
1012 
1013 /*
1014  create_mspace creates and returns a new independent space with the
1015  given initial capacity, or, if 0, the default granularity size. It
1016  returns null if there is no system memory available to create the
1017  space. If argument locked is non-zero, the space uses a separate
1018  lock to control access. The capacity of the space will grow
1019  dynamically as needed to service mspace_malloc requests. You can
1020  control the sizes of incremental increases of this space by
1021  compiling with a different DEFAULT_GRANULARITY or dynamically
1022  setting with mallopt(M_GRANULARITY, value).
1023 */
1024 mspace create_mspace(size_t capacity, int locked);
1025 
1026 /*
1027  destroy_mspace destroys the given space, and attempts to return all
1028  of its memory back to the system, returning the total number of
1029  bytes freed. After destruction, the results of access to all memory
1030  used by the space become undefined.
1031 */
1032 size_t destroy_mspace(mspace msp);
1033 
1034 /*
1035  create_mspace_with_base uses the memory supplied as the initial base
1036  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1037  space is used for bookkeeping, so the capacity must be at least this
1038  large. (Otherwise 0 is returned.) When this initial space is
1039  exhausted, additional memory will be obtained from the system.
1040  Destroying this space will deallocate all additionally allocated
1041  space (if possible) but not the initial base.
1042 */
1043 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
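
/*
  Illustrative sketch (not from the original sources): running a private
  allocator inside a caller-supplied buffer. The 64K size is an example and
  must exceed the bookkeeping overhead noted above.

    static char arena[64 * 1024];
    void arena_demo(void) {
      mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
      if (ms != 0) {
        void* p = mspace_malloc(ms, 128);
        mspace_free(ms, p);
        destroy_mspace(ms);                 // releases added system memory, not 'arena'
      }
    }
*/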
1044 
1045 /*
1046  mspace_malloc behaves as malloc, but operates within
1047  the given space.
1048 */
1049 void* mspace_malloc(mspace msp, size_t bytes);
1050 
1051 /*
1052  mspace_free behaves as free, but operates within
1053  the given space.
1054 
1055  If compiled with FOOTERS==1, mspace_free is not actually needed.
1056  free may be called instead of mspace_free because freed chunks from
1057  any space are handled by their originating spaces.
1058 */
1059 void mspace_free(mspace msp, void* mem);
1060 
1061 /*
1062  mspace_realloc behaves as realloc, but operates within
1063  the given space.
1064 
1065  If compiled with FOOTERS==1, mspace_realloc is not actually
1066  needed. realloc may be called instead of mspace_realloc because
1067  realloced chunks from any space are handled by their originating
1068  spaces.
1069 */
1070 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1071 
1072 /*
1073  mspace_calloc behaves as calloc, but operates within
1074  the given space.
1075 */
1076 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1077 
1078 /*
1079  mspace_memalign behaves as memalign, but operates within
1080  the given space.
1081 */
1082 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1083 
1084 /*
1085  mspace_independent_calloc behaves as independent_calloc, but
1086  operates within the given space.
1087 */
1088 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1089  size_t elem_size, void* chunks[]);
1090 
1091 /*
1092  mspace_independent_comalloc behaves as independent_comalloc, but
1093  operates within the given space.
1094 */
1095 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1096  size_t sizes[], void* chunks[]);
1097 
1098 /*
1099  mspace_footprint() returns the number of bytes obtained from the
1100  system for this space.
1101 */
1102 size_t mspace_footprint(mspace msp);
1103 
1104 /*
1105  mspace_max_footprint() returns the peak number of bytes obtained from the
1106  system for this space.
1107 */
1108 size_t mspace_max_footprint(mspace msp);
1109 
1110 
1111 #if !NO_MALLINFO
1112 /*
1113  mspace_mallinfo behaves as mallinfo, but reports properties of
1114  the given space.
1115 */
1116 struct mallinfo mspace_mallinfo(mspace msp);
1117 #endif /* NO_MALLINFO */
1118 
1119 /*
1120  mspace_malloc_stats behaves as malloc_stats, but reports
1121  properties of the given space.
1122 */
1123 void mspace_malloc_stats(mspace msp);
1124 
1125 /*
1126  mspace_trim behaves as malloc_trim, but
1127  operates within the given space.
1128 */
1129 int mspace_trim(mspace msp, size_t pad);
1130 
1131 /*
1132  An alias for mallopt.
1133 */
1134 int mspace_mallopt(int, int);
1135 
1136 #endif /* MSPACES */
1137 
1138 #ifdef __cplusplus
1139 }; /* end of extern "C" */
1140 #endif /* __cplusplus */
1141 
1142 /*
1143  ========================================================================
1144  To make a fully customizable malloc.h header file, cut everything
1145  above this line, put into file malloc.h, edit to suit, and #include it
1146  on the next line, as well as in programs that use this malloc.
1147  ========================================================================
1148 */
1149 
1150 /* #include "malloc.h" */
1151 
1152 /*------------------------------ internal #includes ---------------------- */
1153 
1154 #ifdef _MSC_VER
1155 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1156 #endif /* _MSC_VER */
1157 
1158 #include <stdio.h> /* for printing in malloc_stats */
1159 
1160 #ifndef LACKS_ERRNO_H
1161 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1162 #endif /* LACKS_ERRNO_H */
1163 #if FOOTERS
1164 #include <time.h> /* for magic initialization */
1165 #endif /* FOOTERS */
1166 #ifndef LACKS_STDLIB_H
1167 #include <stdlib.h> /* for abort() */
1168 #endif /* LACKS_STDLIB_H */
1169 #ifdef DEBUG
1170 #if ABORT_ON_ASSERT_FAILURE
1171 #define assert(x) if(!(x)) ABORT
1172 #else /* ABORT_ON_ASSERT_FAILURE */
1173 #include <assert.h>
1174 #endif /* ABORT_ON_ASSERT_FAILURE */
1175 #else /* DEBUG */
1176 #define assert(x)
1177 #endif /* DEBUG */
1178 #ifndef LACKS_STRING_H
1179 #include <string.h> /* for memset etc */
1180 #endif /* LACKS_STRING_H */
1181 #if USE_BUILTIN_FFS
1182 #ifndef LACKS_STRINGS_H
1183 #include <strings.h> /* for ffs */
1184 #endif /* LACKS_STRINGS_H */
1185 #endif /* USE_BUILTIN_FFS */
1186 #if HAVE_MMAP
1187 #ifndef LACKS_SYS_MMAN_H
1188 #include <sys/mman.h> /* for mmap */
1189 #endif /* LACKS_SYS_MMAN_H */
1190 #ifndef LACKS_FCNTL_H
1191 #include <fcntl.h>
1192 #endif /* LACKS_FCNTL_H */
1193 #endif /* HAVE_MMAP */
1194 #if HAVE_MORECORE
1195 #ifndef LACKS_UNISTD_H
1196 #include <unistd.h> /* for sbrk */
1197 #else /* LACKS_UNISTD_H */
1198 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1199 extern void* sbrk(ptrdiff_t);
1200 #endif /* FreeBSD etc */
1201 #endif /* LACKS_UNISTD_H */
1202 #endif /* HAVE_MORECORE */
1203 
1204 #ifndef WIN32
1205 #ifndef malloc_getpagesize
1206 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1207 # ifndef _SC_PAGE_SIZE
1208 # define _SC_PAGE_SIZE _SC_PAGESIZE
1209 # endif
1210 # endif
1211 # ifdef _SC_PAGE_SIZE
1212 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1213 # else
1214 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1215  extern size_t getpagesize();
1216 # define malloc_getpagesize getpagesize()
1217 # else
1218 # ifdef WIN32 /* use supplied emulation of getpagesize */
1219 # define malloc_getpagesize getpagesize()
1220 # else
1221 # ifndef LACKS_SYS_PARAM_H
1222 # include <sys/param.h>
1223 # endif
1224 # ifdef EXEC_PAGESIZE
1225 # define malloc_getpagesize EXEC_PAGESIZE
1226 # else
1227 # ifdef NBPG
1228 # ifndef CLSIZE
1229 # define malloc_getpagesize NBPG
1230 # else
1231 # define malloc_getpagesize (NBPG * CLSIZE)
1232 # endif
1233 # else
1234 # ifdef NBPC
1235 # define malloc_getpagesize NBPC
1236 # else
1237 # ifdef PAGESIZE
1238 # define malloc_getpagesize PAGESIZE
1239 # else /* just guess */
1240 # define malloc_getpagesize ((size_t)4096U)
1241 # endif
1242 # endif
1243 # endif
1244 # endif
1245 # endif
1246 # endif
1247 # endif
1248 #endif
1249 #endif
1250 
1251 /* ------------------- size_t and alignment properties -------------------- */
1252 
1253 /* The byte and bit size of a size_t */
1254 #define SIZE_T_SIZE (sizeof(size_t))
1255 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1256 
1257 /* Some constants coerced to size_t */
1258 /* Annoying but necessary to avoid errors on some platforms */
1259 #define SIZE_T_ZERO ((size_t)0)
1260 #define SIZE_T_ONE ((size_t)1)
1261 #define SIZE_T_TWO ((size_t)2)
1262 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1263 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1264 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1265 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1266 
1267 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1268 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1269 
1270 /* True if address a has acceptable alignment */
1271 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1272 
1273 /* the number of bytes to offset an address to align it */
1274 #define align_offset(A)\
1275  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1276  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
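
/*
  Worked example (editorial, illustrative): with the default 8-byte
  MALLOC_ALIGNMENT, CHUNK_ALIGN_MASK is 7. For an address A with
  (A & 7) == 3, align_offset(A) is (8 - 3) & 7 == 5, the number of bytes
  to skip to reach the next 8-byte boundary; for an aligned A it is 0.
*/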
1277 
1278 /* -------------------------- MMAP preliminaries ------------------------- */
1279 
1280 /*
1281  If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1282  checks to fail so the compiler optimizer can delete code rather than
1283  using so many "#if"s.
1284 */
1285 
1286 
1287 /* MORECORE and MMAP must return MFAIL on failure */
1288 #define MFAIL ((void*)(MAX_SIZE_T))
1289 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1290 
1291 #if !HAVE_MMAP
1292 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1293 #define USE_MMAP_BIT (SIZE_T_ZERO)
1294 #define CALL_MMAP(s) MFAIL
1295 #define CALL_MUNMAP(a, s) (-1)
1296 #define DIRECT_MMAP(s) MFAIL
1297 
1298 #else /* HAVE_MMAP */
1299 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1300 #define USE_MMAP_BIT (SIZE_T_ONE)
1301 
1302 #if !defined(WIN32) && !defined (__OS2__)
1303 #define CALL_MUNMAP(a, s) munmap((a), (s))
1304 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1305 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1306 #define MAP_ANONYMOUS MAP_ANON
1307 #endif /* MAP_ANON */
1308 #ifdef MAP_ANONYMOUS
1309 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1310 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1311 #else /* MAP_ANONYMOUS */
1312 /*
1313  Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1314  is unlikely to be needed, but is supplied just in case.
1315 */
1316 #define MMAP_FLAGS (MAP_PRIVATE)
1317 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1318 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1319  (dev_zero_fd = open("/dev/zero", O_RDWR), \
1320  mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1321  mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1322 #endif /* MAP_ANONYMOUS */
1323 
1324 #define DIRECT_MMAP(s) CALL_MMAP(s)
1325 
1326 #elif defined(__OS2__)
1327 
1328 /* OS/2 MMAP via DosAllocMem */
1329 static void* os2mmap(size_t size) {
1330   void* ptr;
1331   if (DosAllocMem(&ptr, size, OBJ_ANY|PAG_COMMIT|PAG_READ|PAG_WRITE) &&
1332       DosAllocMem(&ptr, size, PAG_COMMIT|PAG_READ|PAG_WRITE))
1333     return MFAIL;
1334   return ptr;
1335 }
1336 
1337 #define os2direct_mmap(n) os2mmap(n)
1338 
1339 /* This function supports releasing coalesced segments */
1340 static int os2munmap(void* ptr, size_t size) {
1341  while (size) {
1342  ULONG ulSize = size;
1343  ULONG ulFlags = 0;
1344  if (DosQueryMem(ptr, &ulSize, &ulFlags) != 0)
1345  return -1;
1346  if ((ulFlags & PAG_BASE) == 0 ||(ulFlags & PAG_COMMIT) == 0 ||
1347  ulSize > size)
1348  return -1;
1349  if (DosFreeMem(ptr) != 0)
1350  return -1;
1351  ptr = ( void * ) ( ( char * ) ptr + ulSize );
1352  size -= ulSize;
1353  }
1354  return 0;
1355 }
1356 
1357 #define CALL_MMAP(s) os2mmap(s)
1358 #define CALL_MUNMAP(a, s) os2munmap((a), (s))
1359 #define DIRECT_MMAP(s) os2direct_mmap(s)
1360 
1361 #else /* WIN32 */
1362 
1363 /* Win32 MMAP via VirtualAlloc */
1364 static void* win32mmap(size_t size) {
1365  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
1366  return (ptr != 0)? ptr: MFAIL;
1367 }
1368 
1369 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1370 static void* win32direct_mmap(size_t size) {
1371  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1372  PAGE_EXECUTE_READWRITE);
1373  return (ptr != 0)? ptr: MFAIL;
1374 }
1375 
1376 /* This function supports releasing coalesced segments */
1377 static int win32munmap(void* ptr, size_t size) {
1378  MEMORY_BASIC_INFORMATION minfo;
1379  char* cptr = ptr;
1380  while (size) {
1381  if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1382  return -1;
1383  if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1384  minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1385  return -1;
1386  if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1387  return -1;
1388  cptr += minfo.RegionSize;
1389  size -= minfo.RegionSize;
1390  }
1391  return 0;
1392 }
1393 
1394 #define CALL_MMAP(s) win32mmap(s)
1395 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1396 #define DIRECT_MMAP(s) win32direct_mmap(s)
1397 #endif /* WIN32 */
1398 #endif /* HAVE_MMAP */
1399 
1400 #if HAVE_MMAP && HAVE_MREMAP
1401 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1402 #else /* HAVE_MMAP && HAVE_MREMAP */
1403 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1404 #endif /* HAVE_MMAP && HAVE_MREMAP */
1405 
1406 #if HAVE_MORECORE
1407 #define CALL_MORECORE(S) MORECORE(S)
1408 #else /* HAVE_MORECORE */
1409 #define CALL_MORECORE(S) MFAIL
1410 #endif /* HAVE_MORECORE */
1411 
1412 /* mstate bit set if contiguous morecore disabled or failed */
1413 #define USE_NONCONTIGUOUS_BIT (4U)
1414 
1415 /* segment bit set in create_mspace_with_base */
1416 #define EXTERN_BIT (8U)
1417 
1418 
1419 /* --------------------------- Lock preliminaries ------------------------ */
1420 
1421 #if USE_LOCKS
1422 
1423 /*
1424  When locks are defined, there are up to two global locks:
1425 
1426  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1427  MORECORE. In many cases sys_alloc requires two calls that should
1428  not be interleaved with calls by other threads. This does not
1429  protect against direct calls to MORECORE by other threads not
1430  using this lock, so there is still code to cope as best we can
1431  with interference.
1432 
1433  * magic_init_mutex ensures that mparams.magic and other
1434  unique mparams values are initialized only once.
1435 */
1436 
1437 #if !defined(WIN32) && !defined(__OS2__)
1438 /* By default use posix locks */
1439 #include <pthread.h>
1440 #define MLOCK_T pthread_mutex_t
1441 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1442 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1443 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1444 
1445 #if HAVE_MORECORE
1446 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1447 #endif /* HAVE_MORECORE */
1448 
1449 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1450 
1451 #elif defined(__OS2__)
1452 #define MLOCK_T HMTX
1453 #define INITIAL_LOCK(l) DosCreateMutexSem(0, l, 0, FALSE)
1454 #define ACQUIRE_LOCK(l) DosRequestMutexSem(*l, SEM_INDEFINITE_WAIT)
1455 #define RELEASE_LOCK(l) DosReleaseMutexSem(*l)
1456 #if HAVE_MORECORE
1457 static MLOCK_T morecore_mutex;
1458 #endif /* HAVE_MORECORE */
1459 static MLOCK_T magic_init_mutex;
1460 
1461 #else /* WIN32 */
1462 /*
1463  Because lock-protected regions have bounded times, and there
1464  are no recursive lock calls, we can use simple spinlocks.
1465 */
1466 
1467 #define MLOCK_T long
1468 static int win32_acquire_lock (MLOCK_T *sl) {
1469  for (;;) {
1470 #ifdef InterlockedCompareExchangePointer
1471  if (!InterlockedCompareExchange(sl, 1, 0))
1472  return 0;
1473 #else /* Use older void* version */
1474  if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1475  return 0;
1476 #endif /* InterlockedCompareExchangePointer */
1477  Sleep (0);
1478  }
1479 }
1480 
1481 static void win32_release_lock (MLOCK_T *sl) {
1482  InterlockedExchange (sl, 0);
1483 }
1484 
1485 #define INITIAL_LOCK(l) *(l)=0
1486 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1487 #define RELEASE_LOCK(l) win32_release_lock(l)
1488 #if HAVE_MORECORE
1489 static MLOCK_T morecore_mutex;
1490 #endif /* HAVE_MORECORE */
1491 static MLOCK_T magic_init_mutex;
1492 #endif /* WIN32 */
1493 
1494 #define USE_LOCK_BIT (2U)
1495 #else /* USE_LOCKS */
1496 #define USE_LOCK_BIT (0U)
1497 #define INITIAL_LOCK(l)
1498 #endif /* USE_LOCKS */
1499 
1500 #if USE_LOCKS && HAVE_MORECORE
1501 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1502 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1503 #else /* USE_LOCKS && HAVE_MORECORE */
1504 #define ACQUIRE_MORECORE_LOCK()
1505 #define RELEASE_MORECORE_LOCK()
1506 #endif /* USE_LOCKS && HAVE_MORECORE */
1507 
1508 #if USE_LOCKS
1509 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1510 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1511 #else /* USE_LOCKS */
1512 #define ACQUIRE_MAGIC_INIT_LOCK()
1513 #define RELEASE_MAGIC_INIT_LOCK()
1514 #endif /* USE_LOCKS */
1515 
1516 
1517 /* ----------------------- Chunk representations ------------------------ */
1518 
1519 /*
1520  (The following includes lightly edited explanations by Colin Plumb.)
1521 
1522  The malloc_chunk declaration below is misleading (but accurate and
1523  necessary). It declares a "view" into memory allowing access to
1524  necessary fields at known offsets from a given base.
1525 
1526  Chunks of memory are maintained using a `boundary tag' method as
1527  originally described by Knuth. (See the paper by Paul Wilson
1528  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1529  techniques.) Sizes of free chunks are stored both in the front of
1530  each chunk and at the end. This makes consolidating fragmented
1531  chunks into bigger chunks fast. The head fields also hold bits
1532  representing whether chunks are free or in use.
1533 
1534  Here are some pictures to make it clearer. They are "exploded" to
1535  show that the state of a chunk can be thought of as extending from
1536  the high 31 bits of the head field of its header through the
1537  prev_foot and PINUSE_BIT bit of the following chunk header.
1538 
1539  A chunk that's in use looks like:
1540 
1541  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1542  | Size of previous chunk (if P = 1) |
1543  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1544  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1545  | Size of this chunk 1| +-+
1546  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1547  | |
1548  +- -+
1549  | |
1550  +- -+
1551  | :
1552  +- size - sizeof(size_t) available payload bytes -+
1553  : |
1554  chunk-> +- -+
1555  | |
1556  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1557  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1558  | Size of next chunk (may or may not be in use) | +-+
1559  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1560 
1561  And if it's free, it looks like this:
1562 
1563  chunk-> +- -+
1564  | User payload (must be in use, or we would have merged!) |
1565  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1566  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1567  | Size of this chunk 0| +-+
1568  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1569  | Next pointer |
1570  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1571  | Prev pointer |
1572  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1573  | :
1574  +- size - sizeof(struct chunk) unused bytes -+
1575  : |
1576  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1577  | Size of this chunk |
1578  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1579  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1580  | Size of next chunk (must be in use, or we would have merged)| +-+
1581  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1582  | :
1583  +- User payload -+
1584  : |
1585  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1586  |0|
1587  +-+
1588  Note that since we always merge adjacent free chunks, the chunks
1589  adjacent to a free chunk must be in use.
1590 
1591  Given a pointer to a chunk (which can be derived trivially from the
1592  payload pointer) we can, in O(1) time, find out whether the adjacent
1593  chunks are free, and if so, unlink them from the lists that they
1594  are on and merge them with the current chunk.
1595 
1596  Chunks always begin on even word boundaries, so the mem portion
1597  (which is returned to the user) is also on an even word boundary, and
1598  thus at least double-word aligned.
1599 
1600  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1601  chunk size (which is always a multiple of two words), is an in-use
1602  bit for the *previous* chunk. If that bit is *clear*, then the
1603  word before the current chunk size contains the previous chunk
1604  size, and can be used to find the front of the previous chunk.
1605  The very first chunk allocated always has this bit set, preventing
1606  access to non-existent (or non-owned) memory. If pinuse is set for
1607  any given chunk, then you CANNOT determine the size of the
1608  previous chunk, and might even get a memory addressing fault when
1609  trying to do so.
1610 
1611  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1612  the chunk size redundantly records whether the current chunk is
1613  inuse. This redundancy enables usage checks within free and realloc,
1614  and reduces indirection when freeing and consolidating chunks.
1615 
1616  Each freshly allocated chunk must have both cinuse and pinuse set.
1617  That is, each allocated chunk borders either a previously allocated
1618  and still in-use chunk, or the base of its memory arena. This is
1619  ensured by making all allocations from the `lowest' part of any
1620  found chunk. Further, no free chunk physically borders another one,
1621  so each free chunk is known to be preceded and followed by either
1622  inuse chunks or the ends of memory.
1623 
1624  Note that the `foot' of the current chunk is actually represented
1625  as the prev_foot of the NEXT chunk. This makes it easier to
1626  deal with alignments etc but can be very confusing when trying
1627  to extend or adapt this code.
1628 
1629  The exceptions to all this are
1630 
1631  1. The special chunk `top' is the top-most available chunk (i.e.,
1632  the one bordering the end of available memory). It is treated
1633  specially. Top is never included in any bin, is used only if
1634  no other chunk is available, and is released back to the
1635  system if it is very large (see M_TRIM_THRESHOLD). In effect,
1636  the top chunk is treated as larger (and thus less well
1637  fitting) than any other available chunk. The top chunk
1638  doesn't update its trailing size field since there is no next
1639  contiguous chunk that would have to index off it. However,
1640  space is still allocated for it (TOP_FOOT_SIZE) to enable
1641  separation or merging when space is extended.
1642 
1643  2. Chunks allocated via mmap, which have the lowest-order bit
1644  (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1645  PINUSE_BIT in their head fields. Because they are allocated
1646  one-by-one, each must carry its own prev_foot field, which is
1647  also used to hold the offset this chunk has within its mmapped
1648  region, which is needed to preserve alignment. Each mmapped
1649  chunk is trailed by the first two fields of a fake next-chunk
1650  for sake of usage checks.
1651 
1652 */
1653 
1654 struct malloc_chunk {
1655  size_t prev_foot; /* Size of previous chunk (if free). */
1656  size_t head; /* Size and inuse bits. */
1657  struct malloc_chunk* fd; /* double links -- used only if free. */
1658  struct malloc_chunk* bk;
1659 };
1660 
1661 typedef struct malloc_chunk mchunk;
1662 typedef struct malloc_chunk* mchunkptr;
1663 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1664 typedef size_t bindex_t; /* Described below */
1665 typedef unsigned int binmap_t; /* Described below */
1666 typedef unsigned int flag_t; /* The type of various bit flag sets */
1667 
1668 /* ------------------- Chunks sizes and alignments ----------------------- */
1669 
1670 #define MCHUNK_SIZE (sizeof(mchunk))
1671 
1672 #if FOOTERS
1673 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1674 #else /* FOOTERS */
1675 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
1676 #endif /* FOOTERS */
1677 
1678 /* MMapped chunks need a second word of overhead ... */
1679 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1680 /* ... and additional padding for fake next-chunk at foot */
1681 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1682 
1683 /* The smallest size we can malloc is an aligned minimal chunk */
1684 #define MIN_CHUNK_SIZE\
1685  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1686 
1687 /* conversion from malloc headers to user pointers, and back */
1688 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1689 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1690 /* chunk associated with aligned address A */
1691 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
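/*
   Worked example (editorial), assuming 8-byte size_t and the default 8-byte
   alignment: MCHUNK_SIZE == 32, MIN_CHUNK_SIZE == 32 and TWO_SIZE_T_SIZES == 16,
   so chunk2mem(p) returns p + 16 and mem2chunk(mem) returns mem - 16; the two
   conversions are exact inverses of each other.
*/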
1692 
1693 /* Bounds on request (not chunk) sizes. */
1694 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1695 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1696 
1697 /* pad request bytes into a usable size */
1698 #define pad_request(req) \
1699  (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1700 
1701 /* pad request, checking for minimum (but not maximum) */
1702 #define request2size(req) \
1703  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
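/*
   Worked example (editorial), assuming 8-byte size_t with FOOTERS undefined,
   so CHUNK_OVERHEAD == 8, CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE == 32 and
   MIN_REQUEST == 23:

     request2size(1)   == MIN_CHUNK_SIZE == 32        (below MIN_REQUEST)
     request2size(100) == pad_request(100) == (100 + 8 + 7) & ~7 == 112
*/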
1704 
1705 
1706 /* ------------------ Operations on head and foot fields ----------------- */
1707 
1708 /*
1709  The head field of a chunk is or'ed with PINUSE_BIT when the previous
1710  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1711  use. If the chunk was obtained with mmap, the prev_foot field has
1712  IS_MMAPPED_BIT set, and additionally holds the offset of the base of
1713  the chunk from the base of its mmapped region.
1714 */
1715 
1716 #define PINUSE_BIT (SIZE_T_ONE)
1717 #define CINUSE_BIT (SIZE_T_TWO)
1718 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1719 
1720 /* Head value for fenceposts */
1721 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1722 
1723 /* extraction of fields from head words */
1724 #define cinuse(p) ((p)->head & CINUSE_BIT)
1725 #define pinuse(p) ((p)->head & PINUSE_BIT)
1726 #define chunksize(p) ((p)->head & ~(INUSE_BITS))
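/*
   Example (editorial): an in-use chunk of 48 bytes whose lower neighbor is
   also in use has head == 48 | PINUSE_BIT | CINUSE_BIT == 0x33, so
   chunksize(p) == 48 and both cinuse(p) and pinuse(p) are nonzero.
*/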
1727 
1728 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1729 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1730 
1731 /* Treat space at ptr +/- offset as a chunk */
1732 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1733 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1734 
1735 /* Ptr to next or previous physical malloc_chunk. */
1736 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1737 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1738 
1739 /* extract next chunk's pinuse bit */
1740 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1741 
1742 /* Get/set size at footer */
1743 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1744 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1745 
1746 /* Set size, pinuse bit, and foot */
1747 #define set_size_and_pinuse_of_free_chunk(p, s)\
1748  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1749 
1750 /* Set size, pinuse bit, foot, and clear next pinuse */
1751 #define set_free_with_pinuse(p, s, n)\
1752  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1753 
1754 #define is_mmapped(p)\
1755  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1756 
1757 /* Get the internal overhead associated with chunk p */
1758 #define overhead_for(p)\
1759  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1760 
1761 /* Return true if malloced space is not necessarily cleared */
1762 #if MMAP_CLEARS
1763 #define calloc_must_clear(p) (!is_mmapped(p))
1764 #else /* MMAP_CLEARS */
1765 #define calloc_must_clear(p) (1)
1766 #endif /* MMAP_CLEARS */
1767 
1768 /* ---------------------- Overlaid data structures ----------------------- */
1769 
1770 /*
1771  When chunks are not in use, they are treated as nodes of either
1772  lists or trees.
1773 
1774  "Small" chunks are stored in circular doubly-linked lists, and look
1775  like this:
1776 
1777  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1778  | Size of previous chunk |
1779  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1780  `head:' | Size of chunk, in bytes |P|
1781  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1782  | Forward pointer to next chunk in list |
1783  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1784  | Back pointer to previous chunk in list |
1785  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1786  | Unused space (may be 0 bytes long) .
1787  . .
1788  . |
1789 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1790  `foot:' | Size of chunk, in bytes |
1791  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1792 
1793  Larger chunks are kept in a form of bitwise digital trees (aka
1794  tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1795  free chunks greater than 256 bytes, their size doesn't impose any
1796  constraints on user chunk sizes. Each node looks like:
1797 
1798  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1799  | Size of previous chunk |
1800  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1801  `head:' | Size of chunk, in bytes |P|
1802  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1803  | Forward pointer to next chunk of same size |
1804  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1805  | Back pointer to previous chunk of same size |
1806  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1807  | Pointer to left child (child[0]) |
1808  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1809  | Pointer to right child (child[1]) |
1810  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1811  | Pointer to parent |
1812  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1813  | bin index of this chunk |
1814  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1815  | Unused space .
1816  . |
1817 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1818  `foot:' | Size of chunk, in bytes |
1819  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1820 
1821  Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1822  of the same size are arranged in a circularly-linked list, with only
1823  the oldest chunk (the next to be used, in our FIFO ordering)
1824  actually in the tree. (Tree members are distinguished by a non-null
1825  parent pointer.) If a chunk with the same size as an existing node
1826  is inserted, it is linked off the existing node using pointers that
1827  work in the same way as fd/bk pointers of small chunks.
1828 
1829  Each tree contains a power of 2 sized range of chunk sizes (the
1830  smallest is 0x100 <= x < 0x180), which is divided in half at each
1831  tree level, with the chunks in the smaller half of the range (0x100
1832  <= x < 0x140 for the top node) in the left subtree and the larger
1833  half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1834  done by inspecting individual bits.
1835 
1836  Using these rules, each node's left subtree contains all smaller
1837  sizes than its right subtree. However, the node at the root of each
1838  subtree has no particular ordering relationship to either. (The
1839  dividing line between the subtree sizes is based on trie relation.)
1840  If we remove the last chunk of a given size from the interior of the
1841  tree, we need to replace it with a leaf node. The tree ordering
1842  rules permit a node to be replaced by any leaf below it.
1843 
1844  The smallest chunk in a tree (a common operation in a best-fit
1845  allocator) can be found by walking a path to the leftmost leaf in
1846  the tree. Unlike a usual binary tree, where we follow left child
1847  pointers until we reach a null, here we follow the right child
1848  pointer any time the left one is null, until we reach a leaf with
1849  both child pointers null. The smallest chunk in the tree will be
1850  somewhere along that path.
1851 
1852  The worst case number of steps to add, find, or remove a node is
1853  bounded by the number of bits differentiating chunks within
1854  bins. Under current bin calculations, this ranges from 6 up to 21
1855  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1856  is of course much better.
1857 */
1858 
1859 struct malloc_tree_chunk {
1860  /* The first four fields must be compatible with malloc_chunk */
1861  size_t prev_foot;
1862  size_t head;
1863  struct malloc_tree_chunk* fd;
1864  struct malloc_tree_chunk* bk;
1865 
1866  struct malloc_tree_chunk* child[2];
1867  struct malloc_tree_chunk* parent;
1868  bindex_t index;
1869 };
1870 
1871 typedef struct malloc_tree_chunk tchunk;
1872 typedef struct malloc_tree_chunk* tchunkptr;
1873 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1874 
1875 /* A little helper macro for trees */
1876 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
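/*
   Editorial sketch (not part of the original sources): locating the smallest
   chunk in a tree as described above -- follow child[0] when present, else
   child[1], remembering the least size seen along the path. Guarded out
   because it is illustration only.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr least = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(least))
      least = t;        /* smaller chunk found along the leftmost path */
  }
  return least;
}
#endif /* example only */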
1877 
1878 /* ----------------------------- Segments -------------------------------- */
1879 
1880 /*
1881  Each malloc space may include non-contiguous segments, held in a
1882  list headed by an embedded malloc_segment record representing the
1883  top-most space. Segments also include flags holding properties of
1884  the space. Large chunks that are directly allocated by mmap are not
1885  included in this list. They are instead independently created and
1886  destroyed without otherwise keeping track of them.
1887 
1888  Segment management mainly comes into play for spaces allocated by
1889  MMAP. Any call to MMAP might or might not return memory that is
1890  adjacent to an existing segment. MORECORE normally contiguously
1891  extends the current space, so this space is almost always adjacent,
1892  which is simpler and faster to deal with. (This is why MORECORE is
1893  used preferentially to MMAP when both are available -- see
1894  sys_alloc.) When allocating using MMAP, we don't use any of the
1895  hinting mechanisms (inconsistently) supported in various
1896  implementations of unix mmap, or distinguish reserving from
1897  committing memory. Instead, we just ask for space, and exploit
1898  contiguity when we get it. It is probably possible to do
1899  better than this on some systems, but no general scheme seems
1900  to be significantly better.
1901 
1902  Management entails a simpler variant of the consolidation scheme
1903  used for chunks to reduce fragmentation -- new adjacent memory is
1904  normally prepended or appended to an existing segment. However,
1905  there are limitations compared to chunk consolidation that mostly
1906  reflect the fact that segment processing is relatively infrequent
1907  (occurring only when getting memory from system) and that we
1908  don't expect to have huge numbers of segments:
1909 
1910  * Segments are not indexed, so traversal requires linear scans. (It
1911  would be possible to index these, but is not worth the extra
1912  overhead and complexity for most programs on most platforms.)
1913  * New segments are only appended to old ones when holding top-most
1914  memory; if they cannot be prepended to others, they are held in
1915  different segments.
1916 
1917  Except for the top-most segment of an mstate, each segment record
1918  is kept at the tail of its segment. Segments are added by pushing
1919  segment records onto the list headed by &mstate.seg for the
1920  containing mstate.
1921 
1922  Segment flags control allocation/merge/deallocation policies:
1923  * If EXTERN_BIT set, then we did not allocate this segment,
1924  and so should not try to deallocate or merge with others.
1925  (This currently holds only for the initial segment passed
1926  into create_mspace_with_base.)
1927  * If IS_MMAPPED_BIT set, the segment may be merged with
1928  other surrounding mmapped segments and trimmed/de-allocated
1929  using munmap.
1930  * If neither bit is set, then the segment was obtained using
1931  MORECORE so can be merged with surrounding MORECORE'd segments
1932  and deallocated/trimmed using MORECORE with negative arguments.
1933 */
1934 
1935 struct malloc_segment {
1936  char* base; /* base address */
1937  size_t size; /* allocated size */
1938  struct malloc_segment* next; /* ptr to next segment */
1939 #if FFI_MMAP_EXEC_WRIT
1940  /* The mmap magic is supposed to store the address of the executable
1941  segment at the very end of the requested block. */
1942 
1943 # define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
1944 
1945  /* We can only merge segments if their corresponding executable
1946  segments are at identical offsets. */
1947 # define check_segment_merge(S,b,s) \
1948  (mmap_exec_offset((b),(s)) == (S)->exec_offset)
1949 
1950 # define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
1951 # define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
1952 
1953  /* The removal of sflags only works with HAVE_MORECORE == 0. */
1954 
1955 # define get_segment_flags(S) (IS_MMAPPED_BIT)
1956 # define set_segment_flags(S,v) \
1957  (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) : \
1958  (((S)->exec_offset = \
1959  mmap_exec_offset((S)->base, (S)->size)), \
1960  (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) != \
1961  (S)->exec_offset) ? (ABORT, (v)) : \
1962  (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
1963 
1964  /* We use an offset here, instead of a pointer, because then, when
1965  base changes, we don't have to modify this. On architectures
1966  with segmented addresses, this might not work. */
1967  ptrdiff_t exec_offset;
1968 #else
1969 
1970 # define get_segment_flags(S) ((S)->sflags)
1971 # define set_segment_flags(S,v) ((S)->sflags = (v))
1972 # define check_segment_merge(S,b,s) (1)
1973 
1974  flag_t sflags; /* mmap and extern flag */
1975 #endif
1976 };
1977 
1978 #define is_mmapped_segment(S) (get_segment_flags(S) & IS_MMAPPED_BIT)
1979 #define is_extern_segment(S) (get_segment_flags(S) & EXTERN_BIT)
1980 
1981 typedef struct malloc_segment msegment;
1982 typedef struct malloc_segment* msegmentptr;
1983 
1984 /* ---------------------------- malloc_state ----------------------------- */
1985 
1986 /*
1987  A malloc_state holds all of the bookkeeping for a space.
1988  The main fields are:
1989 
1990  Top
1991  The topmost chunk of the currently active segment. Its size is
1992  cached in topsize. The actual size of topmost space is
1993  topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1994  fenceposts and segment records if necessary when getting more
1995  space from the system. The size at which to autotrim top is
1996  cached from mparams in trim_check, except that it is disabled if
1997  an autotrim fails.
1998 
1999  Designated victim (dv)
2000  This is the preferred chunk for servicing small requests that
2001  don't have exact fits. It is normally the chunk split off most
2002  recently to service another small request. Its size is cached in
2003  dvsize. The link fields of this chunk are not maintained since it
2004  is not kept in a bin.
2005 
2006  SmallBins
2007  An array of bin headers for free chunks. These bins hold chunks
2008  with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2009  chunks of all the same size, spaced 8 bytes apart. To simplify
2010  use in double-linked lists, each bin header acts as a malloc_chunk
2011  pointing to the real first node, if it exists (else pointing to
2012  itself). This avoids special-casing for headers. But to avoid
2013  waste, we allocate only the fd/bk pointers of bins, and then use
2014  repositioning tricks to treat these as the fields of a chunk.
2015 
2016  TreeBins
2017  Treebins are pointers to the roots of trees holding a range of
2018  sizes. There are 2 equally spaced treebins for each power of two
2019  from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2020  larger.
2021 
2022  Bin maps
2023  There is one bit map for small bins ("smallmap") and one for
2024  treebins ("treemap"). Each bin sets its bit when non-empty, and
2025  clears the bit when empty. Bit operations are then used to avoid
2026  bin-by-bin searching -- nearly all "search" is done without ever
2027  looking at bins that won't be selected. The bit maps
2028  conservatively use 32 bits per map word, even on a 64-bit system.
2029  For a good description of some of the bit-based techniques used
2030  here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2031  supplement at http://hackersdelight.org/). Many of these are
2032  intended to reduce the branchiness of paths through malloc etc, as
2033  well as to reduce the number of memory locations read or written.
2034 
2035  Segments
2036  A list of segments headed by an embedded malloc_segment record
2037  representing the initial space.
2038 
2039  Address check support
2040  The least_addr field is the least address ever obtained from
2041  MORECORE or MMAP. Attempted frees and reallocs of any address less
2042  than this are trapped (unless INSECURE is defined).
2043 
2044  Magic tag
2045  A cross-check field that should always hold the same value as mparams.magic.
2046 
2047  Flags
2048  Bits recording whether to use MMAP, locks, or contiguous MORECORE
2049 
2050  Statistics
2051  Each space keeps track of current and maximum system memory
2052  obtained via MORECORE or MMAP.
2053 
2054  Locking
2055  If USE_LOCKS is defined, the "mutex" lock is acquired and released
2056  around every public call using this mspace.
2057 */
2058 
2059 /* Bin types, widths and sizes */
2060 #define NSMALLBINS (32U)
2061 #define NTREEBINS (32U)
2062 #define SMALLBIN_SHIFT (3U)
2063 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2064 #define TREEBIN_SHIFT (8U)
2065 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2066 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2067 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2068 
2069 struct malloc_state {
2070  binmap_t smallmap;
2071  binmap_t treemap;
2072  size_t dvsize;
2073  size_t topsize;
2074  char* least_addr;
2075  mchunkptr dv;
2076  mchunkptr top;
2077  size_t trim_check;
2078  size_t magic;
2079  mchunkptr smallbins[(NSMALLBINS+1)*2];
2080  tbinptr treebins[NTREEBINS];
2081  size_t footprint;
2082  size_t max_footprint;
2083  flag_t mflags;
2084 #if USE_LOCKS
2085  MLOCK_T mutex; /* locate lock among fields that rarely change */
2086 #endif /* USE_LOCKS */
2087  msegment seg;
2088 };
2089 
2090 typedef struct malloc_state* mstate;
2091 
2092 /* ------------- Global malloc_state and malloc_params ------------------- */
2093 
2094 /*
2095  malloc_params holds global properties, including those that can be
2096  dynamically set using mallopt. There is a single instance, mparams,
2097  initialized in init_mparams.
2098 */
2099 
2100 struct malloc_params {
2101  size_t magic;
2102  size_t page_size;
2103  size_t granularity;
2104  size_t mmap_threshold;
2105  size_t trim_threshold;
2106  flag_t default_mflags;
2107 };
2108 
2109 static struct malloc_params mparams;
2110 
2111 /* The global malloc_state used for all non-"mspace" calls */
2112 static struct malloc_state _gm_;
2113 #define gm (&_gm_)
2114 #define is_global(M) ((M) == &_gm_)
2115 #define is_initialized(M) ((M)->top != 0)
2116 
2117 /* -------------------------- system alloc setup ------------------------- */
2118 
2119 /* Operations on mflags */
2120 
2121 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2122 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2123 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2124 
2125 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2126 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2127 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2128 
2129 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2130 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2131 
2132 #define set_lock(M,L)\
2133  ((M)->mflags = (L)?\
2134  ((M)->mflags | USE_LOCK_BIT) :\
2135  ((M)->mflags & ~USE_LOCK_BIT))
2136 
2137 /* page-align a size */
2138 #define page_align(S)\
2139  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2140 
2141 /* granularity-align a size */
2142 #define granularity_align(S)\
2143  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2144 
2145 #define is_page_aligned(S)\
2146  (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2147 #define is_granularity_aligned(S)\
2148  (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
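/*
   Worked example (editorial), assuming a 4096-byte page size:
     page_align(1)    == (1 + 4096)    & ~4095 == 4096
     page_align(4097) == (4097 + 4096) & ~4095 == 8192
   The macro adds a full page before masking, so an already page-aligned
   size is rounded up to the next page as well.
*/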
2149 
2150 /* True if segment S holds address A */
2151 #define segment_holds(S, A)\
2152  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2153 
2154 /* Return segment holding given address */
2155 static msegmentptr segment_holding(mstate m, char* addr) {
2156  msegmentptr sp = &m->seg;
2157  for (;;) {
2158  if (addr >= sp->base && addr < sp->base + sp->size)
2159  return sp;
2160  if ((sp = sp->next) == 0)
2161  return 0;
2162  }
2163 }
2164 
2165 /* Return true if segment contains a segment link */
2166 static int has_segment_link(mstate m, msegmentptr ss) {
2167  msegmentptr sp = &m->seg;
2168  for (;;) {
2169  if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2170  return 1;
2171  if ((sp = sp->next) == 0)
2172  return 0;
2173  }
2174 }
2175 
2176 #ifndef MORECORE_CANNOT_TRIM
2177 #define should_trim(M,s) ((s) > (M)->trim_check)
2178 #else /* MORECORE_CANNOT_TRIM */
2179 #define should_trim(M,s) (0)
2180 #endif /* MORECORE_CANNOT_TRIM */
2181 
2182 /*
2183  TOP_FOOT_SIZE is padding at the end of a segment, including space
2184  that may be needed to place segment records and fenceposts when new
2185  noncontiguous segments are added.
2186 */
2187 #define TOP_FOOT_SIZE\
2188  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2189 
2190 
2191 /* ------------------------------- Hooks -------------------------------- */
2192 
2193 /*
2194  PREACTION should be defined to return 0 on success, and nonzero on
2195  failure. If you are not using locking, you can redefine these to do
2196  anything you like.
2197 */
2198 
2199 #if USE_LOCKS
2200 
2201 /* Ensure locks are initialized */
2202 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2203 
2204 #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2205 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2206 #else /* USE_LOCKS */
2207 
2208 #ifndef PREACTION
2209 #define PREACTION(M) (0)
2210 #endif /* PREACTION */
2211 
2212 #ifndef POSTACTION
2213 #define POSTACTION(M)
2214 #endif /* POSTACTION */
2215 
2216 #endif /* USE_LOCKS */
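/*
   Typical call-site pattern (editorial sketch; this is the shape used by the
   statistics and allocation routines later in this file):

     if (!PREACTION(m)) {
       ... operate on the mspace m ...
       POSTACTION(m);
     }
*/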
2217 
2218 /*
2219  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2220  USAGE_ERROR_ACTION is triggered on detected bad frees and
2221  reallocs. The argument p is an address that might have triggered the
2222  fault. It is ignored by the two predefined actions, but might be
2223  useful in custom actions that try to help diagnose errors.
2224 */
2225 
2226 #if PROCEED_ON_ERROR
2227 
2228 /* A count of the number of corruption errors causing resets */
2229 int malloc_corruption_error_count;
2230 
2231 /* default corruption action */
2232 static void reset_on_error(mstate m);
2233 
2234 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2235 #define USAGE_ERROR_ACTION(m, p)
2236 
2237 #else /* PROCEED_ON_ERROR */
2238 
2239 #ifndef CORRUPTION_ERROR_ACTION
2240 #define CORRUPTION_ERROR_ACTION(m) ABORT
2241 #endif /* CORRUPTION_ERROR_ACTION */
2242 
2243 #ifndef USAGE_ERROR_ACTION
2244 #define USAGE_ERROR_ACTION(m,p) ABORT
2245 #endif /* USAGE_ERROR_ACTION */
2246 
2247 #endif /* PROCEED_ON_ERROR */
2248 
2249 /* -------------------------- Debugging setup ---------------------------- */
2250 
2251 #if ! DEBUG
2252 
2253 #define check_free_chunk(M,P)
2254 #define check_inuse_chunk(M,P)
2255 #define check_malloced_chunk(M,P,N)
2256 #define check_mmapped_chunk(M,P)
2257 #define check_malloc_state(M)
2258 #define check_top_chunk(M,P)
2259 
2260 #else /* DEBUG */
2261 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2262 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2263 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2264 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2265 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2266 #define check_malloc_state(M) do_check_malloc_state(M)
2267 
2268 static void do_check_any_chunk(mstate m, mchunkptr p);
2269 static void do_check_top_chunk(mstate m, mchunkptr p);
2270 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2271 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2272 static void do_check_free_chunk(mstate m, mchunkptr p);
2273 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2274 static void do_check_tree(mstate m, tchunkptr t);
2275 static void do_check_treebin(mstate m, bindex_t i);
2276 static void do_check_smallbin(mstate m, bindex_t i);
2277 static void do_check_malloc_state(mstate m);
2278 static int bin_find(mstate m, mchunkptr x);
2279 static size_t traverse_and_check(mstate m);
2280 #endif /* DEBUG */
2281 
2282 /* ---------------------------- Indexing Bins ---------------------------- */
2283 
2284 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2285 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2286 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2287 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
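/*
   Worked example (editorial), assuming 8-byte size_t (MIN_CHUNK_SIZE == 32):
     small_index(40)   == 40 >> 3 == 5,  and small_index2size(5) == 40
     MIN_SMALL_INDEX   == 32 >> 3 == 4
     is_small(256)     is false, consistent with MIN_LARGE_SIZE == 256
*/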
2288 
2289 /* addressing by index. See above about smallbin repositioning */
2290 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2291 #define treebin_at(M,i) (&((M)->treebins[i]))
2292 
2293 /* assign tree index for size S to variable I */
2294 #if defined(__GNUC__) && defined(i386)
2295 #define compute_tree_index(S, I)\
2296 {\
2297  size_t X = S >> TREEBIN_SHIFT;\
2298  if (X == 0)\
2299  I = 0;\
2300  else if (X > 0xFFFF)\
2301  I = NTREEBINS-1;\
2302  else {\
2303  unsigned int K;\
2304  __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2305  I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2306  }\
2307 }
2308 #else /* GNUC */
2309 #define compute_tree_index(S, I)\
2310 {\
2311  size_t X = S >> TREEBIN_SHIFT;\
2312  if (X == 0)\
2313  I = 0;\
2314  else if (X > 0xFFFF)\
2315  I = NTREEBINS-1;\
2316  else {\
2317  unsigned int Y = (unsigned int)X;\
2318  unsigned int N = ((Y - 0x100) >> 16) & 8;\
2319  unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2320  N += K;\
2321  N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2322  K = 14 - N + ((Y <<= K) >> 15);\
2323  I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2324  }\
2325 }
2326 #endif /* GNUC */
2327 
2328 /* Bit representing maximum resolved size in a treebin at i */
2329 #define bit_for_tree_index(i) \
2330  (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2331 
2332 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2333 #define leftshift_for_tree_index(i) \
2334  ((i == NTREEBINS-1)? 0 : \
2335  ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2336 
2337 /* The size of the smallest chunk held in bin with index i */
2338 #define minsize_for_tree_index(i) \
2339  ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2340  (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
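/*
   Worked example (editorial): for chunk size S == 768 (0x300),
   X == S >> TREEBIN_SHIFT == 3, whose highest set bit is bit 1, so
   compute_tree_index yields I == (1 << 1) + ((768 >> 8) & 1) == 3.
   Consistently, minsize_for_tree_index(3) == 768 and
   minsize_for_tree_index(4) == 1024, so treebin 3 covers sizes [768, 1024).
*/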
2341 
2342 
2343 /* ------------------------ Operations on bin maps ----------------------- */
2344 
2345 /* bit corresponding to given index */
2346 #define idx2bit(i) ((binmap_t)(1) << (i))
2347 
2348 /* Mark/Clear bits with given index */
2349 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2350 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2351 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2352 
2353 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2354 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2355 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2356 
2357 /* index corresponding to given bit */
2358 
2359 #if defined(__GNUC__) && defined(i386)
2360 #define compute_bit2idx(X, I)\
2361 {\
2362  unsigned int J;\
2363  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2364  I = (bindex_t)J;\
2365 }
2366 
2367 #else /* GNUC */
2368 #if USE_BUILTIN_FFS
2369 #define compute_bit2idx(X, I) I = ffs(X)-1
2370 
2371 #else /* USE_BUILTIN_FFS */
2372 #define compute_bit2idx(X, I)\
2373 {\
2374  unsigned int Y = X - 1;\
2375  unsigned int K = Y >> (16-4) & 16;\
2376  unsigned int N = K; Y >>= K;\
2377  N += K = Y >> (8-3) & 8; Y >>= K;\
2378  N += K = Y >> (4-2) & 4; Y >>= K;\
2379  N += K = Y >> (2-1) & 2; Y >>= K;\
2380  N += K = Y >> (1-0) & 1; Y >>= K;\
2381  I = (bindex_t)(N + Y);\
2382 }
2383 #endif /* USE_BUILTIN_FFS */
2384 #endif /* GNUC */
2385 
2386 /* isolate the least set bit of a bitmap */
2387 #define least_bit(x) ((x) & -(x))
2388 
2389 /* mask with all bits to left of least bit of x on */
2390 #define left_bits(x) ((x<<1) | -(x<<1))
2391 
2392 /* mask with all bits to left of or equal to least bit of x on */
2393 #define same_or_left_bits(x) ((x) | -(x))
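/*
   Worked example (editorial): if smallmap == 0x68 (bins 3, 5 and 6 non-empty),
   then least_bit(0x68) == 0x08 and compute_bit2idx(0x08, I) sets I == 3.
   For a 32-bit binmap_t, left_bits(0x08) == 0xFFFFFFF0 (bits strictly above
   bit 3) and same_or_left_bits(0x08) == 0xFFFFFFF8 (bit 3 and above).
*/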
2394 
2395 
2396 /* ----------------------- Runtime Check Support ------------------------- */
2397 
2398 /*
2399  For security, the main invariant is that malloc/free/etc never
2400  writes to a static address other than malloc_state, unless static
2401  malloc_state itself has been corrupted, which cannot occur via
2402  malloc (because of these checks). In essence this means that we
2403  believe all pointers, sizes, maps etc held in malloc_state, but
2404  check all of those linked or offsetted from other embedded data
2405  structures. These checks are interspersed with main code in a way
2406  that tends to minimize their run-time cost.
2407 
2408  When FOOTERS is defined, in addition to range checking, we also
2409  verify footer fields of inuse chunks, which can be used to guarantee
2410  that the mstate controlling malloc/free is intact. This is a
2411  streamlined version of the approach described by William Robertson
2412  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2413  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2414  of an inuse chunk holds the xor of its mstate and a random seed,
2415  that is checked upon calls to free() and realloc(). This is
2416  (probabilistically) unguessable from outside the program, but can be
2417  computed by any code successfully malloc'ing any chunk, so does not
2418  itself provide protection against code that has already broken
2419  security through some other means. Unlike Robertson et al, we
2420  always dynamically check addresses of all offset chunks (previous,
2421  next, etc). This turns out to be cheaper than relying on hashes.
2422 */
2423 
2424 #if !INSECURE
2425 /* Check if address a is at least as high as any from MORECORE or MMAP */
2426 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2427 /* Check if address of next chunk n is higher than base chunk p */
2428 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2429 /* Check if p has its cinuse bit on */
2430 #define ok_cinuse(p) cinuse(p)
2431 /* Check if p has its pinuse bit on */
2432 #define ok_pinuse(p) pinuse(p)
2433 
2434 #else /* !INSECURE */
2435 #define ok_address(M, a) (1)
2436 #define ok_next(b, n) (1)
2437 #define ok_cinuse(p) (1)
2438 #define ok_pinuse(p) (1)
2439 #endif /* !INSECURE */
2440 
2441 #if (FOOTERS && !INSECURE)
2442 /* Check if (alleged) mstate m has expected magic field */
2443 #define ok_magic(M) ((M)->magic == mparams.magic)
2444 #else /* (FOOTERS && !INSECURE) */
2445 #define ok_magic(M) (1)
2446 #endif /* (FOOTERS && !INSECURE) */
2447 
2448 
2449 /* In gcc, use __builtin_expect to minimize impact of checks */
2450 #if !INSECURE
2451 #if defined(__GNUC__) && __GNUC__ >= 3
2452 #define RTCHECK(e) __builtin_expect(e, 1)
2453 #else /* GNUC */
2454 #define RTCHECK(e) (e)
2455 #endif /* GNUC */
2456 #else /* !INSECURE */
2457 #define RTCHECK(e) (1)
2458 #endif /* !INSECURE */
2459 
2460 /* macros to set up inuse chunks with or without footers */
2461 
2462 #if !FOOTERS
2463 
2464 #define mark_inuse_foot(M,p,s)
2465 
2466 /* Set cinuse bit and pinuse bit of next chunk */
2467 #define set_inuse(M,p,s)\
2468  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2469  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2470 
2471 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2472 #define set_inuse_and_pinuse(M,p,s)\
2473  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2474  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2475 
2476 /* Set size, cinuse and pinuse bit of this chunk */
2477 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2478  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2479 
2480 #else /* FOOTERS */
2481 
2482 /* Set foot of inuse chunk to be xor of mstate and seed */
2483 #define mark_inuse_foot(M,p,s)\
2484  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2485 
2486 #define get_mstate_for(p)\
2487  ((mstate)(((mchunkptr)((char*)(p) +\
2488  (chunksize(p))))->prev_foot ^ mparams.magic))
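/*
   Sketch of the footer scheme (editorial): for an inuse chunk p belonging to
   mstate m, mark_inuse_foot stores ((size_t)m ^ mparams.magic) in the
   prev_foot of the chunk's trailing neighbor; get_mstate_for XORs that word
   with mparams.magic again, recovering m, which free() and realloc() then
   validate via ok_magic.
*/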
2489 
2490 #define set_inuse(M,p,s)\
2491  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2492  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2493  mark_inuse_foot(M,p,s))
2494 
2495 #define set_inuse_and_pinuse(M,p,s)\
2496  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2497  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2498  mark_inuse_foot(M,p,s))
2499 
2500 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2501  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2502  mark_inuse_foot(M, p, s))
2503 
2504 #endif /* !FOOTERS */
2505 
2506 /* ---------------------------- setting mparams -------------------------- */
2507 
2508 /* Initialize mparams */
2509 static int init_mparams(void) {
2510  if (mparams.page_size == 0) {
2511  size_t s;
2512 
2513  mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2514  mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2515 #if MORECORE_CONTIGUOUS
2516  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2517 #else /* MORECORE_CONTIGUOUS */
2518  mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2519 #endif /* MORECORE_CONTIGUOUS */
2520 
2521 #if (FOOTERS && !INSECURE)
2522  {
2523 #if USE_DEV_RANDOM
2524  int fd;
2525  unsigned char buf[sizeof(size_t)];
2526  /* Try to use /dev/urandom, else fall back on using time */
2527  if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2528  read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2529  s = *((size_t *) buf);
2530  close(fd);
2531  }
2532  else
2533 #endif /* USE_DEV_RANDOM */
2534  s = (size_t)(time(0) ^ (size_t)0x55555555U);
2535 
2536  s |= (size_t)8U; /* ensure nonzero */
2537  s &= ~(size_t)7U; /* improve chances of fault for bad values */
2538 
2539  }
2540 #else /* (FOOTERS && !INSECURE) */
2541  s = (size_t)0x58585858U;
2542 #endif /* (FOOTERS && !INSECURE) */
2543  ACQUIRE_MAGIC_INIT_LOCK();
2544  if (mparams.magic == 0) {
2545  mparams.magic = s;
2546  /* Set up lock for main malloc area */
2547  INITIAL_LOCK(&gm->mutex);
2548  gm->mflags = mparams.default_mflags;
2549  }
2550  RELEASE_MAGIC_INIT_LOCK();
2551 
2552 #if !defined(WIN32) && !defined(__OS2__)
2553  mparams.page_size = malloc_getpagesize;
2554  mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2555  DEFAULT_GRANULARITY : mparams.page_size);
2556 #elif defined (__OS2__)
2557  /* if low-memory is used, os2munmap() would break
2558  if it were anything other than 64k */
2559  mparams.page_size = 4096u;
2560  mparams.granularity = 65536u;
2561 #else /* WIN32 */
2562  {
2563  SYSTEM_INFO system_info;
2564  GetSystemInfo(&system_info);
2565  mparams.page_size = system_info.dwPageSize;
2566  mparams.granularity = system_info.dwAllocationGranularity;
2567  }
2568 #endif /* WIN32 */
2569 
2570  /* Sanity-check configuration:
2571  size_t must be unsigned and as wide as pointer type.
2572  ints must be at least 4 bytes.
2573  alignment must be at least 8.
2574  Alignment, min chunk size, and page size must all be powers of 2.
2575  */
2576  if ((sizeof(size_t) != sizeof(char*)) ||
2577  (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2578  (sizeof(int) < 4) ||
2579  (MALLOC_ALIGNMENT < (size_t)8U) ||
2580  ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2581  ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2582  ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2583  ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2584  ABORT;
2585  }
2586  return 0;
2587 }
2588 
2589 /* support for mallopt */
2590 static int change_mparam(int param_number, int value) {
2591  size_t val = (size_t)value;
2592  init_mparams();
2593  switch(param_number) {
2594  case M_TRIM_THRESHOLD:
2595  mparams.trim_threshold = val;
2596  return 1;
2597  case M_GRANULARITY:
2598  if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2599  mparams.granularity = val;
2600  return 1;
2601  }
2602  else
2603  return 0;
2604  case M_MMAP_THRESHOLD:
2605  mparams.mmap_threshold = val;
2606  return 1;
2607  default:
2608  return 0;
2609  }
2610 }
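/*
   Example (editorial): change_mparam(M_GRANULARITY, 65536) returns 1 only if
   65536 is at least the system page size and is a power of two;
   change_mparam(M_GRANULARITY, 3000) returns 0 and leaves mparams unchanged.
*/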
2611 
2612 #if DEBUG
2613 /* ------------------------- Debugging Support --------------------------- */
2614 
2615 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2616 static void do_check_any_chunk(mstate m, mchunkptr p) {
2617  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2618  assert(ok_address(m, p));
2619 }
2620 
2621 /* Check properties of top chunk */
2622 static void do_check_top_chunk(mstate m, mchunkptr p) {
2623  msegmentptr sp = segment_holding(m, (char*)p);
2624  size_t sz = chunksize(p);
2625  assert(sp != 0);
2626  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2627  assert(ok_address(m, p));
2628  assert(sz == m->topsize);
2629  assert(sz > 0);
2630  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2631  assert(pinuse(p));
2632  assert(!next_pinuse(p));
2633 }
2634 
2635 /* Check properties of (inuse) mmapped chunks */
2636 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2637  size_t sz = chunksize(p);
2638  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2639  assert(is_mmapped(p));
2640  assert(use_mmap(m));
2641  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2642  assert(ok_address(m, p));
2643  assert(!is_small(sz));
2644  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2645  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2646  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2647 }
2648 
2649 /* Check properties of inuse chunks */
2650 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2651  do_check_any_chunk(m, p);
2652  assert(cinuse(p));
2653  assert(next_pinuse(p));
2654  /* If not pinuse and not mmapped, previous chunk has OK offset */
2655  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2656  if (is_mmapped(p))
2657  do_check_mmapped_chunk(m, p);
2658 }
2659 
2660 /* Check properties of free chunks */
2661 static void do_check_free_chunk(mstate m, mchunkptr p) {
2662  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2663  mchunkptr next = chunk_plus_offset(p, sz);
2664  do_check_any_chunk(m, p);
2665  assert(!cinuse(p));
2666  assert(!next_pinuse(p));
2667  assert (!is_mmapped(p));
2668  if (p != m->dv && p != m->top) {
2669  if (sz >= MIN_CHUNK_SIZE) {
2670  assert((sz & CHUNK_ALIGN_MASK) == 0);
2671  assert(is_aligned(chunk2mem(p)));
2672  assert(next->prev_foot == sz);
2673  assert(pinuse(p));
2674  assert (next == m->top || cinuse(next));
2675  assert(p->fd->bk == p);
2676  assert(p->bk->fd == p);
2677  }
2678  else /* markers are always of size SIZE_T_SIZE */
2679  assert(sz == SIZE_T_SIZE);
2680  }
2681 }
2682 
2683 /* Check properties of malloced chunks at the point they are malloced */
2684 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2685  if (mem != 0) {
2686  mchunkptr p = mem2chunk(mem);
2687  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2688  do_check_inuse_chunk(m, p);
2689  assert((sz & CHUNK_ALIGN_MASK) == 0);
2690  assert(sz >= MIN_CHUNK_SIZE);
2691  assert(sz >= s);
2692  /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2693  assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2694  }
2695 }
2696 
2697 /* Check a tree and its subtrees. */
2698 static void do_check_tree(mstate m, tchunkptr t) {
2699  tchunkptr head = 0;
2700  tchunkptr u = t;
2701  bindex_t tindex = t->index;
2702  size_t tsize = chunksize(t);
2703  bindex_t idx;
2704  compute_tree_index(tsize, idx);
2705  assert(tindex == idx);
2706  assert(tsize >= MIN_LARGE_SIZE);
2707  assert(tsize >= minsize_for_tree_index(idx));
2708  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2709 
2710  do { /* traverse through chain of same-sized nodes */
2711  do_check_any_chunk(m, ((mchunkptr)u));
2712  assert(u->index == tindex);
2713  assert(chunksize(u) == tsize);
2714  assert(!cinuse(u));
2715  assert(!next_pinuse(u));
2716  assert(u->fd->bk == u);
2717  assert(u->bk->fd == u);
2718  if (u->parent == 0) {
2719  assert(u->child[0] == 0);
2720  assert(u->child[1] == 0);
2721  }
2722  else {
2723  assert(head == 0); /* only one node on chain has parent */
2724  head = u;
2725  assert(u->parent != u);
2726  assert (u->parent->child[0] == u ||
2727  u->parent->child[1] == u ||
2728  *((tbinptr*)(u->parent)) == u);
2729  if (u->child[0] != 0) {
2730  assert(u->child[0]->parent == u);
2731  assert(u->child[0] != u);
2732  do_check_tree(m, u->child[0]);
2733  }
2734  if (u->child[1] != 0) {
2735  assert(u->child[1]->parent == u);
2736  assert(u->child[1] != u);
2737  do_check_tree(m, u->child[1]);
2738  }
2739  if (u->child[0] != 0 && u->child[1] != 0) {
2740  assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2741  }
2742  }
2743  u = u->fd;
2744  } while (u != t);
2745  assert(head != 0);
2746 }
2747 
2748 /* Check all the chunks in a treebin. */
2749 static void do_check_treebin(mstate m, bindex_t i) {
2750  tbinptr* tb = treebin_at(m, i);
2751  tchunkptr t = *tb;
2752  int empty = (m->treemap & (1U << i)) == 0;
2753  if (t == 0)
2754  assert(empty);
2755  if (!empty)
2756  do_check_tree(m, t);
2757 }
2758 
2759 /* Check all the chunks in a smallbin. */
2760 static void do_check_smallbin(mstate m, bindex_t i) {
2761  sbinptr b = smallbin_at(m, i);
2762  mchunkptr p = b->bk;
2763  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2764  if (p == b)
2765  assert(empty);
2766  if (!empty) {
2767  for (; p != b; p = p->bk) {
2768  size_t size = chunksize(p);
2769  mchunkptr q;
2770  /* each chunk claims to be free */
2771  do_check_free_chunk(m, p);
2772  /* chunk belongs in bin */
2773  assert(small_index(size) == i);
2774  assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2775  /* chunk is followed by an inuse chunk */
2776  q = next_chunk(p);
2777  if (q->head != FENCEPOST_HEAD)
2778  do_check_inuse_chunk(m, q);
2779  }
2780  }
2781 }
2782 
2783 /* Find x in a bin. Used in other check functions. */
2784 static int bin_find(mstate m, mchunkptr x) {
2785  size_t size = chunksize(x);
2786  if (is_small(size)) {
2787  bindex_t sidx = small_index(size);
2788  sbinptr b = smallbin_at(m, sidx);
2789  if (smallmap_is_marked(m, sidx)) {
2790  mchunkptr p = b;
2791  do {
2792  if (p == x)
2793  return 1;
2794  } while ((p = p->fd) != b);
2795  }
2796  }
2797  else {
2798  bindex_t tidx;
2799  compute_tree_index(size, tidx);
2800  if (treemap_is_marked(m, tidx)) {
2801  tchunkptr t = *treebin_at(m, tidx);
2802  size_t sizebits = size << leftshift_for_tree_index(tidx);
2803  while (t != 0 && chunksize(t) != size) {
2804  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2805  sizebits <<= 1;
2806  }
2807  if (t != 0) {
2808  tchunkptr u = t;
2809  do {
2810  if (u == (tchunkptr)x)
2811  return 1;
2812  } while ((u = u->fd) != t);
2813  }
2814  }
2815  }
2816  return 0;
2817 }
2818 
2819 /* Traverse each chunk and check it; return total */
2820 static size_t traverse_and_check(mstate m) {
2821  size_t sum = 0;
2822  if (is_initialized(m)) {
2823  msegmentptr s = &m->seg;
2824  sum += m->topsize + TOP_FOOT_SIZE;
2825  while (s != 0) {
2826  mchunkptr q = align_as_chunk(s->base);
2827  mchunkptr lastq = 0;
2828  assert(pinuse(q));
2829  while (segment_holds(s, q) &&
2830  q != m->top && q->head != FENCEPOST_HEAD) {
2831  sum += chunksize(q);
2832  if (cinuse(q)) {
2833  assert(!bin_find(m, q));
2834  do_check_inuse_chunk(m, q);
2835  }
2836  else {
2837  assert(q == m->dv || bin_find(m, q));
2838  assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2839  do_check_free_chunk(m, q);
2840  }
2841  lastq = q;
2842  q = next_chunk(q);
2843  }
2844  s = s->next;
2845  }
2846  }
2847  return sum;
2848 }
2849 
2850 /* Check all properties of malloc_state. */
2851 static void do_check_malloc_state(mstate m) {
2852  bindex_t i;
2853  size_t total;
2854  /* check bins */
2855  for (i = 0; i < NSMALLBINS; ++i)
2856  do_check_smallbin(m, i);
2857  for (i = 0; i < NTREEBINS; ++i)
2858  do_check_treebin(m, i);
2859 
2860  if (m->dvsize != 0) { /* check dv chunk */
2861  do_check_any_chunk(m, m->dv);
2862  assert(m->dvsize == chunksize(m->dv));
2863  assert(m->dvsize >= MIN_CHUNK_SIZE);
2864  assert(bin_find(m, m->dv) == 0);
2865  }
2866 
2867  if (m->top != 0) { /* check top chunk */
2868  do_check_top_chunk(m, m->top);
2869  assert(m->topsize == chunksize(m->top));
2870  assert(m->topsize > 0);
2871  assert(bin_find(m, m->top) == 0);
2872  }
2873 
2874  total = traverse_and_check(m);
2875  assert(total <= m->footprint);
2876  assert(m->footprint <= m->max_footprint);
2877 }
2878 #endif /* DEBUG */
2879 
2880 /* ----------------------------- statistics ------------------------------ */
2881 
2882 #if !NO_MALLINFO
2883 static struct mallinfo internal_mallinfo(mstate m) {
2884  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2885  if (!PREACTION(m)) {
2886  check_malloc_state(m);
2887  if (is_initialized(m)) {
2888  size_t nfree = SIZE_T_ONE; /* top always free */
2889  size_t mfree = m->topsize + TOP_FOOT_SIZE;
2890  size_t sum = mfree;
2891  msegmentptr s = &m->seg;
2892  while (s != 0) {
2893  mchunkptr q = align_as_chunk(s->base);
2894  while (segment_holds(s, q) &&
2895  q != m->top && q->head != FENCEPOST_HEAD) {
2896  size_t sz = chunksize(q);
2897  sum += sz;
2898  if (!cinuse(q)) {
2899  mfree += sz;
2900  ++nfree;
2901  }
2902  q = next_chunk(q);
2903  }
2904  s = s->next;
2905  }
2906 
2907  nm.arena = sum;
2908  nm.ordblks = nfree;
2909  nm.hblkhd = m->footprint - sum;
2910  nm.usmblks = m->max_footprint;
2911  nm.uordblks = m->footprint - mfree;
2912  nm.fordblks = mfree;
2913  nm.keepcost = m->topsize;
2914  }
2915 
2916  POSTACTION(m);
2917  }
2918  return nm;
2919 }
2920 #endif /* !NO_MALLINFO */
2921 
2922 static void internal_malloc_stats(mstate m) {
2923  if (!PREACTION(m)) {
2924  size_t maxfp = 0;
2925  size_t fp = 0;
2926  size_t used = 0;
2927  check_malloc_state(m);
2928  if (is_initialized(m)) {
2929  msegmentptr s = &m->seg;
2930  maxfp = m->max_footprint;
2931  fp = m->footprint;
2932  used = fp - (m->topsize + TOP_FOOT_SIZE);
2933 
2934  while (s != 0) {
2935  mchunkptr q = align_as_chunk(s->base);
2936  while (segment_holds(s, q) &&
2937  q != m->top && q->head != FENCEPOST_HEAD) {
2938  if (!cinuse(q))
2939  used -= chunksize(q);
2940  q = next_chunk(q);
2941  }
2942  s = s->next;
2943  }
2944  }
2945 
2946  fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2947  fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2948  fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2949 
2950  POSTACTION(m);
2951  }
2952 }
2953 
2954 /* ----------------------- Operations on smallbins ----------------------- */
2955 
2956 /*
2957  Various forms of linking and unlinking are defined as macros. Even
2958  the ones for trees, which are very long but have very short typical
2959  paths. This is ugly but reduces reliance on inlining support of
2960  compilers.
2961 */
2962 
2963 /* Link a free chunk into a smallbin */
2964 #define insert_small_chunk(M, P, S) {\
2965  bindex_t I = small_index(S);\
2966  mchunkptr B = smallbin_at(M, I);\
2967  mchunkptr F = B;\
2968  assert(S >= MIN_CHUNK_SIZE);\
2969  if (!smallmap_is_marked(M, I))\
2970  mark_smallmap(M, I);\
2971  else if (RTCHECK(ok_address(M, B->fd)))\
2972  F = B->fd;\
2973  else {\
2974  CORRUPTION_ERROR_ACTION(M);\
2975  }\
2976  B->fd = P;\
2977  F->bk = P;\
2978  P->fd = F;\
2979  P->bk = B;\
2980 }
2981 
2982 /* Unlink a chunk from a smallbin */
2983 #define unlink_small_chunk(M, P, S) {\
2984  mchunkptr F = P->fd;\
2985  mchunkptr B = P->bk;\
2986  bindex_t I = small_index(S);\
2987  assert(P != B);\
2988  assert(P != F);\
2989  assert(chunksize(P) == small_index2size(I));\
2990  if (F == B)\
2991  clear_smallmap(M, I);\
2992  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2993  (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2994  F->bk = B;\
2995  B->fd = F;\
2996  }\
2997  else {\
2998  CORRUPTION_ERROR_ACTION(M);\
2999  }\
3000 }
3001 
3002 /* Unlink the first chunk from a smallbin */
3003 #define unlink_first_small_chunk(M, B, P, I) {\
3004  mchunkptr F = P->fd;\
3005  assert(P != B);\
3006  assert(P != F);\
3007  assert(chunksize(P) == small_index2size(I));\
3008  if (B == F)\
3009  clear_smallmap(M, I);\
3010  else if (RTCHECK(ok_address(M, F))) {\
3011  B->fd = F;\
3012  F->bk = B;\
3013  }\
3014  else {\
3015  CORRUPTION_ERROR_ACTION(M);\
3016  }\
3017 }
3018 
3019 /* Replace dv node, binning the old one */
3020 /* Used only when dvsize known to be small */
3021 #define replace_dv(M, P, S) {\
3022  size_t DVS = M->dvsize;\
3023  if (DVS != 0) {\
3024  mchunkptr DV = M->dv;\
3025  assert(is_small(DVS));\
3026  insert_small_chunk(M, DV, DVS);\
3027  }\
3028  M->dvsize = S;\
3029  M->dv = P;\
3030 }
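
/*
  Illustrative sketch (not part of dlmalloc) of the circular smallbin layout
  that the macros above maintain: the bin header acts as a sentinel, so an
  empty bin has b->fd == b->bk == b, and a walk along fd links always returns
  to the sentinel.
*/
static size_t count_smallbin_chunks(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);   /* sentinel header for bin i */
  mchunkptr p = b->fd;
  size_t n = 0;
  while (p != b) {                 /* stop when the walk reaches the sentinel */
    ++n;
    p = p->fd;
  }
  return n;
}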
3031 
3032 /* ------------------------- Operations on trees ------------------------- */
3033 
3034 /* Insert chunk into tree */
3035 #define insert_large_chunk(M, X, S) {\
3036  tbinptr* H;\
3037  bindex_t I;\
3038  compute_tree_index(S, I);\
3039  H = treebin_at(M, I);\
3040  X->index = I;\
3041  X->child[0] = X->child[1] = 0;\
3042  if (!treemap_is_marked(M, I)) {\
3043  mark_treemap(M, I);\
3044  *H = X;\
3045  X->parent = (tchunkptr)H;\
3046  X->fd = X->bk = X;\
3047  }\
3048  else {\
3049  tchunkptr T = *H;\
3050  size_t K = S << leftshift_for_tree_index(I);\
3051  for (;;) {\
3052  if (chunksize(T) != S) {\
3053  tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3054  K <<= 1;\
3055  if (*C != 0)\
3056  T = *C;\
3057  else if (RTCHECK(ok_address(M, C))) {\
3058  *C = X;\
3059  X->parent = T;\
3060  X->fd = X->bk = X;\
3061  break;\
3062  }\
3063  else {\
3064  CORRUPTION_ERROR_ACTION(M);\
3065  break;\
3066  }\
3067  }\
3068  else {\
3069  tchunkptr F = T->fd;\
3070  if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3071  T->fd = F->bk = X;\
3072  X->fd = F;\
3073  X->bk = T;\
3074  X->parent = 0;\
3075  break;\
3076  }\
3077  else {\
3078  CORRUPTION_ERROR_ACTION(M);\
3079  break;\
3080  }\
3081  }\
3082  }\
3083  }\
3084 }
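
/*
  Illustrative sketch (not part of dlmalloc) of the bitwise-trie descent that
  insert_large_chunk performs with K above: after compute_tree_index picks a
  bin, the remaining size bits, taken from the top one at a time, choose
  child[0] or child[1] at each level until an equal-sized chunk or an empty
  slot is reached.  s is assumed to be a tree-bin (large) size.
*/
static tchunkptr find_trie_slot(mstate m, size_t s) {
  bindex_t i;
  tchunkptr t;
  size_t k;
  compute_tree_index(s, i);
  t = *treebin_at(m, i);
  k = s << leftshift_for_tree_index(i);
  while (t != 0 && chunksize(t) != s) {
    t = t->child[(k >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; /* next bit picks the child */
    k <<= 1;
  }
  return t; /* chunk of size s if one is binned, else 0 */
}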
3085 
3086 /*
3087  Unlink steps:
3088 
3089  1. If x is a chained node, unlink it from its same-sized fd/bk links
3090  and choose its bk node as its replacement.
3091  2. If x was the last node of its size, but not a leaf node, it must
3092  be replaced with a leaf node (not merely one with an open left or
3093  right), to make sure that lefts and rights of descendants
3094  correspond properly to bit masks. We use the rightmost descendant
3095  of x. We could use any other leaf, but this is easy to locate and
3096  tends to counteract removal of leftmosts elsewhere, and so keeps
3097  paths shorter than minimally guaranteed. This doesn't loop much
3098  because on average a node in a tree is near the bottom.
3099  3. If x is the base of a chain (i.e., has parent links) relink
3100  x's parent and children to x's replacement (or null if none).
3101 */
3102 
3103 #define unlink_large_chunk(M, X) {\
3104  tchunkptr XP = X->parent;\
3105  tchunkptr R;\
3106  if (X->bk != X) {\
3107  tchunkptr F = X->fd;\
3108  R = X->bk;\
3109  if (RTCHECK(ok_address(M, F))) {\
3110  F->bk = R;\
3111  R->fd = F;\
3112  }\
3113  else {\
3114  CORRUPTION_ERROR_ACTION(M);\
3115  }\
3116  }\
3117  else {\
3118  tchunkptr* RP;\
3119  if (((R = *(RP = &(X->child[1]))) != 0) ||\
3120  ((R = *(RP = &(X->child[0]))) != 0)) {\
3121  tchunkptr* CP;\
3122  while ((*(CP = &(R->child[1])) != 0) ||\
3123  (*(CP = &(R->child[0])) != 0)) {\
3124  R = *(RP = CP);\
3125  }\
3126  if (RTCHECK(ok_address(M, RP)))\
3127  *RP = 0;\
3128  else {\
3129  CORRUPTION_ERROR_ACTION(M);\
3130  }\
3131  }\
3132  }\
3133  if (XP != 0) {\
3134  tbinptr* H = treebin_at(M, X->index);\
3135  if (X == *H) {\
3136  if ((*H = R) == 0) \
3137  clear_treemap(M, X->index);\
3138  }\
3139  else if (RTCHECK(ok_address(M, XP))) {\
3140  if (XP->child[0] == X) \
3141  XP->child[0] = R;\
3142  else \
3143  XP->child[1] = R;\
3144  }\
3145  else\
3146  CORRUPTION_ERROR_ACTION(M);\
3147  if (R != 0) {\
3148  if (RTCHECK(ok_address(M, R))) {\
3149  tchunkptr C0, C1;\
3150  R->parent = XP;\
3151  if ((C0 = X->child[0]) != 0) {\
3152  if (RTCHECK(ok_address(M, C0))) {\
3153  R->child[0] = C0;\
3154  C0->parent = R;\
3155  }\
3156  else\
3157  CORRUPTION_ERROR_ACTION(M);\
3158  }\
3159  if ((C1 = X->child[1]) != 0) {\
3160  if (RTCHECK(ok_address(M, C1))) {\
3161  R->child[1] = C1;\
3162  C1->parent = R;\
3163  }\
3164  else\
3165  CORRUPTION_ERROR_ACTION(M);\
3166  }\
3167  }\
3168  else\
3169  CORRUPTION_ERROR_ACTION(M);\
3170  }\
3171  }\
3172 }
3173 
3174 /* Relays to large vs small bin operations */
3175 
3176 #define insert_chunk(M, P, S)\
3177  if (is_small(S)) insert_small_chunk(M, P, S)\
3178  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3179 
3180 #define unlink_chunk(M, P, S)\
3181  if (is_small(S)) unlink_small_chunk(M, P, S)\
3182  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3183 
3184 
3185 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3186 
3187 #if ONLY_MSPACES
3188 #define internal_malloc(m, b) mspace_malloc(m, b)
3189 #define internal_free(m, mem) mspace_free(m,mem);
3190 #else /* ONLY_MSPACES */
3191 #if MSPACES
3192 #define internal_malloc(m, b)\
3193  (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3194 #define internal_free(m, mem)\
3195  if (m == gm) dlfree(mem); else mspace_free(m,mem);
3196 #else /* MSPACES */
3197 #define internal_malloc(m, b) dlmalloc(b)
3198 #define internal_free(m, mem) dlfree(mem)
3199 #endif /* MSPACES */
3200 #endif /* ONLY_MSPACES */
3201 
3202 /* ----------------------- Direct-mmapping chunks ----------------------- */
3203 
3204 /*
3205  Directly mmapped chunks are set up with an offset to the start of
3206  the mmapped region stored in the prev_foot field of the chunk. This
3207  allows reconstruction of the required argument to MUNMAP when freed,
3208  and also allows adjustment of the returned chunk to meet alignment
3209  requirements (especially in memalign). There is also enough space
3210  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3211  the PINUSE bit so frees can be checked.
3212 */
3213 
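/*
  Sketch (not part of dlmalloc) of how the offset saved in prev_foot is used
  when an mmapped chunk is freed: it recovers the base address and total
  length of the original mapping, matching the munmap calls in the free paths
  further below.
*/
static void recover_mmap_region(mchunkptr p, char** base, size_t* length) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;   /* distance back to the mmap base */
  *base = (char*)p - offset;
  *length = chunksize(p) + offset + MMAP_FOOT_PAD;  /* chunk plus leading offset and trailer */
}
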
3214 /* Malloc using mmap */
3215 static void* mmap_alloc(mstate m, size_t nb) {
3216  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3217  if (mmsize > nb) { /* Check for wrap around 0 */
3218  char* mm = (char*)(DIRECT_MMAP(mmsize));
3219  if (mm != CMFAIL) {
3220  size_t offset = align_offset(chunk2mem(mm));
3221  size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3222  mchunkptr p = (mchunkptr)(mm + offset);
3223  p->prev_foot = offset | IS_MMAPPED_BIT;
3224  (p)->head = (psize|CINUSE_BIT);
3225  mark_inuse_foot(m, p, psize);
3226  chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3227  chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3228 
3229  if (mm < m->least_addr)
3230  m->least_addr = mm;
3231  if ((m->footprint += mmsize) > m->max_footprint)
3232  m->max_footprint = m->footprint;
3233  assert(is_aligned(chunk2mem(p)));
3234  check_mmapped_chunk(m, p);
3235  return chunk2mem(p);
3236  }
3237  }
3238  return 0;
3239 }
3240 
3241 /* Realloc using mmap */
3242 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3243  size_t oldsize = chunksize(oldp);
3244  if (is_small(nb)) /* Can't shrink mmap regions below small size */
3245  return 0;
3246  /* Keep old chunk if big enough but not too big */
3247  if (oldsize >= nb + SIZE_T_SIZE &&
3248  (oldsize - nb) <= (mparams.granularity << 1))
3249  return oldp;
3250  else {
3251  size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3252  size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3253  size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3254  CHUNK_ALIGN_MASK);
3255  char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3256  oldmmsize, newmmsize, 1);
3257  if (cp != CMFAIL) {
3258  mchunkptr newp = (mchunkptr)(cp + offset);
3259  size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3260  newp->head = (psize|CINUSE_BIT);
3261  mark_inuse_foot(m, newp, psize);
3262  chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3263  chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3264 
3265  if (cp < m->least_addr)
3266  m->least_addr = cp;
3267  if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3268  m->max_footprint = m->footprint;
3269  check_mmapped_chunk(m, newp);
3270  return newp;
3271  }
3272  }
3273  return 0;
3274 }
3275 
3276 /* -------------------------- mspace management -------------------------- */
3277 
3278 /* Initialize top chunk and its size */
3279 static void init_top(mstate m, mchunkptr p, size_t psize) {
3280  /* Ensure alignment */
3281  size_t offset = align_offset(chunk2mem(p));
3282  p = (mchunkptr)((char*)p + offset);
3283  psize -= offset;
3284 
3285  m->top = p;
3286  m->topsize = psize;
3287  p->head = psize | PINUSE_BIT;
3288  /* set size of fake trailing chunk holding overhead space only once */
3289  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3290  m->trim_check = mparams.trim_threshold; /* reset on each update */
3291 }
3292 
3293 /* Initialize bins for a new mstate that is otherwise zeroed out */
3294 static void init_bins(mstate m) {
3295  /* Establish circular links for smallbins */
3296  bindex_t i;
3297  for (i = 0; i < NSMALLBINS; ++i) {
3298  sbinptr bin = smallbin_at(m,i);
3299  bin->fd = bin->bk = bin;
3300  }
3301 }
3302 
3303 #if PROCEED_ON_ERROR
3304 
3305 /* default corruption action */
3306 static void reset_on_error(mstate m) {
3307  int i;
3308  ++malloc_corruption_error_count;
3309  /* Reinitialize fields to forget about all memory */
3310  m->smallbins = m->treebins = 0;
3311  m->dvsize = m->topsize = 0;
3312  m->seg.base = 0;
3313  m->seg.size = 0;
3314  m->seg.next = 0;
3315  m->top = m->dv = 0;
3316  for (i = 0; i < NTREEBINS; ++i)
3317  *treebin_at(m, i) = 0;
3318  init_bins(m);
3319 }
3320 #endif /* PROCEED_ON_ERROR */
3321 
3322 /* Allocate chunk and prepend remainder with chunk in successor base. */
3323 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3324  size_t nb) {
3325  mchunkptr p = align_as_chunk(newbase);
3326  mchunkptr oldfirst = align_as_chunk(oldbase);
3327  size_t psize = (char*)oldfirst - (char*)p;
3328  mchunkptr q = chunk_plus_offset(p, nb);
3329  size_t qsize = psize - nb;
3330  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3331 
3332  assert((char*)oldfirst > (char*)q);
3333  assert(pinuse(oldfirst));
3334  assert(qsize >= MIN_CHUNK_SIZE);
3335 
3336  /* consolidate remainder with first chunk of old base */
3337  if (oldfirst == m->top) {
3338  size_t tsize = m->topsize += qsize;
3339  m->top = q;
3340  q->head = tsize | PINUSE_BIT;
3341  check_top_chunk(m, q);
3342  }
3343  else if (oldfirst == m->dv) {
3344  size_t dsize = m->dvsize += qsize;
3345  m->dv = q;
3347  }
3348  else {
3349  if (!cinuse(oldfirst)) {
3350  size_t nsize = chunksize(oldfirst);
3351  unlink_chunk(m, oldfirst, nsize);
3352  oldfirst = chunk_plus_offset(oldfirst, nsize);
3353  qsize += nsize;
3354  }
3355  set_free_with_pinuse(q, qsize, oldfirst);
3356  insert_chunk(m, q, qsize);
3357  check_free_chunk(m, q);
3358  }
3359 
3360  check_malloced_chunk(m, chunk2mem(p), nb);
3361  return chunk2mem(p);
3362 }
3363 
3364 
3365 /* Add a segment to hold a new noncontiguous region */
3366 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3367  /* Determine locations and sizes of segment, fenceposts, old top */
3368  char* old_top = (char*)m->top;
3369  msegmentptr oldsp = segment_holding(m, old_top);
3370  char* old_end = oldsp->base + oldsp->size;
3371  size_t ssize = pad_request(sizeof(struct malloc_segment));
3372  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3373  size_t offset = align_offset(chunk2mem(rawsp));
3374  char* asp = rawsp + offset;
3375  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3376  mchunkptr sp = (mchunkptr)csp;
3377  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3378  mchunkptr tnext = chunk_plus_offset(sp, ssize);
3379  mchunkptr p = tnext;
3380  int nfences = 0;
3381 
3382  /* reset top to new space */
3383  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3384 
3385  /* Set up segment record */
3386  assert(is_aligned(ss));
3387  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3388  *ss = m->seg; /* Push current record */
3389  m->seg.base = tbase;
3390  m->seg.size = tsize;
3391  (void)set_segment_flags(&m->seg, mmapped);
3392  m->seg.next = ss;
3393 
3394  /* Insert trailing fenceposts */
3395  for (;;) {
3396  mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3397  p->head = FENCEPOST_HEAD;
3398  ++nfences;
3399  if ((char*)(&(nextp->head)) < old_end)
3400  p = nextp;
3401  else
3402  break;
3403  }
3404  assert(nfences >= 2);
3405 
3406  /* Insert the rest of old top into a bin as an ordinary free chunk */
3407  if (csp != old_top) {
3408  mchunkptr q = (mchunkptr)old_top;
3409  size_t psize = csp - old_top;
3410  mchunkptr tn = chunk_plus_offset(q, psize);
3411  set_free_with_pinuse(q, psize, tn);
3412  insert_chunk(m, q, psize);
3413  }
3414 
3415  check_top_chunk(m, m->top);
3416 }
3417 
3418 /* -------------------------- System allocation -------------------------- */
3419 
3420 /* Get memory from system using MORECORE or MMAP */
3421 static void* sys_alloc(mstate m, size_t nb) {
3422  char* tbase = CMFAIL;
3423  size_t tsize = 0;
3424  flag_t mmap_flag = 0;
3425 
3426  init_mparams();
3427 
3428  /* Directly map large chunks */
3429  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3430  void* mem = mmap_alloc(m, nb);
3431  if (mem != 0)
3432  return mem;
3433  }
3434 
3435  /*
3436  Try getting memory in any of three ways (in most-preferred to
3437  least-preferred order):
3438  1. A call to MORECORE that can normally contiguously extend memory.
3439  (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3440  main space is mmapped or a previous contiguous call failed)
3441  2. A call to MMAP new space (disabled if not HAVE_MMAP).
3442  Note that under the default settings, if MORECORE is unable to
3443  fulfill a request, and HAVE_MMAP is true, then mmap is
3444  used as a noncontiguous system allocator. This is a useful backup
3445  strategy for systems with holes in address spaces -- in this case
3446  sbrk cannot contiguously expand the heap, but mmap may be able to
3447  find space.
3448  3. A call to MORECORE that cannot usually contiguously extend memory.
3449  (disabled if not HAVE_MORECORE)
3450  */
3451 
3452  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3453  char* br = CMFAIL;
3454  msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3455  size_t asize = 0;
3456  ACQUIRE_MORECORE_LOCK();
3457 
3458  if (ss == 0) { /* First time through or recovery */
3459  char* base = (char*)CALL_MORECORE(0);
3460  if (base != CMFAIL) {
3461  asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3462  /* Adjust to end on a page boundary */
3463  if (!is_page_aligned(base))
3464  asize += (page_align((size_t)base) - (size_t)base);
3465  /* Can't call MORECORE if size is negative when treated as signed */
3466  if (asize < HALF_MAX_SIZE_T &&
3467  (br = (char*)(CALL_MORECORE(asize))) == base) {
3468  tbase = base;
3469  tsize = asize;
3470  }
3471  }
3472  }
3473  else {
3474  /* Subtract out existing available top space from MORECORE request. */
3475  asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3476  /* Use mem here only if it did contiguously extend old space */
3477  if (asize < HALF_MAX_SIZE_T &&
3478  (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3479  tbase = br;
3480  tsize = asize;
3481  }
3482  }
3483 
3484  if (tbase == CMFAIL) { /* Cope with partial failure */
3485  if (br != CMFAIL) { /* Try to use/extend the space we did get */
3486  if (asize < HALF_MAX_SIZE_T &&
3487  asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3488  size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3489  if (esize < HALF_MAX_SIZE_T) {
3490  char* end = (char*)CALL_MORECORE(esize);
3491  if (end != CMFAIL)
3492  asize += esize;
3493  else { /* Can't use; try to release */
3494  (void)CALL_MORECORE(-asize);
3495  br = CMFAIL;
3496  }
3497  }
3498  }
3499  }
3500  if (br != CMFAIL) { /* Use the space we did get */
3501  tbase = br;
3502  tsize = asize;
3503  }
3504  else
3505  disable_contiguous(m); /* Don't try contiguous path in the future */
3506  }
3507 
3509  }
3510 
3511  if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3512  size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3513  size_t rsize = granularity_align(req);
3514  if (rsize > nb) { /* Fail if wraps around zero */
3515  char* mp = (char*)(CALL_MMAP(rsize));
3516  if (mp != CMFAIL) {
3517  tbase = mp;
3518  tsize = rsize;
3519  mmap_flag = IS_MMAPPED_BIT;
3520  }
3521  }
3522  }
3523 
3524  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3525  size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3526  if (asize < HALF_MAX_SIZE_T) {
3527  char* br = CMFAIL;
3528  char* end = CMFAIL;
3529  ACQUIRE_MORECORE_LOCK();
3530  br = (char*)(CALL_MORECORE(asize));
3531  end = (char*)(CALL_MORECORE(0));
3532  RELEASE_MORECORE_LOCK();
3533  if (br != CMFAIL && end != CMFAIL && br < end) {
3534  size_t ssize = end - br;
3535  if (ssize > nb + TOP_FOOT_SIZE) {
3536  tbase = br;
3537  tsize = ssize;
3538  }
3539  }
3540  }
3541  }
3542 
3543  if (tbase != CMFAIL) {
3544 
3545  if ((m->footprint += tsize) > m->max_footprint)
3546  m->max_footprint = m->footprint;
3547 
3548  if (!is_initialized(m)) { /* first-time initialization */
3549  m->seg.base = m->least_addr = tbase;
3550  m->seg.size = tsize;
3551  (void)set_segment_flags(&m->seg, mmap_flag);
3552  m->magic = mparams.magic;
3553  init_bins(m);
3554  if (is_global(m))
3555  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3556  else {
3557  /* Offset top by embedded malloc_state */
3558  mchunkptr mn = next_chunk(mem2chunk(m));
3559  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3560  }
3561  }
3562 
3563  else {
3564  /* Try to merge with an existing segment */
3565  msegmentptr sp = &m->seg;
3566  while (sp != 0 && tbase != sp->base + sp->size)
3567  sp = sp->next;
3568  if (sp != 0 &&
3569  !is_extern_segment(sp) &&
3570  check_segment_merge(sp, tbase, tsize) &&
3571  (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
3572  segment_holds(sp, m->top)) { /* append */
3573  sp->size += tsize;
3574  init_top(m, m->top, m->topsize + tsize);
3575  }
3576  else {
3577  if (tbase < m->least_addr)
3578  m->least_addr = tbase;
3579  sp = &m->seg;
3580  while (sp != 0 && sp->base != tbase + tsize)
3581  sp = sp->next;
3582  if (sp != 0 &&
3583  !is_extern_segment(sp) &&
3584  check_segment_merge(sp, tbase, tsize) &&
3585  (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
3586  char* oldbase = sp->base;
3587  sp->base = tbase;
3588  sp->size += tsize;
3589  return prepend_alloc(m, tbase, oldbase, nb);
3590  }
3591  else
3592  add_segment(m, tbase, tsize, mmap_flag);
3593  }
3594  }
3595 
3596  if (nb < m->topsize) { /* Allocate from new or extended top space */
3597  size_t rsize = m->topsize -= nb;
3598  mchunkptr p = m->top;
3599  mchunkptr r = m->top = chunk_plus_offset(p, nb);
3600  r->head = rsize | PINUSE_BIT;
3601  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3602  check_top_chunk(m, m->top);
3603  check_malloced_chunk(m, chunk2mem(p), nb);
3604  return chunk2mem(p);
3605  }
3606  }
3607 
3608  MALLOC_FAILURE_ACTION;
3609  return 0;
3610 }
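
/*
  Illustrative helper (not part of dlmalloc) mirroring the request sizing used
  in the MORECORE/MMAP paths above, under the assumption that granularity_align
  rounds its argument up to a multiple of mparams.granularity (a power of two
  established by init_mparams).
*/
static size_t example_sys_request(size_t nb) {
  size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;          /* room for top-chunk overhead */
  size_t g = mparams.granularity;                        /* assumed to be a power of two */
  return (req + (g - SIZE_T_ONE)) & ~(g - SIZE_T_ONE);   /* round up to a granularity multiple */
}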
3611 
3612 /* ----------------------- system deallocation -------------------------- */
3613 
3614 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3615 static size_t release_unused_segments(mstate m) {
3616  size_t released = 0;
3617  msegmentptr pred = &m->seg;
3618  msegmentptr sp = pred->next;
3619  while (sp != 0) {
3620  char* base = sp->base;
3621  size_t size = sp->size;
3622  msegmentptr next = sp->next;
3623  if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3624  mchunkptr p = align_as_chunk(base);
3625  size_t psize = chunksize(p);
3626  /* Can unmap if first chunk holds entire segment and not pinned */
3627  if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3628  tchunkptr tp = (tchunkptr)p;
3629  assert(segment_holds(sp, (char*)sp));
3630  if (p == m->dv) {
3631  m->dv = 0;
3632  m->dvsize = 0;
3633  }
3634  else {
3635  unlink_large_chunk(m, tp);
3636  }
3637  if (CALL_MUNMAP(base, size) == 0) {
3638  released += size;
3639  m->footprint -= size;
3640  /* unlink obsoleted record */
3641  sp = pred;
3642  sp->next = next;
3643  }
3644  else { /* back out if cannot unmap */
3645  insert_large_chunk(m, tp, psize);
3646  }
3647  }
3648  }
3649  pred = sp;
3650  sp = next;
3651  }
3652  return released;
3653 }
3654 
3655 static int sys_trim(mstate m, size_t pad) {
3656  size_t released = 0;
3657  if (pad < MAX_REQUEST && is_initialized(m)) {
3658  pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3659 
3660  if (m->topsize > pad) {
3661  /* Shrink top space in granularity-size units, keeping at least one */
3662  size_t unit = mparams.granularity;
3663  size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3664  SIZE_T_ONE) * unit;
3665  msegmentptr sp = segment_holding(m, (char*)m->top);
3666 
3667  if (!is_extern_segment(sp)) {
3668  if (is_mmapped_segment(sp)) {
3669  if (HAVE_MMAP &&
3670  sp->size >= extra &&
3671  !has_segment_link(m, sp)) { /* can't shrink if pinned */
3672  size_t newsize = sp->size - extra;
3673  /* Prefer mremap, fall back to munmap */
3674  if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3675  (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3676  released = extra;
3677  }
3678  }
3679  }
3680  else if (HAVE_MORECORE) {
3681  if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3682  extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3683  ACQUIRE_MORECORE_LOCK();
3684  {
3685  /* Make sure end of memory is where we last set it. */
3686  char* old_br = (char*)(CALL_MORECORE(0));
3687  if (old_br == sp->base + sp->size) {
3688  char* rel_br = (char*)(CALL_MORECORE(-extra));
3689  char* new_br = (char*)(CALL_MORECORE(0));
3690  if (rel_br != CMFAIL && new_br < old_br)
3691  released = old_br - new_br;
3692  }
3693  }
3694  RELEASE_MORECORE_LOCK();
3695  }
3696  }
3697 
3698  if (released != 0) {
3699  sp->size -= released;
3700  m->footprint -= released;
3701  init_top(m, m->top, m->topsize - released);
3702  check_top_chunk(m, m->top);
3703  }
3704  }
3705 
3706  /* Unmap any unused mmapped segments */
3707  if (HAVE_MMAP)
3708  released += release_unused_segments(m);
3709 
3710  /* On failure, disable autotrim to avoid repeated failed future calls */
3711  if (released == 0)
3712  m->trim_check = MAX_SIZE_T;
3713  }
3714 
3715  return (released != 0)? 1 : 0;
3716 }
3717 
3718 /* ---------------------------- malloc support --------------------------- */
3719 
3720 /* allocate a large request from the best fitting chunk in a treebin */
3721 static void* tmalloc_large(mstate m, size_t nb) {
3722  tchunkptr v = 0;
3723  size_t rsize = -nb; /* Unsigned negation */
3724  tchunkptr t;
3725  bindex_t idx;
3726  compute_tree_index(nb, idx);
3727 
3728  if ((t = *treebin_at(m, idx)) != 0) {
3729  /* Traverse tree for this bin looking for node with size == nb */
3730  size_t sizebits = nb << leftshift_for_tree_index(idx);
3731  tchunkptr rst = 0; /* The deepest untaken right subtree */
3732  for (;;) {
3733  tchunkptr rt;
3734  size_t trem = chunksize(t) - nb;
3735  if (trem < rsize) {
3736  v = t;
3737  if ((rsize = trem) == 0)
3738  break;
3739  }
3740  rt = t->child[1];
3741  t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3742  if (rt != 0 && rt != t)
3743  rst = rt;
3744  if (t == 0) {
3745  t = rst; /* set t to least subtree holding sizes > nb */
3746  break;
3747  }
3748  sizebits <<= 1;
3749  }
3750  }
3751 
3752  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3753  binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3754  if (leftbits != 0) {
3755  bindex_t i;
3756  binmap_t leastbit = least_bit(leftbits);
3757  compute_bit2idx(leastbit, i);
3758  t = *treebin_at(m, i);
3759  }
3760  }
3761 
3762  while (t != 0) { /* find smallest of tree or subtree */
3763  size_t trem = chunksize(t) - nb;
3764  if (trem < rsize) {
3765  rsize = trem;
3766  v = t;
3767  }
3768  t = leftmost_child(t);
3769  }
3770 
3771  /* If dv is a better fit, return 0 so malloc will use it */
3772  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3773  if (RTCHECK(ok_address(m, v))) { /* split */
3774  mchunkptr r = chunk_plus_offset(v, nb);
3775  assert(chunksize(v) == rsize + nb);
3776  if (RTCHECK(ok_next(v, r))) {
3777  unlink_large_chunk(m, v);
3778  if (rsize < MIN_CHUNK_SIZE)
3779  set_inuse_and_pinuse(m, v, (rsize + nb));
3780  else {
3781  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3782  set_size_and_pinuse_of_free_chunk(r, rsize);
3783  insert_chunk(m, r, rsize);
3784  }
3785  return chunk2mem(v);
3786  }
3787  }
3788  CORRUPTION_ERROR_ACTION(m);
3789  }
3790  return 0;
3791 }
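
/*
  The "rsize = -nb" initialization above relies on unsigned arithmetic.  A
  minimal sketch (not part of dlmalloc, assumes nb > 0): any remainder taken
  from a chunk of at least nb bytes is strictly below (size_t)-nb, while an
  undersized chunk wraps above it and so is never selected.
*/
static int remainder_beats_sentinel(size_t csize, size_t nb) {
  size_t sentinel = (size_t)0 - nb;   /* the initial value of rsize */
  size_t trem = csize - nb;           /* wraps around if csize < nb */
  return trem < sentinel;             /* nonzero exactly when csize >= nb */
}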
3792 
3793 /* allocate a small request from the best fitting chunk in a treebin */
3794 static void* tmalloc_small(mstate m, size_t nb) {
3795  tchunkptr t, v;
3796  size_t rsize;
3797  bindex_t i;
3798  binmap_t leastbit = least_bit(m->treemap);
3799  compute_bit2idx(leastbit, i);
3800 
3801  v = t = *treebin_at(m, i);
3802  rsize = chunksize(t) - nb;
3803 
3804  while ((t = leftmost_child(t)) != 0) {
3805  size_t trem = chunksize(t) - nb;
3806  if (trem < rsize) {
3807  rsize = trem;
3808  v = t;
3809  }
3810  }
3811 
3812  if (RTCHECK(ok_address(m, v))) {
3813  mchunkptr r = chunk_plus_offset(v, nb);
3814  assert(chunksize(v) == rsize + nb);
3815  if (RTCHECK(ok_next(v, r))) {
3816  unlink_large_chunk(m, v);
3817  if (rsize < MIN_CHUNK_SIZE)
3818  set_inuse_and_pinuse(m, v, (rsize + nb));
3819  else {
3820  set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3821  set_size_and_pinuse_of_free_chunk(r, rsize);
3822  replace_dv(m, r, rsize);
3823  }
3824  return chunk2mem(v);
3825  }
3826  }
3827 
3828  CORRUPTION_ERROR_ACTION(m);
3829  return 0;
3830 }
3831 
3832 /* --------------------------- realloc support --------------------------- */
3833 
3834 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3835  if (bytes >= MAX_REQUEST) {
3836  MALLOC_FAILURE_ACTION;
3837  return 0;
3838  }
3839  if (!PREACTION(m)) {
3840  mchunkptr oldp = mem2chunk(oldmem);
3841  size_t oldsize = chunksize(oldp);
3842  mchunkptr next = chunk_plus_offset(oldp, oldsize);
3843  mchunkptr newp = 0;
3844  void* extra = 0;
3845 
3846  /* Try to either shrink or extend into top. Else malloc-copy-free */
3847 
3848  if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3849  ok_next(oldp, next) && ok_pinuse(next))) {
3850  size_t nb = request2size(bytes);
3851  if (is_mmapped(oldp))
3852  newp = mmap_resize(m, oldp, nb);
3853  else if (oldsize >= nb) { /* already big enough */
3854  size_t rsize = oldsize - nb;
3855  newp = oldp;
3856  if (rsize >= MIN_CHUNK_SIZE) {
3857  mchunkptr remainder = chunk_plus_offset(newp, nb);
3858  set_inuse(m, newp, nb);
3859  set_inuse(m, remainder, rsize);
3860  extra = chunk2mem(remainder);
3861  }
3862  }
3863  else if (next == m->top && oldsize + m->topsize > nb) {
3864  /* Expand into top */
3865  size_t newsize = oldsize + m->topsize;
3866  size_t newtopsize = newsize - nb;
3867  mchunkptr newtop = chunk_plus_offset(oldp, nb);
3868  set_inuse(m, oldp, nb);
3869  newtop->head = newtopsize |PINUSE_BIT;
3870  m->top = newtop;
3871  m->topsize = newtopsize;
3872  newp = oldp;
3873  }
3874  }
3875  else {
3876  USAGE_ERROR_ACTION(m, oldmem);
3877  POSTACTION(m);
3878  return 0;
3879  }
3880 
3881  POSTACTION(m);
3882 
3883  if (newp != 0) {
3884  if (extra != 0) {
3885  internal_free(m, extra);
3886  }
3887  check_inuse_chunk(m, newp);
3888  return chunk2mem(newp);
3889  }
3890  else {
3891  void* newmem = internal_malloc(m, bytes);
3892  if (newmem != 0) {
3893  size_t oc = oldsize - overhead_for(oldp);
3894  memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3895  internal_free(m, oldmem);
3896  }
3897  return newmem;
3898  }
3899  }
3900  return 0;
3901 }
3902 
3903 /* --------------------------- memalign support -------------------------- */
3904 
3905 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3906  if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3907  return internal_malloc(m, bytes);
3908  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3909  alignment = MIN_CHUNK_SIZE;
3910  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3911  size_t a = MALLOC_ALIGNMENT << 1;
3912  while (a < alignment) a <<= 1;
3913  alignment = a;
3914  }
3915 
3916  if (bytes >= MAX_REQUEST - alignment) {
3917  if (m != 0) { /* Test isn't needed but avoids compiler warning */
3918  MALLOC_FAILURE_ACTION;
3919  }
3920  }
3921  else {
3922  size_t nb = request2size(bytes);
3923  size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3924  char* mem = (char*)internal_malloc(m, req);
3925  if (mem != 0) {
3926  void* leader = 0;
3927  void* trailer = 0;
3928  mchunkptr p = mem2chunk(mem);
3929 
3930  if (PREACTION(m)) return 0;
3931  if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3932  /*
3933  Find an aligned spot inside chunk. Since we need to give
3934  back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3935  the first calculation places us at a spot with less than
3936  MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3937  We've allocated enough total room so that this is always
3938  possible.
3939  */
3940  char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3941  alignment -
3942  SIZE_T_ONE)) &
3943  -alignment));
3944  char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3945  br : br+alignment;
3946  mchunkptr newp = (mchunkptr)pos;
3947  size_t leadsize = pos - (char*)(p);
3948  size_t newsize = chunksize(p) - leadsize;
3949 
3950  if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3951  newp->prev_foot = p->prev_foot + leadsize;
3952  newp->head = (newsize|CINUSE_BIT);
3953  }
3954  else { /* Otherwise, give back leader, use the rest */
3955  set_inuse(m, newp, newsize);
3956  set_inuse(m, p, leadsize);
3957  leader = chunk2mem(p);
3958  }
3959  p = newp;
3960  }
3961 
3962  /* Give back spare room at the end */
3963  if (!is_mmapped(p)) {
3964  size_t size = chunksize(p);
3965  if (size > nb + MIN_CHUNK_SIZE) {
3966  size_t remainder_size = size - nb;
3967  mchunkptr remainder = chunk_plus_offset(p, nb);
3968  set_inuse(m, p, nb);
3969  set_inuse(m, remainder, remainder_size);
3970  trailer = chunk2mem(remainder);
3971  }
3972  }
3973 
3974  assert (chunksize(p) >= nb);
3975  assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3976  check_inuse_chunk(m, p);
3977  POSTACTION(m);
3978  if (leader != 0) {
3979  internal_free(m, leader);
3980  }
3981  if (trailer != 0) {
3982  internal_free(m, trailer);
3983  }
3984  return chunk2mem(p);
3985  }
3986  }
3987  return 0;
3988 }
3989 
3990 /* ------------------------ comalloc/coalloc support --------------------- */
3991 
3992 static void** ialloc(mstate m,
3993  size_t n_elements,
3994  size_t* sizes,
3995  int opts,
3996  void* chunks[]) {
3997  /*
3998  This provides common support for independent_X routines, handling
3999  all of the combinations that can result.
4000 
4001  The opts arg has:
4002  bit 0 set if all elements are same size (using sizes[0])
4003  bit 1 set if elements should be zeroed
4004  */
4005 
4006  size_t element_size; /* chunksize of each element, if all same */
4007  size_t contents_size; /* total size of elements */
4008  size_t array_size; /* request size of pointer array */
4009  void* mem; /* malloced aggregate space */
4010  mchunkptr p; /* corresponding chunk */
4011  size_t remainder_size; /* remaining bytes while splitting */
4012  void** marray; /* either "chunks" or malloced ptr array */
4013  mchunkptr array_chunk; /* chunk for malloced ptr array */
4014  flag_t was_enabled; /* to disable mmap */
4015  size_t size;
4016  size_t i;
4017 
4018  /* compute array length, if needed */
4019  if (chunks != 0) {
4020  if (n_elements == 0)
4021  return chunks; /* nothing to do */
4022  marray = chunks;
4023  array_size = 0;
4024  }
4025  else {
4026  /* if empty req, must still return chunk representing empty array */
4027  if (n_elements == 0)
4028  return (void**)internal_malloc(m, 0);
4029  marray = 0;
4030  array_size = request2size(n_elements * (sizeof(void*)));
4031  }
4032 
4033  /* compute total element size */
4034  if (opts & 0x1) { /* all-same-size */
4035  element_size = request2size(*sizes);
4036  contents_size = n_elements * element_size;
4037  }
4038  else { /* add up all the sizes */
4039  element_size = 0;
4040  contents_size = 0;
4041  for (i = 0; i != n_elements; ++i)
4042  contents_size += request2size(sizes[i]);
4043  }
4044 
4045  size = contents_size + array_size;
4046 
4047  /*
4048  Allocate the aggregate chunk. First disable direct-mmapping so
4049  malloc won't use it, since we would not be able to later
4050  free/realloc space internal to a segregated mmap region.
4051  */
4052  was_enabled = use_mmap(m);
4053  disable_mmap(m);
4054  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4055  if (was_enabled)
4056  enable_mmap(m);
4057  if (mem == 0)
4058  return 0;
4059 
4060  if (PREACTION(m)) return 0;
4061  p = mem2chunk(mem);
4062  remainder_size = chunksize(p);
4063 
4064  assert(!is_mmapped(p));
4065 
4066  if (opts & 0x2) { /* optionally clear the elements */
4067  memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4068  }
4069 
4070  /* If not provided, allocate the pointer array as final part of chunk */
4071  if (marray == 0) {
4072  size_t array_chunk_size;
4073  array_chunk = chunk_plus_offset(p, contents_size);
4074  array_chunk_size = remainder_size - contents_size;
4075  marray = (void**) (chunk2mem(array_chunk));
4076  set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4077  remainder_size = contents_size;
4078  }
4079 
4080  /* split out elements */
4081  for (i = 0; ; ++i) {
4082  marray[i] = chunk2mem(p);
4083  if (i != n_elements-1) {
4084  if (element_size != 0)
4085  size = element_size;
4086  else
4087  size = request2size(sizes[i]);
4088  remainder_size -= size;
4089  set_size_and_pinuse_of_inuse_chunk(m, p, size);
4090  p = chunk_plus_offset(p, size);
4091  }
4092  else { /* the final element absorbs any overallocation slop */
4093  set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4094  break;
4095  }
4096  }
4097 
4098 #if DEBUG
4099  if (marray != chunks) {
4100  /* final element must have exactly exhausted chunk */
4101  if (element_size != 0) {
4102  assert(remainder_size == element_size);
4103  }
4104  else {
4105  assert(remainder_size == request2size(sizes[i]));
4106  }
4107  check_inuse_chunk(m, mem2chunk(marray));
4108  }
4109  for (i = 0; i != n_elements; ++i)
4110  check_inuse_chunk(m, mem2chunk(marray[i]));
4111 
4112 #endif /* DEBUG */
4113 
4114  POSTACTION(m);
4115  return marray;
4116 }
4117 
4118 
4119 /* -------------------------- public routines ---------------------------- */
4120 
4121 #if !ONLY_MSPACES
4122 
4123 void* dlmalloc(size_t bytes) {
4124  /*
4125  Basic algorithm:
4126  If a small request (< 256 bytes minus per-chunk overhead):
4127  1. If one exists, use a remainderless chunk in associated smallbin.
4128  (Remainderless means that there are too few excess bytes to
4129  represent as a chunk.)
4130  2. If it is big enough, use the dv chunk, which is normally the
4131  chunk adjacent to the one used for the most recent small request.
4132  3. If one exists, split the smallest available chunk in a bin,
4133  saving remainder in dv.
4134  4. If it is big enough, use the top chunk.
4135  5. If available, get memory from system and use it
4136  Otherwise, for a large request:
4137  1. Find the smallest available binned chunk that fits, and use it
4138  if it is better fitting than dv chunk, splitting if necessary.
4139  2. If better fitting than any binned chunk, use the dv chunk.
4140  3. If it is big enough, use the top chunk.
4141  4. If request size >= mmap threshold, try to directly mmap this chunk.
4142  5. If available, get memory from system and use it
4143 
4144  The ugly goto's here ensure that postaction occurs along all paths.
4145  */
4146 
4147  if (!PREACTION(gm)) {
4148  void* mem;
4149  size_t nb;
4150  if (bytes <= MAX_SMALL_REQUEST) {
4151  bindex_t idx;
4152  binmap_t smallbits;
4153  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4154  idx = small_index(nb);
4155  smallbits = gm->smallmap >> idx;
4156 
4157  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4158  mchunkptr b, p;
4159  idx += ~smallbits & 1; /* Uses next bin if idx empty */
4160  b = smallbin_at(gm, idx);
4161  p = b->fd;
4162  assert(chunksize(p) == small_index2size(idx));
4163  unlink_first_small_chunk(gm, b, p, idx);
4164  set_inuse_and_pinuse(gm, p, small_index2size(idx));
4165  mem = chunk2mem(p);
4166  check_malloced_chunk(gm, mem, nb);
4167  goto postaction;
4168  }
4169 
4170  else if (nb > gm->dvsize) {
4171  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4172  mchunkptr b, p, r;
4173  size_t rsize;
4174  bindex_t i;
4175  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4176  binmap_t leastbit = least_bit(leftbits);
4177  compute_bit2idx(leastbit, i);
4178  b = smallbin_at(gm, i);
4179  p = b->fd;
4180  assert(chunksize(p) == small_index2size(i));
4181  unlink_first_small_chunk(gm, b, p, i);
4182  rsize = small_index2size(i) - nb;
4183  /* Fit here cannot be remainderless if 4byte sizes */
4184  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4185  set_inuse_and_pinuse(gm, p, small_index2size(i));
4186  else {
4187  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4188  r = chunk_plus_offset(p, nb);
4189  set_size_and_pinuse_of_free_chunk(r, rsize);
4190  replace_dv(gm, r, rsize);
4191  }
4192  mem = chunk2mem(p);
4193  check_malloced_chunk(gm, mem, nb);
4194  goto postaction;
4195  }
4196 
4197  else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4198  check_malloced_chunk(gm, mem, nb);
4199  goto postaction;
4200  }
4201  }
4202  }
4203  else if (bytes >= MAX_REQUEST)
4204  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4205  else {
4206  nb = pad_request(bytes);
4207  if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4208  check_malloced_chunk(gm, mem, nb);
4209  goto postaction;
4210  }
4211  }
4212 
4213  if (nb <= gm->dvsize) {
4214  size_t rsize = gm->dvsize - nb;
4215  mchunkptr p = gm->dv;
4216  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4217  mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4218  gm->dvsize = rsize;
4219  set_size_and_pinuse_of_free_chunk(r, rsize);
4220  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4221  }
4222  else { /* exhaust dv */
4223  size_t dvs = gm->dvsize;
4224  gm->dvsize = 0;
4225  gm->dv = 0;
4226  set_inuse_and_pinuse(gm, p, dvs);
4227  }
4228  mem = chunk2mem(p);
4229  check_malloced_chunk(gm, mem, nb);
4230  goto postaction;
4231  }
4232 
4233  else if (nb < gm->topsize) { /* Split top */
4234  size_t rsize = gm->topsize -= nb;
4235  mchunkptr p = gm->top;
4236  mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4237  r->head = rsize | PINUSE_BIT;
4238  set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4239  mem = chunk2mem(p);
4240  check_top_chunk(gm, gm->top);
4241  check_malloced_chunk(gm, mem, nb);
4242  goto postaction;
4243  }
4244 
4245  mem = sys_alloc(gm, nb);
4246 
4247  postaction:
4248  POSTACTION(gm);
4249  return mem;
4250  }
4251 
4252  return 0;
4253 }
4254 
4255 void dlfree(void* mem) {
4256  /*
4257  Consolidate freed chunks with preceding or succeeding bordering
4258  free chunks, if they exist, and then place in a bin. Intermixed
4259  with special cases for top, dv, mmapped chunks, and usage errors.
4260  */
4261 
4262  if (mem != 0) {
4263  mchunkptr p = mem2chunk(mem);
4264 #if FOOTERS
4265  mstate fm = get_mstate_for(p);
4266  if (!ok_magic(fm)) {
4267  USAGE_ERROR_ACTION(fm, p);
4268  return;
4269  }
4270 #else /* FOOTERS */
4271 #define fm gm
4272 #endif /* FOOTERS */
4273  if (!PREACTION(fm)) {
4274  check_inuse_chunk(fm, p);
4275  if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4276  size_t psize = chunksize(p);
4277  mchunkptr next = chunk_plus_offset(p, psize);
4278  if (!pinuse(p)) {
4279  size_t prevsize = p->prev_foot;
4280  if ((prevsize & IS_MMAPPED_BIT) != 0) {
4281  prevsize &= ~IS_MMAPPED_BIT;
4282  psize += prevsize + MMAP_FOOT_PAD;
4283  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4284  fm->footprint -= psize;
4285  goto postaction;
4286  }
4287  else {
4288  mchunkptr prev = chunk_minus_offset(p, prevsize);
4289  psize += prevsize;
4290  p = prev;
4291  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4292  if (p != fm->dv) {
4293  unlink_chunk(fm, p, prevsize);
4294  }
4295  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4296  fm->dvsize = psize;
4297  set_free_with_pinuse(p, psize, next);
4298  goto postaction;
4299  }
4300  }
4301  else
4302  goto erroraction;
4303  }
4304  }
4305 
4306  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4307  if (!cinuse(next)) { /* consolidate forward */
4308  if (next == fm->top) {
4309  size_t tsize = fm->topsize += psize;
4310  fm->top = p;
4311  p->head = tsize | PINUSE_BIT;
4312  if (p == fm->dv) {
4313  fm->dv = 0;
4314  fm->dvsize = 0;
4315  }
4316  if (should_trim(fm, tsize))
4317  sys_trim(fm, 0);
4318  goto postaction;
4319  }
4320  else if (next == fm->dv) {
4321  size_t dsize = fm->dvsize += psize;
4322  fm->dv = p;
4323  p->head = dsize | PINUSE_BIT;
4324  goto postaction;
4325  }
4326  else {
4327  size_t nsize = chunksize(next);
4328  psize += nsize;
4329  unlink_chunk(fm, next, nsize);
4330  set_size_and_pinuse_of_free_chunk(p, psize);
4331  if (p == fm->dv) {
4332  fm->dvsize = psize;
4333  goto postaction;
4334  }
4335  }
4336  }
4337  else
4338  set_free_with_pinuse(p, psize, next);
4339  insert_chunk(fm, p, psize);
4340  check_free_chunk(fm, p);
4341  goto postaction;
4342  }
4343  }
4344  erroraction:
4345  USAGE_ERROR_ACTION(fm, p);
4346  postaction:
4347  POSTACTION(fm);
4348  }
4349  }
4350 #if !FOOTERS
4351 #undef fm
4352 #endif /* FOOTERS */
4353 }
4354 
4355 void* dlcalloc(size_t n_elements, size_t elem_size) {
4356  void* mem;
4357  size_t req = 0;
4358  if (n_elements != 0) {
4359  req = n_elements * elem_size;
4360  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4361  (req / n_elements != elem_size))
4362  req = MAX_SIZE_T; /* force downstream failure on overflow */
4363  }
4364  mem = dlmalloc(req);
4365  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4366  memset(mem, 0, req);
4367  return mem;
4368 }
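
/*
  Sketch (not part of dlmalloc) of the overflow guard in dlcalloc above: when
  both factors fit in 16 bits their product cannot overflow a 32-bit or wider
  size_t, so the division is skipped; otherwise wraparound is detected by
  dividing the product back.  Assumes n != 0, as in dlcalloc.
*/
static int example_mul_overflows(size_t n, size_t s) {
  size_t prod = n * s;
  return ((n | s) & ~(size_t)0xffff) != 0 && prod / n != s;
}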
4369 
4370 void* dlrealloc(void* oldmem, size_t bytes) {
4371  if (oldmem == 0)
4372  return dlmalloc(bytes);
4373 #ifdef REALLOC_ZERO_BYTES_FREES
4374  if (bytes == 0) {
4375  dlfree(oldmem);
4376  return 0;
4377  }
4378 #endif /* REALLOC_ZERO_BYTES_FREES */
4379  else {
4380 #if ! FOOTERS
4381  mstate m = gm;
4382 #else /* FOOTERS */
4383  mstate m = get_mstate_for(mem2chunk(oldmem));
4384  if (!ok_magic(m)) {
4385  USAGE_ERROR_ACTION(m, oldmem);
4386  return 0;
4387  }
4388 #endif /* FOOTERS */
4389  return internal_realloc(m, oldmem, bytes);
4390  }
4391 }
4392 
4393 void* dlmemalign(size_t alignment, size_t bytes) {
4394  return internal_memalign(gm, alignment, bytes);
4395 }
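
/*
  Example use of dlmemalign (illustrative only): alignments at or below
  MALLOC_ALIGNMENT fall back to plain malloc, and undersized or
  non-power-of-two alignments are adjusted by internal_memalign.
*/
static void example_memalign(void) {
  void* p = dlmemalign(64, 200);          /* 200 bytes, 64-byte aligned */
  if (p != 0) {
    assert(((size_t)p & 63) == 0);        /* returned address honors the alignment */
    dlfree(p);
  }
}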
4396 
4397 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4398  void* chunks[]) {
4399  size_t sz = elem_size; /* serves as 1-element array */
4400  return ialloc(gm, n_elements, &sz, 3, chunks);
4401 }
4402 
4403 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4404  void* chunks[]) {
4405  return ialloc(gm, n_elements, sizes, 0, chunks);
4406 }
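
/*
  Example use of the independent allocation routines above (illustrative
  only; error handling kept minimal).  dlindependent_comalloc carves one
  aggregate region into chunks of the requested sizes; each element may later
  be passed to dlfree individually.
*/
static void example_independent_comalloc(void) {
  size_t sizes[3];
  void*  ptrs[3];
  sizes[0] = 16; sizes[1] = 64; sizes[2] = 256;
  if (dlindependent_comalloc(3, sizes, ptrs) != 0) {
    dlfree(ptrs[1]);                      /* any element can be freed on its own */
    dlfree(ptrs[0]);
    dlfree(ptrs[2]);
  }
}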
4407 
4408 void* dlvalloc(size_t bytes) {
4409  size_t pagesz;
4410  init_mparams();
4411  pagesz = mparams.page_size;
4412  return dlmemalign(pagesz, bytes);
4413 }
4414 
4415 void* dlpvalloc(size_t bytes) {
4416  size_t pagesz;
4417  init_mparams();
4418  pagesz = mparams.page_size;
4419  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4420 }
4421 
4422 int dlmalloc_trim(size_t pad) {
4423  int result = 0;
4424  if (!PREACTION(gm)) {
4425  result = sys_trim(gm, pad);
4426  POSTACTION(gm);
4427  }
4428  return result;
4429 }
4430 
4431 size_t dlmalloc_footprint(void) {
4432  return gm->footprint;
4433 }
4434 
4435 size_t dlmalloc_max_footprint(void) {
4436  return gm->max_footprint;
4437 }
4438 
4439 #if !NO_MALLINFO
4440 struct mallinfo dlmallinfo(void) {
4441  return internal_mallinfo(gm);
4442 }
4443 #endif /* NO_MALLINFO */
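
#if !NO_MALLINFO
/*
  Example reading of the fields filled in by internal_mallinfo via dlmallinfo
  (illustrative only): arena is the total space in traversed segments,
  fordblks the total free space, uordblks the footprint minus free space, and
  keepcost the size of the top chunk.
*/
static void example_print_mallinfo(void) {
  struct mallinfo mi = dlmallinfo();
  fprintf(stderr, "arena=%lu free=%lu inuse=%lu top=%lu\n",
          (unsigned long)mi.arena, (unsigned long)mi.fordblks,
          (unsigned long)mi.uordblks, (unsigned long)mi.keepcost);
}
#endif /* !NO_MALLINFO */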
4444 
4445 void dlmalloc_stats() {
4446  internal_malloc_stats(gm);
4447 }
4448 
4449 size_t dlmalloc_usable_size(void* mem) {
4450  if (mem != 0) {
4451  mchunkptr p = mem2chunk(mem);
4452  if (cinuse(p))
4453  return chunksize(p) - overhead_for(p);
4454  }
4455  return 0;
4456 }
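
/*
  Illustration (not part of dlmalloc) of the usable-size relationship: the
  size reported for a live allocation is at least the size requested, since
  requests are padded up to a whole chunk.
*/
static void example_usable_size(void) {
  void* p = dlmalloc(100);
  if (p != 0) {
    assert(dlmalloc_usable_size(p) >= 100);
    dlfree(p);
  }
}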
4457 
4458 int dlmallopt(int param_number, int value) {
4459  return change_mparam(param_number, value);
4460 }
4461 
4462 #endif /* !ONLY_MSPACES */
4463 
4464 /* ----------------------------- user mspaces ---------------------------- */
4465 
4466 #if MSPACES
4467 
4468 static mstate init_user_mstate(char* tbase, size_t tsize) {
4469  size_t msize = pad_request(sizeof(struct malloc_state));
4470  mchunkptr mn;
4471  mchunkptr msp = align_as_chunk(tbase);
4472  mstate m = (mstate)(chunk2mem(msp));
4473  memset(m, 0, msize);
4474  INITIAL_LOCK(&m->mutex);
4475  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4476  m->seg.base = m->least_addr = tbase;
4477  m->seg.size = m->footprint = m->max_footprint = tsize;
4478  m->magic = mparams.magic;
4479  m->mflags = mparams.default_mflags;
4480  disable_contiguous(m);
4481  init_bins(m);
4482  mn = next_chunk(mem2chunk(m));
4483  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4484  check_top_chunk(m, m->top);
4485  return m;
4486 }
4487 
4488 mspace create_mspace(size_t capacity, int locked) {
4489  mstate m = 0;
4490  size_t msize = pad_request(sizeof(struct malloc_state));
4491  init_mparams(); /* Ensure pagesize etc initialized */
4492 
4493  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4494  size_t rs = ((capacity == 0)? mparams.granularity :
4495  (capacity + TOP_FOOT_SIZE + msize));
4496  size_t tsize = granularity_align(rs);
4497  char* tbase = (char*)(CALL_MMAP(tsize));
4498  if (tbase != CMFAIL) {
4499  m = init_user_mstate(tbase, tsize);
4500  (void)set_segment_flags(&m->seg, IS_MMAPPED_BIT);
4501  set_lock(m, locked);
4502  }
4503  }
4504  return (mspace)m;
4505 }
4506 
4507 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4508  mstate m = 0;
4509  size_t msize = pad_request(sizeof(struct malloc_state));
4510  init_mparams(); /* Ensure pagesize etc initialized */
4511 
4512  if (capacity > msize + TOP_FOOT_SIZE &&
4513  capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4514  m = init_user_mstate((char*)base, capacity);
4515  (void)set_segment_flags(&m->seg, EXTERN_BIT);
4516  set_lock(m, locked);
4517  }
4518  return (mspace)m;
4519 }
4520 
4521 size_t destroy_mspace(mspace msp) {
4522  size_t freed = 0;
4523  mstate ms = (mstate)msp;
4524  if (ok_magic(ms)) {
4525  msegmentptr sp = &ms->seg;
4526  while (sp != 0) {
4527  char* base = sp->base;
4528  size_t size = sp->size;
4529  flag_t flag = get_segment_flags(sp);
4530  sp = sp->next;
4531  if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4532  CALL_MUNMAP(base, size) == 0)
4533  freed += size;
4534  }
4535  }
4536  else {
4537  USAGE_ERROR_ACTION(ms,ms);
4538  }
4539  return freed;
4540 }
4541 
4542 /*
4543  mspace versions of routines are near-clones of the global
4544  versions. This is not so nice but better than the alternatives.
4545 */
4546 
4547 
4548 void* mspace_malloc(mspace msp, size_t bytes) {
4549  mstate ms = (mstate)msp;
4550  if (!ok_magic(ms)) {
4551  USAGE_ERROR_ACTION(ms,ms);
4552  return 0;
4553  }
4554  if (!PREACTION(ms)) {
4555  void* mem;
4556  size_t nb;
4557  if (bytes <= MAX_SMALL_REQUEST) {
4558  bindex_t idx;
4559  binmap_t smallbits;
4560  nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4561  idx = small_index(nb);
4562  smallbits = ms->smallmap >> idx;
4563 
4564  if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4565  mchunkptr b, p;
4566  idx += ~smallbits & 1; /* Uses next bin if idx empty */
4567  b = smallbin_at(ms, idx);
4568  p = b->fd;
4569  assert(chunksize(p) == small_index2size(idx));
4570  unlink_first_small_chunk(ms, b, p, idx);
4571  set_inuse_and_pinuse(ms, p, small_index2size(idx));
4572  mem = chunk2mem(p);
4573  check_malloced_chunk(ms, mem, nb);
4574  goto postaction;
4575  }
4576 
4577  else if (nb > ms->dvsize) {
4578  if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4579  mchunkptr b, p, r;
4580  size_t rsize;
4581  bindex_t i;
4582  binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4583  binmap_t leastbit = least_bit(leftbits);
4584  compute_bit2idx(leastbit, i);
4585  b = smallbin_at(ms, i);
4586  p = b->fd;
4587  assert(chunksize(p) == small_index2size(i));
4588  unlink_first_small_chunk(ms, b, p, i);
4589  rsize = small_index2size(i) - nb;
4590  /* Fit here cannot be remainderless if 4byte sizes */
4591  if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4592  set_inuse_and_pinuse(ms, p, small_index2size(i));
4593  else {
4594  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4595  r = chunk_plus_offset(p, nb);
4596  set_size_and_pinuse_of_free_chunk(r, rsize);
4597  replace_dv(ms, r, rsize);
4598  }
4599  mem = chunk2mem(p);
4600  check_malloced_chunk(ms, mem, nb);
4601  goto postaction;
4602  }
4603 
4604  else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4605  check_malloced_chunk(ms, mem, nb);
4606  goto postaction;
4607  }
4608  }
4609  }
4610  else if (bytes >= MAX_REQUEST)
4611  nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4612  else {
4613  nb = pad_request(bytes);
4614  if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4615  check_malloced_chunk(ms, mem, nb);
4616  goto postaction;
4617  }
4618  }
4619 
4620  if (nb <= ms->dvsize) {
4621  size_t rsize = ms->dvsize - nb;
4622  mchunkptr p = ms->dv;
4623  if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4624  mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4625  ms->dvsize = rsize;
4626  set_size_and_pinuse_of_free_chunk(r, rsize);
4627  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4628  }
4629  else { /* exhaust dv */
4630  size_t dvs = ms->dvsize;
4631  ms->dvsize = 0;
4632  ms->dv = 0;
4633  set_inuse_and_pinuse(ms, p, dvs);
4634  }
4635  mem = chunk2mem(p);
4636  check_malloced_chunk(ms, mem, nb);
4637  goto postaction;
4638  }
4639 
4640  else if (nb < ms->topsize) { /* Split top */
4641  size_t rsize = ms->topsize -= nb;
4642  mchunkptr p = ms->top;
4643  mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4644  r->head = rsize | PINUSE_BIT;
4645  set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4646  mem = chunk2mem(p);
4647  check_top_chunk(ms, ms->top);
4648  check_malloced_chunk(ms, mem, nb);
4649  goto postaction;
4650  }
4651 
4652  mem = sys_alloc(ms, nb);
4653 
4654  postaction:
4655  POSTACTION(ms);
4656  return mem;
4657  }
4658 
4659  return 0;
4660 }
4661 
4662 void mspace_free(mspace msp, void* mem) {
4663  if (mem != 0) {
4664  mchunkptr p = mem2chunk(mem);
4665 #if FOOTERS
4666  mstate fm = get_mstate_for(p);
4667 #else /* FOOTERS */
4668  mstate fm = (mstate)msp;
4669 #endif /* FOOTERS */
4670  if (!ok_magic(fm)) {
4671  USAGE_ERROR_ACTION(fm, p);
4672  return;
4673  }
4674  if (!PREACTION(fm)) {
4675  check_inuse_chunk(fm, p);
4676  if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4677  size_t psize = chunksize(p);
4678  mchunkptr next = chunk_plus_offset(p, psize);
4679  if (!pinuse(p)) {
4680  size_t prevsize = p->prev_foot;
4681  if ((prevsize & IS_MMAPPED_BIT) != 0) {
4682  prevsize &= ~IS_MMAPPED_BIT;
4683  psize += prevsize + MMAP_FOOT_PAD;
4684  if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4685  fm->footprint -= psize;
4686  goto postaction;
4687  }
4688  else {
4689  mchunkptr prev = chunk_minus_offset(p, prevsize);
4690  psize += prevsize;
4691  p = prev;
4692  if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4693  if (p != fm->dv) {
4694  unlink_chunk(fm, p, prevsize);
4695  }
4696  else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4697  fm->dvsize = psize;
4698  set_free_with_pinuse(p, psize, next);
4699  goto postaction;
4700  }
4701  }
4702  else
4703  goto erroraction;
4704  }
4705  }
4706 
4707  if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4708  if (!cinuse(next)) { /* consolidate forward */
4709  if (next == fm->top) {
4710  size_t tsize = fm->topsize += psize;
4711  fm->top = p;
4712  p->head = tsize | PINUSE_BIT;
4713  if (p == fm->dv) {
4714  fm->dv = 0;
4715  fm->dvsize = 0;
4716  }
4717  if (should_trim(fm, tsize))
4718  sys_trim(fm, 0);
4719  goto postaction;
4720  }
4721  else if (next == fm->dv) {
4722  size_t dsize = fm->dvsize += psize;
4723  fm->dv = p;
4724  set_size_and_pinuse_of_free_chunk(p, dsize);
4725  goto postaction;
4726  }
4727  else {
4728  size_t nsize = chunksize(next);
4729  psize += nsize;
4730  unlink_chunk(fm, next, nsize);
4731  set_size_and_pinuse_of_free_chunk(p, psize);
4732  if (p == fm->dv) {
4733  fm->dvsize = psize;
4734  goto postaction;
4735  }
4736  }
4737  }
4738  else
4739  set_free_with_pinuse(p, psize, next);
4740  insert_chunk(fm, p, psize);
4741  check_free_chunk(fm, p);
4742  goto postaction;
4743  }
4744  }
4745  erroraction:
4746  USAGE_ERROR_ACTION(fm, p);
4747  postaction:
4748  POSTACTION(fm);
4749  }
4750  }
4751 }
4752 
4753 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4754  void* mem;
4755  size_t req = 0;
4756  mstate ms = (mstate)msp;
4757  if (!ok_magic(ms)) {
4758  USAGE_ERROR_ACTION(ms,ms);
4759  return 0;
4760  }
4761  if (n_elements != 0) {
4762  req = n_elements * elem_size;
4763  if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4764  (req / n_elements != elem_size))
4765  req = MAX_SIZE_T; /* force downstream failure on overflow */
4766  }
4767  mem = internal_malloc(ms, req);
4768  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4769  memset(mem, 0, req);
4770  return mem;
4771 }
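
The multiplication guard in mspace_calloc above is worth a note: when both n_elements and elem_size fit in 16 bits, their product cannot overflow a 32-bit (or wider) size_t, so the division check is skipped on that fast path. Below is a minimal, self-contained sketch of the same idiom; it is illustrative only (checked_mul is a hypothetical helper, not part of this file).

  #include <stddef.h>

  /* Sketch of the overflow guard used by mspace_calloc: skip the division
     when both operands fit in 16 bits (the product cannot wrap), otherwise
     verify the product by dividing back. */
  static size_t checked_mul(size_t n, size_t m) {
    size_t req = n * m;
    if (n != 0 &&
        ((n | m) & ~(size_t)0xffff) &&  /* some operand exceeds 16 bits */
        req / n != m)                   /* product wrapped around */
      return (size_t)-1;                /* MAX_SIZE_T: force downstream failure */
    return req;
  }
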
4772 
4773 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4774  if (oldmem == 0)
4775  return mspace_malloc(msp, bytes);
4776 #ifdef REALLOC_ZERO_BYTES_FREES
4777  if (bytes == 0) {
4778  mspace_free(msp, oldmem);
4779  return 0;
4780  }
4781 #endif /* REALLOC_ZERO_BYTES_FREES */
4782  else {
4783 #if FOOTERS
4784  mchunkptr p = mem2chunk(oldmem);
4785  mstate ms = get_mstate_for(p);
4786 #else /* FOOTERS */
4787  mstate ms = (mstate)msp;
4788 #endif /* FOOTERS */
4789  if (!ok_magic(ms)) {
4790  USAGE_ERROR_ACTION(ms,ms);
4791  return 0;
4792  }
4793  return internal_realloc(ms, oldmem, bytes);
4794  }
4795 }
4796 
4797 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4798  mstate ms = (mstate)msp;
4799  if (!ok_magic(ms)) {
4800  USAGE_ERROR_ACTION(ms,ms);
4801  return 0;
4802  }
4803  return internal_memalign(ms, alignment, bytes);
4804 }
4805 
4806 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4807  size_t elem_size, void* chunks[]) {
4808  size_t sz = elem_size; /* serves as 1-element array */
4809  mstate ms = (mstate)msp;
4810  if (!ok_magic(ms)) {
4811  USAGE_ERROR_ACTION(ms,ms);
4812  return 0;
4813  }
4814  return ialloc(ms, n_elements, &sz, 3, chunks);
4815 }
4816 
4817 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4818  size_t sizes[], void* chunks[]) {
4819  mstate ms = (mstate)msp;
4820  if (!ok_magic(ms)) {
4821  USAGE_ERROR_ACTION(ms,ms);
4822  return 0;
4823  }
4824  return ialloc(ms, n_elements, sizes, 0, chunks);
4825 }
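
As a usage illustration for the two wrappers above, the sketch below co-allocates three related blocks from one mspace in a single call. All names other than the mspace API itself are hypothetical assumptions; supplying the caller-owned chunks[] array avoids having to free a library-allocated pointer array, and each element can be freed independently.

  /* Hedged sketch: co-allocate a header, an index, and a payload buffer. */
  static void comalloc_demo(mspace ms) {
    size_t sizes[3] = { 64, 1024, 4096 };   /* requested sizes, in order */
    void*  chunks[3];                       /* filled in on success */
    if (mspace_independent_comalloc(ms, 3, sizes, chunks) != 0) {
      /* ... use chunks[0..2] ... */
      mspace_free(ms, chunks[0]);           /* each block is freed separately */
      mspace_free(ms, chunks[1]);
      mspace_free(ms, chunks[2]);
    }
  }
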
4826 
4827 int mspace_trim(mspace msp, size_t pad) {
4828  int result = 0;
4829  mstate ms = (mstate)msp;
4830  if (ok_magic(ms)) {
4831  if (!PREACTION(ms)) {
4832  result = sys_trim(ms, pad);
4833  POSTACTION(ms);
4834  }
4835  }
4836  else {
4837  USAGE_ERROR_ACTION(ms,ms);
4838  }
4839  return result;
4840 }
4841 
4842 void mspace_malloc_stats(mspace msp) {
4843  mstate ms = (mstate)msp;
4844  if (ok_magic(ms)) {
4845  internal_malloc_stats(ms);
4846  }
4847  else {
4848  USAGE_ERROR_ACTION(ms,ms);
4849  }
4850 }
4851 
4852 size_t mspace_footprint(mspace msp) {
4853  size_t result = 0;
4854  mstate ms = (mstate)msp;
4855  if (ok_magic(ms)) {
4856  result = ms->footprint;
4857  }
4858  else USAGE_ERROR_ACTION(ms,ms);
4859  return result;
4860 }
4861 
4862 
4863 size_t mspace_max_footprint(mspace msp) {
4864  size_t result = 0;
4865  mstate ms = (mstate)msp;
4866  if (ok_magic(ms)) {
4867  result = ms->max_footprint;
4868  }
4869  else USAGE_ERROR_ACTION(ms,ms);
4870  return result;
4871 }
4872 
4873 
4874 #if !NO_MALLINFO
4875 struct mallinfo mspace_mallinfo(mspace msp) {
4876  mstate ms = (mstate)msp;
4877  if (!ok_magic(ms)) {
4878  USAGE_ERROR_ACTION(ms,ms);
4879  }
4880  return internal_mallinfo(ms);
4881 }
4882 #endif /* NO_MALLINFO */
4883 
4884 int mspace_mallopt(int param_number, int value) {
4885  return change_mparam(param_number, value);
4886 }
4887 
4888 #endif /* MSPACES */
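
For orientation, here is a minimal, hedged usage sketch of the mspace API defined above (create_mspace and destroy_mspace appear earlier in the MSPACES section of this file). It assumes the file was built with MSPACES enabled and that the mspace prototypes are visible to the caller. Note that unless FOOTERS is enabled, the mspace handed to mspace_free/mspace_realloc must be the one the block was allocated from.

  #include <string.h>

  static void mspace_demo(void) {
    /* capacity 0 = use the default granularity; locked 0 = no locking. */
    mspace ms = create_mspace(0, 0);
    if (ms == 0) return;

    char*   buf = (char*)   mspace_malloc(ms, 128);
    double* v   = (double*) mspace_memalign(ms, 64, 16 * sizeof(double));
    if (buf != 0) {
      memset(buf, 0, 128);
      char* nbuf = (char*) mspace_realloc(ms, buf, 256);  /* grow or move */
      if (nbuf != 0)
        buf = nbuf;
    }
    mspace_free(ms, v);       /* freeing a null pointer is a no-op */
    mspace_free(ms, buf);

    destroy_mspace(ms);       /* releases everything still owned by the space */
  }
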
4889 
4890 /* -------------------- Alternative MORECORE functions ------------------- */
4891 
4892 /*
4893  Guidelines for creating a custom version of MORECORE:
4894 
4895  * For best performance, MORECORE should allocate in multiples of pagesize.
4896  * MORECORE may allocate more memory than requested. (Or even less,
4897  but this will usually result in a malloc failure.)
4898  * MORECORE must not allocate memory when given argument zero, but
4899  instead return one past the end address of memory from previous
4900  nonzero call.
4901  * For best performance, consecutive calls to MORECORE with positive
4902  arguments should return increasing addresses, indicating that
4903  space has been contiguously extended.
4904  * Even though consecutive calls to MORECORE need not return contiguous
4905  addresses, it must be OK for malloc'ed chunks to span multiple
4906  regions in those cases where they do happen to be contiguous.
4907  * MORECORE need not handle negative arguments -- it may instead
4908  just return MFAIL when given negative arguments.
4909  Negative arguments are always multiples of pagesize. MORECORE
4910  must not misinterpret negative args as large positive unsigned
4911  args. You can suppress all such calls from even occurring by defining
4912  MORECORE_CANNOT_TRIM.
4913 
4914  As an example alternative MORECORE, here is a custom allocator
4915  kindly contributed for pre-OSX macOS. It uses virtually but not
4916  necessarily physically contiguous non-paged memory (locked in,
4917  present and won't get swapped out). You can use it by uncommenting
4918  this section, adding some #includes, and setting up the appropriate
4919  defines above:
4920 
4921  #define MORECORE osMoreCore
4922 
4923  There is also a shutdown routine that should somehow be called for
4924  cleanup upon program exit.
4925 
4926  #define MAX_POOL_ENTRIES 100
4927  #define MINIMUM_MORECORE_SIZE (64 * 1024U)
4928  static int next_os_pool;
4929  void *our_os_pools[MAX_POOL_ENTRIES];
4930 
4931  void *osMoreCore(int size)
4932  {
4933  void *ptr = 0;
4934  static void *sbrk_top = 0;
4935 
4936  if (size > 0)
4937  {
4938  if (size < MINIMUM_MORECORE_SIZE)
4939  size = MINIMUM_MORECORE_SIZE;
4940  if (CurrentExecutionLevel() == kTaskLevel)
4941  ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4942  if (ptr == 0)
4943  {
4944  return (void *) MFAIL;
4945  }
4946  // save ptrs so they can be freed during cleanup
4947  our_os_pools[next_os_pool] = ptr;
4948  next_os_pool++;
4949  ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4950  sbrk_top = (char *) ptr + size;
4951  return ptr;
4952  }
4953  else if (size < 0)
4954  {
4955  // we don't currently support shrink behavior
4956  return (void *) MFAIL;
4957  }
4958  else
4959  {
4960  return sbrk_top;
4961  }
4962  }
4963 
4964  // cleanup any allocated memory pools
4965  // called as last thing before shutting down driver
4966 
4967  void osCleanupMem(void)
4968  {
4969  void **ptr;
4970 
4971  for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4972  if (*ptr)
4973  {
4974  PoolDeallocate(*ptr);
4975  *ptr = 0;
4976  }
4977  }
4978 
4979 */
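
To complement the commented-out osMoreCore example above, here is a hedged, minimal sketch of a MORECORE backed by a fixed static arena that follows the contract in the guidelines (a zero argument returns the current end of memory, negative arguments are refused). The function name, arena size, and the suggested build flags (e.g. -DMORECORE=arena_morecore -DMORECORE_CANNOT_TRIM -DHAVE_MMAP=0) are illustrative assumptions, not part of this file.

  #include <stddef.h>

  #define ARENA_SIZE (8u * 1024u * 1024u)   /* 8 MB backing store */
  static char   arena[ARENA_SIZE];
  static size_t arena_used;                 /* bytes already handed out */

  void* arena_morecore(ptrdiff_t incr) {
    if (incr == 0)                          /* query: current end of memory */
      return arena + arena_used;
    if (incr < 0)                           /* trimming unsupported */
      return (void*)(size_t)-1;             /* i.e. MFAIL */
    if ((size_t)incr > ARENA_SIZE - arena_used)
      return (void*)(size_t)-1;             /* arena exhausted */
    {
      void* old_brk = arena + arena_used;   /* contiguous extension */
      arena_used += (size_t)incr;
      return old_brk;
    }
  }
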
4980 
4981 
4982 /* -----------------------------------------------------------------------
4983 History:
4984  V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
4985  * Add max_footprint functions
4986  * Ensure all appropriate literals are size_t
4987  * Fix conditional compilation problem for some #define settings
4988  * Avoid concatenating segments with the one provided
4989  in create_mspace_with_base
4990  * Rename some variables to avoid compiler shadowing warnings
4991  * Use explicit lock initialization.
4992  * Better handling of sbrk interference.
4993  * Simplify and fix segment insertion, trimming and mspace_destroy
4994  * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4995  * Thanks especially to Dennis Flanagan for help on these.
4996 
4997  V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
4998  * Fix memalign brace error.
4999 
5000  V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
5001  * Fix improper #endif nesting in C++
5002  * Add explicit casts needed for C++
5003 
5004  V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
5005  * Use trees for large bins
5006  * Support mspaces
5007  * Use segments to unify sbrk-based and mmap-based system allocation,
5008  removing need for emulation on most platforms without sbrk.
5009  * Default safety checks
5010  * Optional footer checks. Thanks to William Robertson for the idea.
5011  * Internal code refactoring
5012  * Incorporate suggestions and platform-specific changes.
5013  Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5014  Aaron Bachmann, Emery Berger, and others.
5015  * Speed up non-fastbin processing enough to remove fastbins.
5016  * Remove useless cfree() to avoid conflicts with other apps.
5017  * Remove internal memcpy, memset. Compilers handle builtins better.
5018  * Remove some options that no one ever used and rename others.
5019 
5020  V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5021  * Fix malloc_state bitmap array misdeclaration
5022 
5023  V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5024  * Allow tuning of FIRST_SORTED_BIN_SIZE
5025  * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5026  * Better detection and support for non-contiguousness of MORECORE.
5027  Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5028  * Bypass most of malloc if no frees. Thanks to Emery Berger.
5029  * Fix freeing of old top non-contiguous chunk in sysmalloc.
5030  * Raised default trim and map thresholds to 256K.
5031  * Fix mmap-related #defines. Thanks to Lubos Lunak.
5032  * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5033  * Branch-free bin calculation
5034  * Default trim and mmap thresholds now 256K.
5035 
5036  V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5037  * Introduce independent_comalloc and independent_calloc.
5038  Thanks to Michael Pachos for motivation and help.
5039  * Make optional .h file available
5040  * Allow > 2GB requests on 32bit systems.
5041  * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5042  Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5043  and Anonymous.
5044  * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5045  helping test this.)
5046  * memalign: check alignment arg
5047  * realloc: don't try to shift chunks backwards, since this
5048  leads to more fragmentation in some programs and doesn't
5049  seem to help in any others.
5050  * Collect all cases in malloc requiring system memory into sysmalloc
5051  * Use mmap as backup to sbrk
5052  * Place all internal state in malloc_state
5053  * Introduce fastbins (although similar to 2.5.1)
5054  * Many minor tunings and cosmetic improvements
5055  * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5056  * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5057  Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5058  * Include errno.h to support default failure action.
5059 
5060  V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5061  * Return null for negative arguments
5062  * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5063  * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5064  (e.g. WIN32 platforms)
5065  * Cleanup header file inclusion for WIN32 platforms
5066  * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5067  * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5068  memory allocation routines
5069  * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5070  * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5071  usage of 'assert' in non-WIN32 code
5072  * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5073  avoid infinite loop
5074  * Always call 'fREe()' rather than 'free()'
5075 
5076  V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5077  * Fixed ordering problem with boundary-stamping
5078 
5079  V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5080  * Added pvalloc, as recommended by H.J. Liu
5081  * Added 64bit pointer support mainly from Wolfram Gloger
5082  * Added anonymously donated WIN32 sbrk emulation
5083  * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5084  * malloc_extend_top: fix mask error that caused wastage after
5085  foreign sbrks
5086  * Add linux mremap support code from HJ Liu
5087 
5088  V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5089  * Integrated most documentation with the code.
5090  * Add support for mmap, with help from
5091  Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5092  * Use last_remainder in more cases.
5093  * Pack bins using idea from colin@nyx10.cs.du.edu
5094  * Use ordered bins instead of best-fit threshold
5095  * Eliminate block-local decls to simplify tracing and debugging.
5096  * Support another case of realloc via move into top
5097  * Fix error occurring when initial sbrk_base not word-aligned.
5098  * Rely on page size for units instead of SBRK_UNIT to
5099  avoid surprises about sbrk alignment conventions.
5100  * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5101  (raymond@es.ele.tue.nl) for the suggestion.
5102  * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5103  * More precautions for cases where other routines call sbrk,
5104  courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5105  * Added macros etc., allowing use in linux libc from
5106  H.J. Lu (hjl@gnu.ai.mit.edu)
5107  * Inverted this history list
5108 
5109  V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5110  * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5111  * Removed all preallocation code since under current scheme
5112  the work required to undo bad preallocations exceeds
5113  the work saved in good cases for most test programs.
5114  * No longer use return list or unconsolidated bins since
5115  no scheme using them consistently outperforms those that don't
5116  given above changes.
5117  * Use best fit for very large chunks to prevent some worst-cases.
5118  * Added some support for debugging
5119 
5120  V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5121  * Removed footers when chunks are in use. Thanks to
5122  Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5123 
5124  V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5125  * Added malloc_trim, with help from Wolfram Gloger
5126  (wmglo@Dent.MED.Uni-Muenchen.DE).
5127 
5128  V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5129 
5130  V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5131  * realloc: try to expand in both directions
5132  * malloc: swap order of clean-bin strategy;
5133  * realloc: only conditionally expand backwards
5134  * Try not to scavenge used bins
5135  * Use bin counts as a guide to preallocation
5136  * Occasionally bin return list chunks in first scan
5137  * Add a few optimizations from colin@nyx10.cs.du.edu
5138 
5139  V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5140  * faster bin computation & slightly different binning
5141  * merged all consolidations to one part of malloc proper
5142  (eliminating old malloc_find_space & malloc_clean_bin)
5143  * Scan 2 returns chunks (not just 1)
5144  * Propagate failure in realloc if malloc returns 0
5145  * Add stuff to allow compilation on non-ANSI compilers
5146  from kpv@research.att.com
5147 
5148  V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5149  * removed potential for odd address access in prev_chunk
5150  * removed dependency on getpagesize.h
5151  * misc cosmetics and a bit more internal documentation
5152  * anticosmetics: mangled names in macros to evade debugger strangeness
5153  * tested on sparc, hp-700, dec-mips, rs6000
5154  with gcc & native cc (hp, dec only) allowing
5155  Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5156 
5157  Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5158  * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5159  structure of old version, but most details differ.)
5160 
5161 */