/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already, by default, be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
                                          8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       the heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics. This further improves security at
       the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default, detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc. is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However, it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out-of-date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes the compiler to prefix all public routines with the string
  'dl'.  This can be useful when you only want to use this malloc in
  one part of a program, using your regular system malloc elsewhere,
  as in the sketch below.
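
  For example, a minimal sketch (assuming this file is compiled with
  -DUSE_DL_PREFIX, so its routines keep their 'dl' names):

    void* p = dlmalloc(100);  // served by this allocator
    void* q = malloc(100);    // served by the system allocator
    dlfree(p);
    free(q);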

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses, etc.) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses are bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails because no
  memory is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.

MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of the fact that consecutive calls to
  MORECORE with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous saves
  the time and possibly wasted space it would otherwise take to
  discover this, though.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from the system. Note: A single call to MUNMAP is assumed to
  be able to unmap memory that may have been allocated using multiple
  calls to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize the secure magic seed
  for stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least
  one page and must be a power of two.  Setting to 0 causes
  initialization to either page size or win32 region size.  (Note: In
  previous versions of malloc, the equivalent of this option was
  called "TOP_PAD".)

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks), the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set it to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set it to MAX_SIZE_T. Note that
  the trick some people use of mallocing a huge space and then freeing
  it at program startup, in an attempt to reserve system memory,
  doesn't have the intended effect under automatic trimming, since
  that memory will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: mmapped space can always be individually released
  back to the system, which helps keep the system-level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh the disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting it to MAX_SIZE_T.

*/

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif  /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif  /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define them as unsigned long. If your system defines the fields using
  a type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */

struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);
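
/*
  A minimal usage sketch (assuming the default configuration, in which
  dlmalloc is visible as plain malloc and failure sets errno; n is some
  request size):

    void* p = malloc(n);
    if (p == 0) {
      // allocation failed; errno is ENOMEM on ANSI C systems
    }
*/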

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);
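
/*
  Because a failed realloc leaves p intact, the usual safe growth
  pattern looks like the following sketch (handle_oom is a hypothetical
  application routine, not part of this file):

    void* bigger = realloc(p, newsize);
    if (bigger != 0)
      p = bigger;     // success: adopt the possibly-moved chunk
    else
      handle_oom(p);  // failure: p is still valid and still owned
*/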

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);
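
/*
  For example, a sketch of requesting a 64-byte-aligned buffer (64 is
  an arbitrary power of two here, e.g. a typical cache-line size):

    void* buf = memalign(64, 1024);
    assert(((size_t)buf & 63) == 0);  // address is a multiple of 64
*/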

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
int dlmallopt(int, int);
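
/*
  For example, a sketch of tuning these parameters (the values shown
  are illustrative, not recommendations):

    mallopt(M_TRIM_THRESHOLD, 128*1024);   // trim when >128K is unused on top
    mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap only requests >= 1MB
*/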

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than the current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);
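
/*
  A sketch of using the two footprint calls as a cheap memory monitor
  (printf is used here only for illustration):

    printf("current footprint: %lu bytes\n",
           (unsigned long)malloc_footprint());
    printf("peak footprint:    %lu bytes\n",
           (unsigned long)malloc_max_footprint());
*/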

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */
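
/*
  For example, a sketch of reading a few of these fields (the casts
  assume the default size_t field type):

    struct mallinfo mi = mallinfo();
    printf("allocated: %lu free: %lu mmapped: %lu\n",
           (unsigned long)mi.uordblks,
           (unsigned long)mi.fordblks,
           (unsigned long)mi.hblkhd);
*/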

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int i;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... };
  struct Foot { ... };

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from a series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  rounds up n to the nearest pagesize.
 */
void*  dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
int  dlmalloc_trim(size_t);
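
/*
  For example, a sketch of trimming after a phase that frees many large
  blocks (the 64K pad is an arbitrary choice):

    int released = malloc_trim(64*1024);  // keep 64K of trailing slack
    // released is 1 if any memory went back to the system, else 0
*/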

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
void  dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

#if MSPACES

/*
  mspace is an opaque type representing an independent
  region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size.  It
  returns null if there is no system memory available to create the
  space.  If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests.  You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
*/
mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
size_t destroy_mspace(mspace msp);

/*
  create_mspace_with_base uses the memory supplied as the initial base
  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
  space is used for bookkeeping, so the capacity must be at least this
  large. (Otherwise 0 is returned.) When this initial space is
  exhausted, additional memory will be obtained from the system.
  Destroying this space will deallocate all additionally allocated
  space (if possible) but not the initial base.
*/
mspace create_mspace_with_base(void* base, size_t capacity, int locked);
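
/*
  For example, a sketch of carving an mspace out of a static buffer
  (the 1MB size is arbitrary but comfortably exceeds the
  128*sizeof(size_t) bookkeeping minimum):

    static char arena[1 << 20];
    mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
    void* p = mspace_malloc(ms, 128);
*/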

/*
  mspace_malloc behaves as malloc, but operates within
  the given space.
*/
void* mspace_malloc(mspace msp, size_t bytes);

/*
  mspace_free behaves as free, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_free is not actually needed.
  free may be called instead of mspace_free because freed chunks from
  any space are handled by their originating spaces.
*/
void mspace_free(mspace msp, void* mem);

/*
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed.  realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.
*/
void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
  mspace_calloc behaves as calloc, but operates within
  the given space.
*/
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
  mspace_memalign behaves as memalign, but operates within
  the given space.
*/
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);

/*
  mspace_independent_calloc behaves as independent_calloc, but
  operates within the given space.
*/
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]);

/*
  mspace_independent_comalloc behaves as independent_comalloc, but
  operates within the given space.
*/
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]);

/*
  mspace_footprint() returns the number of bytes obtained from the
  system for this space.
*/
size_t mspace_footprint(mspace msp);

/*
  mspace_max_footprint() returns the peak number of bytes obtained from the
  system for this space.
*/
size_t mspace_max_footprint(mspace msp);


#if !NO_MALLINFO
/*
  mspace_mallinfo behaves as mallinfo, but reports properties of
  the given space.
*/
struct mallinfo mspace_mallinfo(mspace msp);
#endif /* NO_MALLINFO */

/*
  mspace_malloc_stats behaves as malloc_stats, but reports
  properties of the given space.
*/
void mspace_malloc_stats(mspace msp);

/*
  mspace_trim behaves as malloc_trim, but
  operates within the given space.
*/
int mspace_trim(mspace msp, size_t pad);

/*
  An alias for mallopt.
*/
int mspace_mallopt(int, int);

#endif /* MSPACES */

#ifdef __cplusplus
};  /* end of extern "C" */
#endif /* __cplusplus */

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef WIN32
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* WIN32 */

#include <stdio.h>       /* for printing in malloc_stats */

#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include <time.h>        /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#define assert(x)
#endif /* DEBUG */
#ifndef LACKS_STRING_H
#include <string.h>      /* for memset etc */
#endif  /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>    /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#if HAVE_MORECORE
#ifndef LACKS_UNISTD_H
#include <unistd.h>     /* for sbrk */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void*     sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */
#endif /* HAVE_MORECORE */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)
#define SIZE_T_ONE          ((size_t)1)
#define SIZE_T_TWO          ((size_t)2)
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1266
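
/*
  Worked example (addresses assumed for illustration only): with the
  default MALLOC_ALIGNMENT of 8, an address such as 0x100c is 4 bytes
  past an 8-byte boundary, so

    align_offset(0x100c) == (8 - (0x100c & 7)) & 7 == 4

  i.e. adding 4 bytes yields the aligned address 0x1010, while an
  already-aligned address yields an offset of 0.
*/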
 
/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
   checks to fail so that the compiler optimizer can delete code
   rather than using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if !HAVE_MMAP
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
#define USE_MMAP_BIT         (SIZE_T_ZERO)
#define CALL_MMAP(s)         MFAIL
#define CALL_MUNMAP(a, s)    (-1)
#define DIRECT_MMAP(s)       MFAIL

#else /* HAVE_MMAP */
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
#define USE_MMAP_BIT         (SIZE_T_ONE)

#ifndef WIN32
#define CALL_MUNMAP(a, s)    munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP(s)       CALL_MMAP(s)
#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}

#define CALL_MMAP(s)         win32mmap(s)
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
#define DIRECT_MMAP(s)       win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */

#if HAVE_MMAP && HAVE_MREMAP
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#else  /* HAVE_MMAP && HAVE_MREMAP */
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

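/*
  Illustrative sketch (not an excerpt from a specific routine):
  callers of these wrappers test against MFAIL/CMFAIL rather than
  NULL or MAP_FAILED, e.g., for some page-aligned length len:

    char* base = (char*)(CALL_MMAP(len));
    if (base != CMFAIL) {
      ... use the region ...
      if (CALL_MUNMAP(base, len) == 0)
        ... the region was released ...
    }
*/
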
#if HAVE_MORECORE
#define CALL_MORECORE(S)     MORECORE(S)
#else  /* HAVE_MORECORE */
#define CALL_MORECORE(S)     MFAIL
#endif /* HAVE_MORECORE */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

#if USE_LOCKS

/*
  When locks are defined, there are up to two global locks:

  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
    MORECORE.  In many cases sys_alloc requires two calls that should
    not be interleaved with calls by other threads.  This does not
    protect against direct calls to MORECORE by other threads not
    using this lock, so there is still code to cope as best we can
    with interference.

  * magic_init_mutex ensures that mparams.magic and other
    unique mparams values are initialized only once.
*/

#ifndef WIN32
/* By default use posix locks */
#include <pthread.h>
#define MLOCK_T pthread_mutex_t
#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)

#if HAVE_MORECORE
static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
#endif /* HAVE_MORECORE */

static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;

#else /* WIN32 */
/*
   Because lock-protected regions have bounded times, and there
   are no recursive lock calls, we can use simple spinlocks.
*/

#define MLOCK_T long
static int win32_acquire_lock (MLOCK_T *sl) {
  for (;;) {
#ifdef InterlockedCompareExchangePointer
    if (!InterlockedCompareExchange(sl, 1, 0))
      return 0;
#else  /* Use older void* version */
    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
      return 0;
#endif /* InterlockedCompareExchangePointer */
    Sleep (0);
  }
}

static void win32_release_lock (MLOCK_T *sl) {
  InterlockedExchange (sl, 0);
}

#define INITIAL_LOCK(l)      *(l)=0
#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
#define RELEASE_LOCK(l)      win32_release_lock(l)
#if HAVE_MORECORE
static MLOCK_T morecore_mutex;
#endif /* HAVE_MORECORE */
static MLOCK_T magic_init_mutex;
#endif /* WIN32 */

#define USE_LOCK_BIT               (2U)
#else  /* USE_LOCKS */
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)
#endif /* USE_LOCKS */

#if USE_LOCKS && HAVE_MORECORE
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
#else /* USE_LOCKS && HAVE_MORECORE */
#define ACQUIRE_MORECORE_LOCK()
#define RELEASE_MORECORE_LOCK()
#endif /* USE_LOCKS && HAVE_MORECORE */

#if USE_LOCKS
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
#else  /* USE_LOCKS */
#define ACQUIRE_MAGIC_INIT_LOCK()
#define RELEASE_MAGIC_INIT_LOCK()
#endif /* USE_LOCKS */


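/*
  Illustrative sketch of how the morecore lock is used (system-level
  allocation code later in this file follows this pattern; asize is
  an assumed request size):

    ACQUIRE_MORECORE_LOCK();
    brk = (char*)(CALL_MORECORE(asize));     -- grow the space
    end = (char*)(CALL_MORECORE(0));         -- probe the new break
    RELEASE_MORECORE_LOCK();

  When USE_LOCKS is zero, both macros expand to nothing, so the same
  code compiles with or without locking.
*/
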
/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 0)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
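
/*
  Worked example: on a 32-bit build without FOOTERS (SIZE_T_SIZE == 4,
  MALLOC_ALIGNMENT == 8, so CHUNK_OVERHEAD == 4, MIN_CHUNK_SIZE == 16,
  and MIN_REQUEST == 11):

    request2size(1)  == 16   (below MIN_REQUEST, rounded up to minimum)
    request2size(13) == (13 + 4 + 7) & ~7 == 24
    request2size(24) == (24 + 4 + 7) & ~7 == 32

  i.e. each usable size covers the request plus one size_t of overhead,
  rounded up to an 8-byte multiple.
*/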
 

/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set, with the remaining bits holding the offset of
  the base of the mmapped region to the base of the chunk.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */
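
/*
  Worked example (values assumed for illustration): a head word of
  (24 | PINUSE_BIT | CINUSE_BIT) == 27 describes a 24-byte chunk that
  is itself in use and whose lower neighbor is in use:

    chunksize(p)  == 24
    cinuse(p)     != 0
    pinuse(p)     != 0
    next_chunk(p) == (mchunkptr)((char*)p + 24)

  and prev_chunk(p) is only meaningful when pinuse(p) is clear.
*/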
 
/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small"  chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/

struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
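
/*
  A minimal sketch of the walk described above; tree_path_best is a
  hypothetical name used only for illustration (the real lookup code
  interleaves this walk with fit checks):

    static tchunkptr tree_path_best(tchunkptr t) {
      tchunkptr best = t;
      while ((t = leftmost_child(t)) != 0) {
        if (chunksize(t) < chunksize(best))
          best = t;
      }
      return best;
    }

  The smallest chunk lies somewhere along the path, not necessarily
  at the final leaf, hence the running minimum.
*/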
 
/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from the system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds
    anything larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on a 64-bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE.

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
};

typedef struct malloc_state*    mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)
#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)

/*  True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* -------------------------------  Hooks -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS

/* Ensure locks are initialized */
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())

#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */

/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */

/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))
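
/*
  For example, a free chunk of size 40 belongs in small bin
  small_index(40) == 40 >> 3 == 5, and small_index2size(5) == 40.
  smallbin_at(M, 5) points at &((M)->smallbins[10]) viewed as a
  malloc_chunk, so only the fd/bk fields of each bin header are
  backed by real array slots -- the repositioning trick noted above.
*/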
 
/* assign tree index for size S to variable I */
#if defined(__GNUC__) && defined(i386)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */
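
/*
  Worked example: for S == 512, X == 512 >> 8 == 2, and the highest
  set bit of X is bit K == 1, so

    I == (1 << 1) + ((512 >> (1 + 7)) & 1) == 2 + 0 == 2

  and treebin 2 indeed holds sizes [0x200, 0x300), while S == 384
  (X == 1, K == 0) gives I == 1, the upper half [0x180, 0x200) of the
  smallest range.
*/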
 
/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))


/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* index corresponding to given bit */

#if defined(__GNUC__) && defined(i386)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#else /* GNUC */
#if  USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else /* USE_BUILTIN_FFS */
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* USE_BUILTIN_FFS */
#endif /* GNUC */

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
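
/*
  Worked example (values assumed for illustration): suppose
  smallmap == 0xa0, i.e. only bins 5 and 7 are non-empty, and we want
  the first usable bin of index >= 4:

    same_or_left_bits(idx2bit(4)) == 0xfffffff0   (bins 4 and above)
    0xfffffff0 & smallmap         == 0xa0
    least_bit(0xa0)               == 0x20
    compute_bit2idx(0x20, I)      yields I == 5

  so bin 5 is selected without scanning bins 4..7 one by one.
*/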
 

/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  which is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc). This turns out to be cheaper than relying on hashes.
*/

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */


/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */
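
/*
  Typical usage pattern (a sketch, not an excerpt from a specific
  routine): checks are wrapped in RTCHECK so that gcc treats the
  failure branch as unlikely:

    if (RTCHECK(ok_address(m, p) && ok_cinuse(p))) {
      ... fast path ...
    }
    else
      CORRUPTION_ERROR_ACTION(m);

  With INSECURE defined, the ok_* macros and RTCHECK are constant 1,
  so the error branch disappears at compile time.
*/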
 
/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
 mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */

/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {
  if (mparams.page_size == 0) {
    size_t s;

    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if (FOOTERS && !INSECURE)
    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        s = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
        s = (size_t)(time(0) ^ (size_t)0x55555555U);

      s |= (size_t)8U;    /* ensure nonzero */
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */

    }
#else /* (FOOTERS && !INSECURE) */
    s = (size_t)0x58585858U;
#endif /* (FOOTERS && !INSECURE) */
    ACQUIRE_MAGIC_INIT_LOCK();
    if (mparams.magic == 0) {
      mparams.magic = s;
      /* Set up lock for main malloc area */
      INITIAL_LOCK(&gm->mutex);
      gm->mflags = mparams.default_mflags;
    }
    RELEASE_MAGIC_INIT_LOCK();

#ifndef WIN32
    mparams.page_size = malloc_getpagesize;
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
                           DEFAULT_GRANULARITY : mparams.page_size);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      mparams.page_size = system_info.dwPageSize;
      mparams.granularity = system_info.dwAllocationGranularity;
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
      ABORT;
  }
  return 0;
}

/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (size_t)value;
  init_mparams();
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
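
/*
  The public mallopt entry point (elsewhere in this file) forwards
  here.  For example, assuming a page size of 4096:

    change_mparam(M_GRANULARITY, 65536)  returns 1  (power of two, >= page)
    change_mparam(M_GRANULARITY, 3000)   returns 0  (not a power of two)
    change_mparam(M_TRIM_THRESHOLD, -1)  returns 1  (thresholds accept any
                                                     value; (size_t)-1 in
                                                     effect disables trimming)
*/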
 
#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = chunksize(p);
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!next_pinuse(p));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert (!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert (next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}

/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/*  Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/*  Check all the chunks in a smallbin.  */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}

/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}

/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    assert(m->topsize == chunksize(m->top));
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */
 
/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */

static void internal_malloc_stats(mstate m) {
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}
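/*
  Illustrative sketch (added commentary, not part of the original
  source): these internal routines back the public dlmallinfo() (when
  !NO_MALLINFO) and dlmalloc_stats() entry points, so heap usage can
  be inspected roughly as follows.
*/
#if 0
static void example_report_heap(void) {
  struct mallinfo mi = dlmallinfo();
  fprintf(stderr, "arena=%lu free=%lu keepcost=%lu\n",
          (unsigned long)mi.arena, (unsigned long)mi.fordblks,
          (unsigned long)mi.keepcost);
  dlmalloc_stats(); /* prints max/current footprint and in-use bytes */
}
#endif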
 
/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}

/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}

/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendants
     correspond properly to bit masks.  We use the rightmost descendant
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links), relink
     x's parent and children to x's replacement (or null if none).
*/
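/*
  Illustrative sketch (added commentary, not part of the original
  source).  For a treebin position held by same-sized chunks X, B, C
  chained circularly via fd/bk, with only X carrying parent/child
  links, unlinking X takes steps 1 and 3 only:

        parent                     parent
          |                          |
          X <-> B <-> C     =>       R <-> B      (R = X->bk == C takes
         / \                        / \            over X's parent and
     child0 child1              child0 child1      children)

  The leaf-hunting walk of step 2 runs only when X was alone at its
  size, descending child[1]-else-child[0] until it reaches a leaf.
*/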
 
#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F))) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}

/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */

/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).  There is also enough space
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
  the PINUSE bit so frees can be checked.
*/
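/*
  Illustrative sketch (added commentary, not part of the original
  source): given a directly mmapped chunk p, the mapping arguments are
  recovered the same way free() does it later in this file:

    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
    char*  base   = (char*)p - offset;
    size_t len    = chunksize(p) + offset + MMAP_FOOT_PAD;
    CALL_MUNMAP(base, len);
*/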
 
/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
  size_t oldsize = chunksize(oldp);
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
                                         CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}
 
/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}

#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallbins = m->treebins = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!cinuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}


/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;

  init_mparams();

  /* Directly map large chunks */
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)
  */
 
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    size_t asize = 0;
    ACQUIRE_MORECORE_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        /* Can't call MORECORE if size is negative when treated as signed */
        if (asize < HALF_MAX_SIZE_T &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
      /* Use mem here only if it did continuously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MORECORE_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
    size_t rsize = granularity_align(req);
    if (rsize > nb) { /* Fail if wraps around zero */
      char* mp = (char*)(CALL_MMAP(rsize));
      if (mp != CMFAIL) {
        tbase = mp;
        tsize = rsize;
        mmap_flag = IS_MMAPPED_BIT;
      }
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MORECORE_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MORECORE_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      m->seg.base = m->least_addr = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      init_bins(m);
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}
 
/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    pred = sp;
    sp = next;
  }
  return released;
}

static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MORECORE_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MORECORE_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}
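/*
  Illustrative sketch (added commentary, not part of the original
  source): sys_trim backs the public dlmalloc_trim() entry point.
  After releasing a burst of allocations, unused top space can be
  returned to the system; the 64KB pad here is arbitrary.
*/
#if 0
static void example_trim(void) {
  if (dlmalloc_trim(64 * 1024) == 0) {
    /* nothing was trimmable (or trimming is unsupported) */
  }
}
#endif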
 
/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);

  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }

  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /*  If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}

/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);

  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}
 
/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize |PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}
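/*
  Illustrative sketch (added commentary, not part of the original
  source): internal_realloc backs the public dlrealloc().  Note the
  in-place fast paths above: shrinking splits off and frees an "extra"
  chunk, and growing avoids a copy when the neighbor is top.
*/
#if 0
static void* example_grow(void* buf, size_t newn) {
  void* p = dlrealloc(buf, newn); /* may move; contents are preserved */
  return (p != 0)? p : buf;       /* on failure the old block is intact */
}
#endif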
 
/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert (chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}
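/*
  Illustrative sketch (added commentary, not part of the original
  source): internal_memalign backs the public dlmemalign().  A
  non-power-of-2 alignment is silently rounded up, so obtaining a
  page-aligned I/O buffer looks like this (4096 is an example).
*/
#if 0
static void* example_page_buffer(void) {
  void* buf = dlmemalign(4096, 8192); /* alignment, then request size */
  /* whenever buf != 0, ((size_t)buf % 4096) == 0 */
  return buf;
}
#endif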
 
/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}
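/*
  Illustrative sketch (added commentary, not part of the original
  source): ialloc backs the public dlindependent_calloc() and
  dlindependent_comalloc() routines.  Allocating 100 zeroed,
  individually freeable nodes in one aggregate request (opts == 3:
  same-size + zeroed) looks roughly like this; struct node is a
  hypothetical example type.
*/
#if 0
struct node { struct node* next; int value; };
static void example_independent_calloc(void) {
  size_t i;
  void** ptrs = dlindependent_calloc(100, sizeof(struct node), 0);
  if (ptrs != 0) {
    for (i = 0; i < 100; ++i)
      ((struct node*)ptrs[i])->value = (int)i; /* memory arrives zeroed */
    for (i = 0; i < 100; ++i)
      dlfree(ptrs[i]);  /* each element may be freed independently */
    dlfree(ptrs);       /* the pointer array is itself a malloced chunk */
  }
}
#endif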
 

/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly gotos here ensure that postaction occurs along all paths.
  */
 
  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}
 
4155
void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
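            /* Editorial note: for an individually mmapped chunk,
               prev_foot holds the offset back to the start of the
               mapping, tagged with IS_MMAPPED_BIT; the whole mapping
               is released directly instead of binning the chunk. */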
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
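
/*
  A worked example of the overflow guard in dlcalloc above (an editorial
  note, not part of the original source): the division test is needed
  only when either operand has bits at or above bit 16, since two
  factors below 2^16 cannot overflow a 32-bit (or wider) size_t.  With a
  32-bit size_t and n_elements = elem_size = 0x20000, req wraps to 0;
  then req / n_elements == 0 != elem_size, so req is forced to
  MAX_SIZE_T and the allocation fails downstream instead of returning a
  too-small block.
*/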

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}
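
/*
  In both wrappers above, the fourth argument to ialloc is an option
  bitmap: bit 0 set means the size vector is a one-element array holding
  a common element size, and bit 1 set means elements are cleared.  So 3
  requests same-size, zeroed elements (calloc semantics), while 0
  requests individually sized, uncleared elements.
*/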

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
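
/*
  dlpvalloc rounds the request up to a whole number of pages before
  delegating to dlmemalign.  For instance (illustrative arithmetic
  only): with pagesz = 4096 and bytes = 5000,
  (5000 + 4095) & ~4095 == 8192, so two full pages are requested,
  whereas dlvalloc would merely page-align a 5000-byte request.
*/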

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
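    /* Editorial note: the negated sum, cast to size_t, wraps to
       MAX_SIZE_T minus the bookkeeping overhead (plus one); the test
       thus rejects capacities so large that adding the overhead below
       would wrap around size_t. */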
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = IS_MMAPPED_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/
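
/*
  A minimal usage sketch of the mspace API (an editorial illustration,
  not part of the original source): carve out a private heap, allocate
  and free within it, then release every segment at once.

      mspace msp = create_mspace(0, 0);      // default capacity, unlocked
      if (msp != 0) {
        void* p = mspace_malloc(msp, 128);   // served from msp only
        mspace_free(msp, p);
        size_t freed = destroy_mspace(msp);  // unmaps all mmapped segments
      }

  Since destroy_mspace releases everything still allocated in the space,
  per-object mspace_free calls are optional when the whole heap is being
  discarded.
*/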


void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.
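
  As a minimal sketch of these rules (an editorial illustration; on
  Unix-like systems the default MORECORE is simply sbrk, which already
  satisfies them), a conforming version could look like:

      void *unixMoreCore(int size)
      {
        void *p = sbrk(size);    // old break == start of the new region
        if (p == (void *) -1)    // sbrk signals failure with -1
          return (void *) MFAIL;
        return p;                // for size == 0: the current break end
      }

      #define MORECORE unixMoreCore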

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * New WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * Return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * Faster bin computation & slightly different binning
      * Merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * Removed potential for odd address access in prev_chunk
      * Removed dependency on getpagesize.h
      * Misc cosmetics and a bit more internal documentation
      * Anticosmetics: mangled names in macros to evade debugger strangeness
      * Tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/