Subversion Repositories HelenOS-historic


Rev Author Line No. Line
968 palkovsky 1
/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
12
 
13
* Quickstart
14
 
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
20
 
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
34
 
35
* Vital statistics:
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
43
 
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128 bytes), at the expense of using more space.
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
50
                                          8 or 16 bytes (if 8-byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
52
       and status information, and an additional cross-check word
53
       if FOOTERS is defined.
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., number of extra bytes
61
       allocated beyond those requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
90
 
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
100
 
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
  else. And if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
129
 
130
* Overview of algorithms
131
 
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256 bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
157
 
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
162
 
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
167
 
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
201
 
202
 -------------------------  Compile-time options ---------------------------
203
 
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly cast.
207
 
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
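  For example (a sketch; the value shown is an assumption, any power of
  two of at least 8 can be used), a program needing 16-byte aligned
  chunks could compile this file with:
     -DMALLOC_ALIGNMENT='((size_t)16U)'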
218
 
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
222
 
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
225
 
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
230
 
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
235
 
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
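  For example (a sketch), when this file is compiled with
  -DUSE_DL_PREFIX the entry points become dlmalloc, dlfree, dlrealloc,
  etc., and can coexist with the system allocator:
     void* p = dlmalloc(100);   // this malloc
     void* q = malloc(100);     // system malloc
     dlfree(p);
     free(q);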
243
 
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to be bypassed
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
265
 
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282
  which will usually make debugging easier.
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc fails to
286
  return memory because there is none available.
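  For example (a sketch; my_oom_hook is a hypothetical function you
  would supply), a program could substitute a callback for the default
  errno assignment:
     #define MALLOC_FAILURE_ACTION  my_oom_hook();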
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
290
 
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
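  As an illustration only (a sketch; the names my_arena, my_brk and
  my_morecore, and the fixed 1MB arena, are assumptions and not part of
  this file), a minimal MORECORE over a static buffer could look like:
     static char my_arena[1 << 20];
     static char* my_brk = my_arena;
     void* my_morecore(intptr_t incr) {
       if (incr < 0 || my_brk + incr > my_arena + sizeof(my_arena))
         return (void*)(~(size_t)0);    // MFAIL; reported as failure
       my_brk += incr;
       return my_brk - incr;
     }
  selected by defining MORECORE to my_morecore and, since this version
  rejects negative arguments, also defining MORECORE_CANNOT_TRIM.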
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
309
  However, setting it false when MORECORE is definitely non-contiguous
310
  saves the time and possibly wasted space it would otherwise take to
  discover this.
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
317
 
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
  able to unmap memory that may have been allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
331
 
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where page size is determined using getSystemInfo during
349
  initialization.
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
354
 
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
358
  those in this file.
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions from being called so
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
438
 
439
*/
440
 
441
#ifndef WIN32
442
#ifdef _WIN32
443
#define WIN32 1
444
#endif  /* _WIN32 */
445
#endif  /* WIN32 */
446
#ifdef WIN32
447
#define WIN32_LEAN_AND_MEAN
448
#include <windows.h>
449
#define HAVE_MMAP 1
450
#define HAVE_MORECORE 0
451
#define LACKS_UNISTD_H
452
#define LACKS_SYS_PARAM_H
453
#define LACKS_SYS_MMAN_H
454
#define LACKS_STRING_H
455
#define LACKS_STRINGS_H
456
#define LACKS_SYS_TYPES_H
457
#define LACKS_ERRNO_H
458
#define MALLOC_FAILURE_ACTION
459
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
460
#endif  /* WIN32 */
461
 
462
#if defined(DARWIN) || defined(_DARWIN)
463
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
464
#ifndef HAVE_MORECORE
465
#define HAVE_MORECORE 0
466
#define HAVE_MMAP 1
467
#endif  /* HAVE_MORECORE */
468
#endif  /* DARWIN */
469
 
470
#ifndef LACKS_SYS_TYPES_H
471
#include <sys/types.h>  /* For size_t */
472
#endif  /* LACKS_SYS_TYPES_H */
473
 
474
/* The maximum possible size_t value has all bits set */
475
#define MAX_SIZE_T           (~(size_t)0)
476
 
477
#ifndef ONLY_MSPACES
478
#define ONLY_MSPACES 0
479
#endif  /* ONLY_MSPACES */
480
#ifndef MSPACES
481
#if ONLY_MSPACES
482
#define MSPACES 1
483
#else   /* ONLY_MSPACES */
484
#define MSPACES 0
485
#endif  /* ONLY_MSPACES */
486
#endif  /* MSPACES */
487
#ifndef MALLOC_ALIGNMENT
488
#define MALLOC_ALIGNMENT ((size_t)8U)
489
#endif  /* MALLOC_ALIGNMENT */
490
#ifndef FOOTERS
491
#define FOOTERS 0
492
#endif  /* FOOTERS */
493
#ifndef ABORT
494
#define ABORT  abort()
495
#endif  /* ABORT */
496
#ifndef ABORT_ON_ASSERT_FAILURE
497
#define ABORT_ON_ASSERT_FAILURE 1
498
#endif  /* ABORT_ON_ASSERT_FAILURE */
499
#ifndef PROCEED_ON_ERROR
500
#define PROCEED_ON_ERROR 0
501
#endif  /* PROCEED_ON_ERROR */
502
#ifndef USE_LOCKS
503
#define USE_LOCKS 0
504
#endif  /* USE_LOCKS */
505
#ifndef INSECURE
506
#define INSECURE 0
507
#endif  /* INSECURE */
508
#ifndef HAVE_MMAP
509
#define HAVE_MMAP 1
510
#endif  /* HAVE_MMAP */
511
#ifndef MMAP_CLEARS
512
#define MMAP_CLEARS 1
513
#endif  /* MMAP_CLEARS */
514
#ifndef HAVE_MREMAP
515
#ifdef linux
516
#define HAVE_MREMAP 1
517
#else   /* linux */
518
#define HAVE_MREMAP 0
519
#endif  /* linux */
520
#endif  /* HAVE_MREMAP */
521
#ifndef MALLOC_FAILURE_ACTION
522
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
523
#endif  /* MALLOC_FAILURE_ACTION */
524
#ifndef HAVE_MORECORE
525
#if ONLY_MSPACES
526
#define HAVE_MORECORE 0
527
#else   /* ONLY_MSPACES */
528
#define HAVE_MORECORE 1
529
#endif  /* ONLY_MSPACES */
530
#endif  /* HAVE_MORECORE */
531
#if !HAVE_MORECORE
532
#define MORECORE_CONTIGUOUS 0
533
#else   /* !HAVE_MORECORE */
534
#ifndef MORECORE
535
#define MORECORE sbrk
536
#endif  /* MORECORE */
537
#ifndef MORECORE_CONTIGUOUS
538
#define MORECORE_CONTIGUOUS 1
539
#endif  /* MORECORE_CONTIGUOUS */
540
#endif  /* HAVE_MORECORE */
541
#ifndef DEFAULT_GRANULARITY
542
#if MORECORE_CONTIGUOUS
543
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
544
#else   /* MORECORE_CONTIGUOUS */
545
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
546
#endif  /* MORECORE_CONTIGUOUS */
547
#endif  /* DEFAULT_GRANULARITY */
548
#ifndef DEFAULT_TRIM_THRESHOLD
549
#ifndef MORECORE_CANNOT_TRIM
550
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
551
#else   /* MORECORE_CANNOT_TRIM */
552
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
553
#endif  /* MORECORE_CANNOT_TRIM */
554
#endif  /* DEFAULT_TRIM_THRESHOLD */
555
#ifndef DEFAULT_MMAP_THRESHOLD
556
#if HAVE_MMAP
557
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
558
#else   /* HAVE_MMAP */
559
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
560
#endif  /* HAVE_MMAP */
561
#endif  /* DEFAULT_MMAP_THRESHOLD */
562
#ifndef USE_BUILTIN_FFS
563
#define USE_BUILTIN_FFS 0
564
#endif  /* USE_BUILTIN_FFS */
565
#ifndef USE_DEV_RANDOM
566
#define USE_DEV_RANDOM 0
567
#endif  /* USE_DEV_RANDOM */
568
#ifndef NO_MALLINFO
569
#define NO_MALLINFO 0
570
#endif  /* NO_MALLINFO */
571
#ifndef MALLINFO_FIELD_TYPE
572
#define MALLINFO_FIELD_TYPE size_t
573
#endif  /* MALLINFO_FIELD_TYPE */
574
 
575
/*
576
  mallopt tuning options.  SVID/XPG defines four standard parameter
577
  numbers for mallopt, normally defined in malloc.h.  None of these
578
  are used in this malloc, so setting them has no effect. But this
579
  malloc does support the following options.
580
*/
581
 
582
#define M_TRIM_THRESHOLD     (-1)
583
#define M_GRANULARITY        (-2)
584
#define M_MMAP_THRESHOLD     (-3)
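/*
  For example (a sketch; the particular values are arbitrary), a
  program could tune these at startup:
    mallopt(M_TRIM_THRESHOLD, 1024*1024);  // keep up to 1MB of top space
    mallopt(M_MMAP_THRESHOLD, 512*1024);   // mmap requests of 512K and up
    mallopt(M_GRANULARITY,    128*1024);   // take system memory in 128K units
*/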
585
 
985 palkovsky 586
/** Non-default HelenOS customizations */
587
#define LACKS_FCNTL_H
588
#define LACKS_SYS_MMAN_H
589
#define LACKS_SYS_PARAM_H
590
#undef HAVE_MMAP
591
#define HAVE_MMAP 0
592
#define LACKS_ERRNO_H
593
/* Set errno? */
594
#undef MALLOC_FAILURE_ACTION
595
#define MALLOC_FAILURE_ACTION 
968 palkovsky 596
 
597
 
598
/*
599
  ========================================================================
600
  To make a fully customizable malloc.h header file, cut everything
601
  above this line, put into file malloc.h, edit to suit, and #include it
602
  on the next line, as well as in programs that use this malloc.
603
  ========================================================================
604
*/
605
 
985 palkovsky 606
#include "malloc.h"
968 palkovsky 607
 
608
/*------------------------------ internal #includes ---------------------- */
609
 
610
#ifdef WIN32
611
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
612
#endif /* WIN32 */
613
 
614
#include <stdio.h>       /* for printing in malloc_stats */
615
 
616
#ifndef LACKS_ERRNO_H
617
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
618
#endif /* LACKS_ERRNO_H */
619
#if FOOTERS
620
#include <time.h>        /* for magic initialization */
621
#endif /* FOOTERS */
622
#ifndef LACKS_STDLIB_H
623
#include <stdlib.h>      /* for abort() */
624
#endif /* LACKS_STDLIB_H */
625
#ifdef DEBUG
626
#if ABORT_ON_ASSERT_FAILURE
627
#define assert(x) if(!(x)) ABORT
628
#else /* ABORT_ON_ASSERT_FAILURE */
629
#include <assert.h>
630
#endif /* ABORT_ON_ASSERT_FAILURE */
631
#else  /* DEBUG */
632
#define assert(x)
633
#endif /* DEBUG */
634
#ifndef LACKS_STRING_H
635
#include <string.h>      /* for memset etc */
636
#endif  /* LACKS_STRING_H */
637
#if USE_BUILTIN_FFS
638
#ifndef LACKS_STRINGS_H
639
#include <strings.h>     /* for ffs */
640
#endif /* LACKS_STRINGS_H */
641
#endif /* USE_BUILTIN_FFS */
642
#if HAVE_MMAP
643
#ifndef LACKS_SYS_MMAN_H
644
#include <sys/mman.h>    /* for mmap */
645
#endif /* LACKS_SYS_MMAN_H */
646
#ifndef LACKS_FCNTL_H
647
#include <fcntl.h>
648
#endif /* LACKS_FCNTL_H */
649
#endif /* HAVE_MMAP */
650
#if HAVE_MORECORE
651
#ifndef LACKS_UNISTD_H
652
#include <unistd.h>     /* for sbrk */
653
#else /* LACKS_UNISTD_H */
654
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
655
extern void*     sbrk(ptrdiff_t);
656
#endif /* FreeBSD etc */
657
#endif /* LACKS_UNISTD_H */
658
#endif /* HAVE_MORECORE */
659
 
660
#ifndef WIN32
661
#ifndef malloc_getpagesize
662
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
663
#    ifndef _SC_PAGE_SIZE
664
#      define _SC_PAGE_SIZE _SC_PAGESIZE
665
#    endif
666
#  endif
667
#  ifdef _SC_PAGE_SIZE
668
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
669
#  else
670
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
671
       extern size_t getpagesize();
672
#      define malloc_getpagesize getpagesize()
673
#    else
674
#      ifdef WIN32 /* use supplied emulation of getpagesize */
675
#        define malloc_getpagesize getpagesize()
676
#      else
677
#        ifndef LACKS_SYS_PARAM_H
678
#          include <sys/param.h>
679
#        endif
680
#        ifdef EXEC_PAGESIZE
681
#          define malloc_getpagesize EXEC_PAGESIZE
682
#        else
683
#          ifdef NBPG
684
#            ifndef CLSIZE
685
#              define malloc_getpagesize NBPG
686
#            else
687
#              define malloc_getpagesize (NBPG * CLSIZE)
688
#            endif
689
#          else
690
#            ifdef NBPC
691
#              define malloc_getpagesize NBPC
692
#            else
693
#              ifdef PAGESIZE
694
#                define malloc_getpagesize PAGESIZE
695
#              else /* just guess */
696
#                define malloc_getpagesize ((size_t)4096U)
697
#              endif
698
#            endif
699
#          endif
700
#        endif
701
#      endif
702
#    endif
703
#  endif
704
#endif
705
#endif
706
 
707
/* ------------------- size_t and alignment properties -------------------- */
708
 
709
/* The byte and bit size of a size_t */
710
#define SIZE_T_SIZE         (sizeof(size_t))
711
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
712
 
713
/* Some constants coerced to size_t */
714
/* Annoying but necessary to avoid errors on some platforms */
715
#define SIZE_T_ZERO         ((size_t)0)
716
#define SIZE_T_ONE          ((size_t)1)
717
#define SIZE_T_TWO          ((size_t)2)
718
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
719
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
720
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
721
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
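/*
  For example, on a 32-bit system these evaluate to: SIZE_T_SIZE == 4,
  SIZE_T_BITSIZE == 32, TWO_SIZE_T_SIZES == 8, FOUR_SIZE_T_SIZES == 16,
  SIX_SIZE_T_SIZES == 24 and HALF_MAX_SIZE_T == 0x7fffffff.
*/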
722
 
723
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
724
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
725
 
726
/* True if address a has acceptable alignment */
727
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
728
 
729
/* the number of bytes to offset an address to align it */
730
#define align_offset(A)\
731
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
732
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
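/*
  For example, with the default 8-byte MALLOC_ALIGNMENT, CHUNK_ALIGN_MASK
  is 7; for an address A with (A & 7) == 5, align_offset(A) is
  (8 - 5) & 7 == 3, so A + 3 is aligned, while for an already aligned A
  the offset is 0.
*/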
733
 
734
/* -------------------------- MMAP preliminaries ------------------------- */
735
 
736
/*
737
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
738
   checks to fail so compiler optimizer can delete code rather than
739
   using so many "#if"s.
740
*/
741
 
742
 
743
/* MORECORE and MMAP must return MFAIL on failure */
744
#define MFAIL                ((void*)(MAX_SIZE_T))
745
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
746
 
747
#if !HAVE_MMAP
748
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
749
#define USE_MMAP_BIT         (SIZE_T_ZERO)
750
#define CALL_MMAP(s)         MFAIL
751
#define CALL_MUNMAP(a, s)    (-1)
752
#define DIRECT_MMAP(s)       MFAIL
753
 
754
#else /* HAVE_MMAP */
755
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
756
#define USE_MMAP_BIT         (SIZE_T_ONE)
757
 
758
#ifndef WIN32
759
#define CALL_MUNMAP(a, s)    munmap((a), (s))
760
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
761
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
762
#define MAP_ANONYMOUS        MAP_ANON
763
#endif /* MAP_ANON */
764
#ifdef MAP_ANONYMOUS
765
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
766
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
767
#else /* MAP_ANONYMOUS */
768
/*
769
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
770
   is unlikely to be needed, but is supplied just in case.
771
*/
772
#define MMAP_FLAGS           (MAP_PRIVATE)
773
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
774
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
775
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
776
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
777
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
778
#endif /* MAP_ANONYMOUS */
779
 
780
#define DIRECT_MMAP(s)       CALL_MMAP(s)
781
#else /* WIN32 */
782
 
783
/* Win32 MMAP via VirtualAlloc */
784
static void* win32mmap(size_t size) {
785
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
786
  return (ptr != 0)? ptr: MFAIL;
787
}
788
 
789
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
790
static void* win32direct_mmap(size_t size) {
791
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
792
                           PAGE_READWRITE);
793
  return (ptr != 0)? ptr: MFAIL;
794
}
795
 
796
/* This function supports releasing coalesced segments */
797
static int win32munmap(void* ptr, size_t size) {
798
  MEMORY_BASIC_INFORMATION minfo;
799
  char* cptr = ptr;
800
  while (size) {
801
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
802
      return -1;
803
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
804
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
805
      return -1;
806
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
807
      return -1;
808
    cptr += minfo.RegionSize;
809
    size -= minfo.RegionSize;
810
  }
811
  return 0;
812
}
813
 
814
#define CALL_MMAP(s)         win32mmap(s)
815
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
816
#define DIRECT_MMAP(s)       win32direct_mmap(s)
817
#endif /* WIN32 */
818
#endif /* HAVE_MMAP */
819
 
820
#if HAVE_MMAP && HAVE_MREMAP
821
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
822
#else  /* HAVE_MMAP && HAVE_MREMAP */
823
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
824
#endif /* HAVE_MMAP && HAVE_MREMAP */
825
 
826
#if HAVE_MORECORE
827
#define CALL_MORECORE(S)     MORECORE(S)
828
#else  /* HAVE_MORECORE */
829
#define CALL_MORECORE(S)     MFAIL
830
#endif /* HAVE_MORECORE */
831
 
832
/* mstate bit set if contiguous morecore disabled or failed */
833
#define USE_NONCONTIGUOUS_BIT (4U)
834
 
835
/* segment bit set in create_mspace_with_base */
836
#define EXTERN_BIT            (8U)
837
 
838
 
839
/* --------------------------- Lock preliminaries ------------------------ */
840
 
841
#if USE_LOCKS
842
 
843
/*
844
  When locks are defined, there are up to two global locks:
845
 
846
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
847
    MORECORE.  In many cases sys_alloc requires two calls, that should
848
    not be interleaved with calls by other threads.  This does not
849
    protect against direct calls to MORECORE by other threads not
850
    using this lock, so there is still code to cope as best we can with
851
    interference.
852
 
853
  * magic_init_mutex ensures that mparams.magic and other
854
    unique mparams values are initialized only once.
855
*/
856
 
857
#ifndef WIN32
858
/* By default use posix locks */
859
#include <pthread.h>
860
#define MLOCK_T pthread_mutex_t
861
#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
862
#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
863
#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
864
 
865
#if HAVE_MORECORE
866
static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
867
#endif /* HAVE_MORECORE */
868
 
869
static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
870
 
871
#else /* WIN32 */
872
/*
873
   Because lock-protected regions have bounded times, and there
874
   are no recursive lock calls, we can use simple spinlocks.
875
*/
876
 
877
#define MLOCK_T long
878
static int win32_acquire_lock (MLOCK_T *sl) {
879
  for (;;) {
880
#ifdef InterlockedCompareExchangePointer
881
    if (!InterlockedCompareExchange(sl, 1, 0))
882
      return 0;
883
#else  /* Use older void* version */
884
    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
885
      return 0;
886
#endif /* InterlockedCompareExchangePointer */
887
    Sleep (0);
888
  }
889
}
890
 
891
static void win32_release_lock (MLOCK_T *sl) {
892
  InterlockedExchange (sl, 0);
893
}
894
 
895
#define INITIAL_LOCK(l)      *(l)=0
896
#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
897
#define RELEASE_LOCK(l)      win32_release_lock(l)
898
#if HAVE_MORECORE
899
static MLOCK_T morecore_mutex;
900
#endif /* HAVE_MORECORE */
901
static MLOCK_T magic_init_mutex;
902
#endif /* WIN32 */
903
 
904
#define USE_LOCK_BIT               (2U)
905
#else  /* USE_LOCKS */
906
#define USE_LOCK_BIT               (0U)
907
#define INITIAL_LOCK(l)
908
#endif /* USE_LOCKS */
909
 
910
#if USE_LOCKS && HAVE_MORECORE
911
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
912
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
913
#else /* USE_LOCKS && HAVE_MORECORE */
914
#define ACQUIRE_MORECORE_LOCK()
915
#define RELEASE_MORECORE_LOCK()
916
#endif /* USE_LOCKS && HAVE_MORECORE */
917
 
918
#if USE_LOCKS
919
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
920
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
921
#else  /* USE_LOCKS */
922
#define ACQUIRE_MAGIC_INIT_LOCK()
923
#define RELEASE_MAGIC_INIT_LOCK()
924
#endif /* USE_LOCKS */
925
 
926
 
927
/* -----------------------  Chunk representations ------------------------ */
928
 
929
/*
930
  (The following includes lightly edited explanations by Colin Plumb.)
931
 
932
  The malloc_chunk declaration below is misleading (but accurate and
933
  necessary).  It declares a "view" into memory allowing access to
934
  necessary fields at known offsets from a given base.
935
 
936
  Chunks of memory are maintained using a `boundary tag' method as
937
  originally described by Knuth.  (See the paper by Paul Wilson
938
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
939
  techniques.)  Sizes of free chunks are stored both in the front of
940
  each chunk and at the end.  This makes consolidating fragmented
941
  chunks into bigger chunks fast.  The head fields also hold bits
942
  representing whether chunks are free or in use.
943
 
944
  Here are some pictures to make it clearer.  They are "exploded" to
945
  show that the state of a chunk can be thought of as extending from
946
  the high 31 bits of the head field of its header through the
947
  prev_foot and PINUSE_BIT bit of the following chunk header.
948
 
949
  A chunk that's in use looks like:
950
 
951
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
952
           | Size of previous chunk (if P = 1)                             |
953
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
954
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
955
         | Size of this chunk                                         1| +-+
956
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
957
         |                                                               |
958
         +-                                                             -+
959
         |                                                               |
960
         +-                                                             -+
961
         |                                                               :
962
         +-      size - sizeof(size_t) available payload bytes          -+
963
         :                                                               |
964
 chunk-> +-                                                             -+
965
         |                                                               |
966
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
967
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
968
       | Size of next chunk (may or may not be in use)               | +-+
969
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
970
 
971
    And if it's free, it looks like this:
972
 
973
   chunk-> +-                                                             -+
974
           | User payload (must be in use, or we would have merged!)       |
975
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
976
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
977
         | Size of this chunk                                         0| +-+
978
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
979
         | Next pointer                                                  |
980
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
981
         | Prev pointer                                                  |
982
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
983
         |                                                               :
984
         +-      size - sizeof(struct chunk) unused bytes               -+
985
         :                                                               |
986
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
987
         | Size of this chunk                                            |
988
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
989
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
990
       | Size of next chunk (must be in use, or we would have merged)| +-+
991
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
992
       |                                                               :
993
       +- User payload                                                -+
994
       :                                                               |
995
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
996
                                                                     |0|
997
                                                                     +-+
998
  Note that since we always merge adjacent free chunks, the chunks
999
  adjacent to a free chunk must be in use.
1000
 
1001
  Given a pointer to a chunk (which can be derived trivially from the
1002
  payload pointer) we can, in O(1) time, find out whether the adjacent
1003
  chunks are free, and if so, unlink them from the lists that they
1004
  are on and merge them with the current chunk.
1005
 
1006
  Chunks always begin on even word boundaries, so the mem portion
1007
  (which is returned to the user) is also on an even word boundary, and
1008
  thus at least double-word aligned.
1009
 
1010
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1011
  chunk size (which is always a multiple of two words), is an in-use
1012
  bit for the *previous* chunk.  If that bit is *clear*, then the
1013
  word before the current chunk size contains the previous chunk
1014
  size, and can be used to find the front of the previous chunk.
1015
  The very first chunk allocated always has this bit set, preventing
1016
  access to non-existent (or non-owned) memory. If pinuse is set for
1017
  any given chunk, then you CANNOT determine the size of the
1018
  previous chunk, and might even get a memory addressing fault when
1019
  trying to do so.
1020
 
1021
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1022
  the chunk size redundantly records whether the current chunk is
1023
  inuse. This redundancy enables usage checks within free and realloc,
1024
  and reduces indirection when freeing and consolidating chunks.
1025
 
1026
  Each freshly allocated chunk must have both cinuse and pinuse set.
1027
  That is, each allocated chunk borders either a previously allocated
1028
  and still in-use chunk, or the base of its memory arena. This is
1029
  ensured by making all allocations from the `lowest' part of any
1030
  found chunk.  Further, no free chunk physically borders another one,
1031
  so each free chunk is known to be preceded and followed by either
1032
  inuse chunks or the ends of memory.
1033
 
1034
  Note that the `foot' of the current chunk is actually represented
1035
  as the prev_foot of the NEXT chunk. This makes it easier to
1036
  deal with alignments etc but can be very confusing when trying
1037
  to extend or adapt this code.
1038
 
1039
  The exceptions to all this are
1040
 
1041
     1. The special chunk `top' is the top-most available chunk (i.e.,
1042
        the one bordering the end of available memory). It is treated
1043
        specially.  Top is never included in any bin, is used only if
1044
        no other chunk is available, and is released back to the
1045
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1046
        the top chunk is treated as larger (and thus less well
1047
        fitting) than any other available chunk.  The top chunk
1048
        doesn't update its trailing size field since there is no next
1049
        contiguous chunk that would have to index off it. However,
1050
        space is still allocated for it (TOP_FOOT_SIZE) to enable
1051
        separation or merging when space is extended.
1052
 
1053
     2. Chunks allocated via mmap, which have the lowest-order bit
1054
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1055
        PINUSE_BIT in their head fields.  Because they are allocated
1056
        one-by-one, each must carry its own prev_foot field, which is
1057
        also used to hold the offset this chunk has within its mmapped
1058
        region, which is needed to preserve alignment. Each mmapped
1059
        chunk is trailed by the first two fields of a fake next-chunk
1060
        for sake of usage checks.
1061
 
1062
*/
1063
 
1064
struct malloc_chunk {
1065
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
1066
  size_t               head;       /* Size and inuse bits. */
1067
  struct malloc_chunk* fd;         /* double links -- used only if free. */
1068
  struct malloc_chunk* bk;
1069
};
1070
 
1071
typedef struct malloc_chunk  mchunk;
1072
typedef struct malloc_chunk* mchunkptr;
1073
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1074
typedef unsigned int bindex_t;         /* Described below */
1075
typedef unsigned int binmap_t;         /* Described below */
1076
typedef unsigned int flag_t;           /* The type of various bit flag sets */
1077
 
1078
/* ------------------- Chunks sizes and alignments ----------------------- */
1079
 
1080
#define MCHUNK_SIZE         (sizeof(mchunk))
1081
 
1082
#if FOOTERS
1083
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1084
#else /* FOOTERS */
1085
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1086
#endif /* FOOTERS */
1087
 
1088
/* MMapped chunks need a second word of overhead ... */
1089
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1090
/* ... and additional padding for fake next-chunk at foot */
1091
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1092
 
1093
/* The smallest size we can malloc is an aligned minimal chunk */
1094
#define MIN_CHUNK_SIZE\
1095
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1096
 
1097
/* conversion from malloc headers to user pointers, and back */
1098
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1099
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1100
/* chunk associated with aligned address A */
1101
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
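/*
  For example, with 4-byte size_t the user pointer returned by malloc
  lies TWO_SIZE_T_SIZES == 8 bytes past the chunk base, skipping the
  prev_foot and head fields; mem2chunk simply inverts chunk2mem.
*/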
1102
 
1103
/* Bounds on request (not chunk) sizes. */
1104
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1105
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1106
 
1107
/* pad request bytes into a usable size */
1108
#define pad_request(req) \
1109
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1110
 
1111
/* pad request, checking for minimum (but not maximum) */
1112
#define request2size(req) \
1113
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
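/*
  For example, on a 32-bit system with 8-byte alignment and FOOTERS
  disabled: MCHUNK_SIZE == 16, MIN_CHUNK_SIZE == 16, CHUNK_OVERHEAD == 4
  and MIN_REQUEST == 11, so request2size(1) == MIN_CHUNK_SIZE == 16 and
  request2size(20) == (20 + 4 + 7) & ~7 == 24.  This matches the
  minimum allocated size quoted in the header comment above.
*/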
1114
 
1115
 
1116
/* ------------------ Operations on head and foot fields ----------------- */
1117
 
1118
/*
1119
  The head field of a chunk is or'ed with PINUSE_BIT when previous
1120
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1121
  use. If the chunk was obtained with mmap, the prev_foot field has
1122
  IS_MMAPPED_BIT set, with the remaining bits holding the offset from the base of the
1123
  mmapped region to the base of the chunk.
1124
*/
1125
 
1126
#define PINUSE_BIT          (SIZE_T_ONE)
1127
#define CINUSE_BIT          (SIZE_T_TWO)
1128
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1129
 
1130
/* Head value for fenceposts */
1131
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1132
 
1133
/* extraction of fields from head words */
1134
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1135
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1136
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1137
 
1138
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1139
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1140
 
1141
/* Treat space at ptr +/- offset as a chunk */
1142
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1143
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1144
 
1145
/* Ptr to next or previous physical malloc_chunk. */
1146
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1147
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
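/*
  For example, if p is an in-use chunk of total size 24 whose physical
  predecessor is also in use, then p->head == (24|PINUSE_BIT|CINUSE_BIT)
  == 27, chunksize(p) == 24, cinuse(p) and pinuse(p) are nonzero, and
  next_chunk(p) == (mchunkptr)((char*)p + 24).
*/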
1148
 
1149
/* extract next chunk's pinuse bit */
1150
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1151
 
1152
/* Get/set size at footer */
1153
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1154
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1155
 
1156
/* Set size, pinuse bit, and foot */
1157
#define set_size_and_pinuse_of_free_chunk(p, s)\
1158
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1159
 
1160
/* Set size, pinuse bit, foot, and clear next pinuse */
1161
#define set_free_with_pinuse(p, s, n)\
1162
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1163
 
1164
#define is_mmapped(p)\
1165
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1166
 
1167
/* Get the internal overhead associated with chunk p */
1168
#define overhead_for(p)\
1169
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1170
 
1171
/* Return true if malloced space is not necessarily cleared */
1172
#if MMAP_CLEARS
1173
#define calloc_must_clear(p) (!is_mmapped(p))
1174
#else /* MMAP_CLEARS */
1175
#define calloc_must_clear(p) (1)
1176
#endif /* MMAP_CLEARS */
1177
 
1178
/* ---------------------- Overlaid data structures ----------------------- */
1179
 
1180
/*
1181
  When chunks are not in use, they are treated as nodes of either
1182
  lists or trees.
1183
 
1184
  "Small"  chunks are stored in circular doubly-linked lists, and look
1185
  like this:
1186
 
1187
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1188
            |             Size of previous chunk                            |
1189
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1190
    `head:' |             Size of chunk, in bytes                         |P|
1191
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1192
            |             Forward pointer to next chunk in list             |
1193
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1194
            |             Back pointer to previous chunk in list            |
1195
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1196
            |             Unused space (may be 0 bytes long)                .
1197
            .                                                               .
1198
            .                                                               |
1199
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1200
    `foot:' |             Size of chunk, in bytes                           |
1201
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1202
 
1203
  Larger chunks are kept in a form of bitwise digital trees (aka
1204
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1205
  free chunks greater than 256 bytes, their size doesn't impose any
1206
  constraints on user chunk sizes.  Each node looks like:
1207
 
1208
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1209
            |             Size of previous chunk                            |
1210
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1211
    `head:' |             Size of chunk, in bytes                         |P|
1212
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1213
            |             Forward pointer to next chunk of same size        |
1214
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1215
            |             Back pointer to previous chunk of same size       |
1216
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1217
            |             Pointer to left child (child[0])                  |
1218
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1219
            |             Pointer to right child (child[1])                 |
1220
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1221
            |             Pointer to parent                                 |
1222
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1223
            |             bin index of this chunk                           |
1224
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1225
            |             Unused space                                      .
1226
            .                                                               |
1227
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1228
    `foot:' |             Size of chunk, in bytes                           |
1229
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1230
 
1231
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1232
  of the same size are arranged in a circularly-linked list, with only
1233
  the oldest chunk (the next to be used, in our FIFO ordering)
1234
  actually in the tree.  (Tree members are distinguished by a non-null
1235
  parent pointer.)  If a chunk with the same size as an existing node
1236
  is inserted, it is linked off the existing node using pointers that
1237
  work in the same way as fd/bk pointers of small chunks.
1238
 
1239
  Each tree contains a power of 2 sized range of chunk sizes (the
1240
  smallest is 0x100 <= x < 0x180), which is divided in half at each
1241
  tree level, with the chunks in the smaller half of the range (0x100
1242
  <= x < 0x140 for the top node) in the left subtree and the larger
1243
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1244
  done by inspecting individual bits.
1245
 
1246
  Using these rules, each node's left subtree contains all smaller
1247
  sizes than its right subtree.  However, the node at the root of each
1248
  subtree has no particular ordering relationship to either.  (The
1249
  dividing line between the subtree sizes is based on trie relation.)
1250
  If we remove the last chunk of a given size from the interior of the
1251
  tree, we need to replace it with a leaf node.  The tree ordering
1252
  rules permit a node to be replaced by any leaf below it.
1253
 
1254
  The smallest chunk in a tree (a common operation in a best-fit
1255
  allocator) can be found by walking a path to the leftmost leaf in
1256
  the tree.  Unlike a usual binary tree, where we follow left child
1257
  pointers until we reach a null, here we follow the right child
1258
  pointer any time the left one is null, until we reach a leaf with
1259
  both child pointers null. The smallest chunk in the tree will be
1260
  somewhere along that path.
1261
 
1262
  The worst case number of steps to add, find, or remove a node is
1263
  bounded by the number of bits differentiating chunks within
1264
  bins. Under current bin calculations, this ranges from 6 up to 21
1265
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1266
  is of course much better.
1267
*/
1268
 
1269
struct malloc_tree_chunk {
1270
  /* The first four fields must be compatible with malloc_chunk */
1271
  size_t                    prev_foot;
1272
  size_t                    head;
1273
  struct malloc_tree_chunk* fd;
1274
  struct malloc_tree_chunk* bk;
1275
 
1276
  struct malloc_tree_chunk* child[2];
1277
  struct malloc_tree_chunk* parent;
1278
  bindex_t                  index;
1279
};
1280
 
1281
typedef struct malloc_tree_chunk  tchunk;
1282
typedef struct malloc_tree_chunk* tchunkptr;
1283
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1284
 
1285
/* A little helper macro for trees */
1286
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
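/*
  A minimal sketch (illustrative only, not used by this allocator) of
  the smallest-chunk walk described above: follow child[0] when present,
  otherwise child[1], keeping the best fit seen along the path.  The
  helper name example_smallest_in_tree is hypothetical.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;            /* smallest size lies somewhere on this path */
  }
  return best;
}
#endif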
1287
 
1288
/* ----------------------------- Segments -------------------------------- */
1289
 
1290
/*
1291
  Each malloc space may include non-contiguous segments, held in a
1292
  list headed by an embedded malloc_segment record representing the
1293
  top-most space. Segments also include flags holding properties of
1294
  the space. Large chunks that are directly allocated by mmap are not
1295
  included in this list. They are instead independently created and
1296
  destroyed without otherwise keeping track of them.
1297
 
1298
  Segment management mainly comes into play for spaces allocated by
1299
  MMAP.  Any call to MMAP might or might not return memory that is
1300
  adjacent to an existing segment.  MORECORE normally contiguously
1301
  extends the current space, so this space is almost always adjacent,
1302
  which is simpler and faster to deal with. (This is why MORECORE is
1303
  used preferentially to MMAP when both are available -- see
1304
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1305
  hinting mechanisms (inconsistently) supported in various
1306
  implementations of unix mmap, or distinguish reserving from
1307
  committing memory. Instead, we just ask for space, and exploit
1308
  contiguity when we get it.  It is probably possible to do
1309
  better than this on some systems, but no general scheme seems
1310
  to be significantly better.
1311
 
1312
  Management entails a simpler variant of the consolidation scheme
1313
  used for chunks to reduce fragmentation -- new adjacent memory is
1314
  normally prepended or appended to an existing segment. However,
1315
  there are limitations compared to chunk consolidation that mostly
1316
  reflect the fact that segment processing is relatively infrequent
1317
  (occurring only when getting memory from system) and that we
1318
  don't expect to have huge numbers of segments:
1319
 
1320
  * Segments are not indexed, so traversal requires linear scans.  (It
1321
    would be possible to index these, but is not worth the extra
1322
    overhead and complexity for most programs on most platforms.)
1323
  * New segments are only appended to old ones when holding top-most
1324
    memory; if they cannot be prepended to others, they are held in
1325
    different segments.
1326
 
1327
  Except for the top-most segment of an mstate, each segment record
1328
  is kept at the tail of its segment. Segments are added by pushing
1329
  segment records onto the list headed by &mstate.seg for the
1330
  containing mstate.
1331
 
1332
  Segment flags control allocation/merge/deallocation policies:
1333
  * If EXTERN_BIT set, then we did not allocate this segment,
1334
    and so should not try to deallocate or merge with others.
1335
    (This currently holds only for the initial segment passed
1336
    into create_mspace_with_base.)
1337
  * If IS_MMAPPED_BIT set, the segment may be merged with
1338
    other surrounding mmapped segments and trimmed/de-allocated
1339
    using munmap.
1340
  * If neither bit is set, then the segment was obtained using
1341
    MORECORE so can be merged with surrounding MORECORE'd segments
1342
    and deallocated/trimmed using MORECORE with negative arguments.
1343
*/
1344
 
1345
struct malloc_segment {
1346
  char*        base;             /* base address */
1347
  size_t       size;             /* allocated size */
1348
  struct malloc_segment* next;   /* ptr to next segment */
1349
  flag_t       sflags;           /* mmap and extern flag */
1350
};
1351
 
1352
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1353
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1354
 
1355
typedef struct malloc_segment  msegment;
1356
typedef struct malloc_segment* msegmentptr;
1357
 
1358
/* ---------------------------- malloc_state ----------------------------- */
1359
 
1360
/*
1361
   A malloc_state holds all of the bookkeeping for a space.
1362
   The main fields are:
1363
 
1364
  Top
1365
    The topmost chunk of the currently active segment. Its size is
1366
    cached in topsize.  The actual size of topmost space is
1367
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1368
    fenceposts and segment records if necessary when getting more
1369
    space from the system.  The size at which to autotrim top is
1370
    cached from mparams in trim_check, except that it is disabled if
1371
    an autotrim fails.
1372
 
1373
  Designated victim (dv)
1374
    This is the preferred chunk for servicing small requests that
1375
    don't have exact fits.  It is normally the chunk split off most
1376
    recently to service another small request.  Its size is cached in
1377
    dvsize. The link fields of this chunk are not maintained since it
1378
    is not kept in a bin.
1379
 
1380
  SmallBins
1381
    An array of bin headers for free chunks.  These bins hold chunks
1382
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1383
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1384
    use in double-linked lists, each bin header acts as a malloc_chunk
1385
    pointing to the real first node, if it exists (else pointing to
1386
    itself).  This avoids special-casing for headers.  But to avoid
1387
    waste, we allocate only the fd/bk pointers of bins, and then use
1388
    repositioning tricks to treat these as the fields of a chunk.
1389
 
1390
  TreeBins
1391
    Treebins are pointers to the roots of trees holding a range of
1392
    sizes. There are 2 equally spaced treebins for each power of two
1393
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1394
    larger.
1395
 
1396
  Bin maps
1397
    There is one bit map for small bins ("smallmap") and one for
1398
    treebins ("treemap").  Each bin sets its bit when non-empty, and
1399
    clears the bit when empty.  Bit operations are then used to avoid
1400
    bin-by-bin searching -- nearly all "search" is done without ever
1401
    looking at bins that won't be selected.  The bit maps
1402
    conservatively use 32 bits per map word, even on a 64-bit system.
1403
    For a good description of some of the bit-based techniques used
1404
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1405
    supplement at http://hackersdelight.org/). Many of these are
1406
    intended to reduce the branchiness of paths through malloc etc, as
1407
    well as to reduce the number of memory locations read or written.
1408
 
1409
  Segments
1410
    A list of segments headed by an embedded malloc_segment record
1411
    representing the initial space.
1412
 
1413
  Address check support
1414
    The least_addr field is the least address ever obtained from
1415
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1416
    than this are trapped (unless INSECURE is defined).
1417
 
1418
  Magic tag
1419
    A cross-check field that should always hold same value as mparams.magic.
1420
 
1421
  Flags
1422
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1423
 
1424
  Statistics
1425
    Each space keeps track of current and maximum system memory
1426
    obtained via MORECORE or MMAP.
1427
 
1428
  Locking
1429
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1430
    around every public call using this mspace.
1431
*/
1432
 
1433
/* Bin types, widths and sizes */
1434
#define NSMALLBINS        (32U)
1435
#define NTREEBINS         (32U)
1436
#define SMALLBIN_SHIFT    (3U)
1437
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1438
#define TREEBIN_SHIFT     (8U)
1439
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1440
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1441
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
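/*
  Worked example (illustrative): with SMALLBIN_SHIFT == 3 and
  NSMALLBINS == 32, smallbins are spaced SMALLBIN_WIDTH == 8 bytes
  apart and serve chunk sizes below MIN_LARGE_SIZE == 256; any free
  chunk of 256 bytes or more is kept in a treebin instead.
*/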
1442
 
1443
struct malloc_state {
1444
  binmap_t   smallmap;
1445
  binmap_t   treemap;
1446
  size_t     dvsize;
1447
  size_t     topsize;
1448
  char*      least_addr;
1449
  mchunkptr  dv;
1450
  mchunkptr  top;
1451
  size_t     trim_check;
1452
  size_t     magic;
1453
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1454
  tbinptr    treebins[NTREEBINS];
1455
  size_t     footprint;
1456
  size_t     max_footprint;
1457
  flag_t     mflags;
1458
#if USE_LOCKS
1459
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1460
#endif /* USE_LOCKS */
1461
  msegment   seg;
1462
};
1463
 
1464
typedef struct malloc_state*    mstate;
1465
 
1466
/* ------------- Global malloc_state and malloc_params ------------------- */
1467
 
1468
/*
1469
  malloc_params holds global properties, including those that can be
1470
  dynamically set using mallopt. There is a single instance, mparams,
1471
  initialized in init_mparams.
1472
*/
1473
 
1474
struct malloc_params {
1475
  size_t magic;
1476
  size_t page_size;
1477
  size_t granularity;
1478
  size_t mmap_threshold;
1479
  size_t trim_threshold;
1480
  flag_t default_mflags;
1481
};
1482
 
1483
static struct malloc_params mparams;
1484
 
1485
/* The global malloc_state used for all non-"mspace" calls */
1486
static struct malloc_state _gm_;
1487
#define gm                 (&_gm_)
1488
#define is_global(M)       ((M) == &_gm_)
1489
#define is_initialized(M)  ((M)->top != 0)
1490
 
1491
/* -------------------------- system alloc setup ------------------------- */
1492
 
1493
/* Operations on mflags */
1494
 
1495
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1496
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1497
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1498
 
1499
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1500
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1501
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1502
 
1503
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1504
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1505
 
1506
#define set_lock(M,L)\
1507
 ((M)->mflags = (L)?\
1508
  ((M)->mflags | USE_LOCK_BIT) :\
1509
  ((M)->mflags & ~USE_LOCK_BIT))
1510
 
1511
/* page-align a size */
1512
#define page_align(S)\
1513
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1514
 
1515
/* granularity-align a size */
1516
#define granularity_align(S)\
1517
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1518
 
1519
#define is_page_aligned(S)\
1520
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1521
#define is_granularity_aligned(S)\
1522
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
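/*
  Worked example (illustrative): with a 4096-byte page size,
  page_align(5000) == (5000 + 4096) & ~4095 == 8192, and
  is_page_aligned(8192) then holds.  granularity_align rounds the same
  way using mparams.granularity in place of the page size.
*/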
1523
 
1524
/*  True if segment S holds address A */
1525
#define segment_holds(S, A)\
1526
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1527
 
1528
/* Return segment holding given address */
1529
static msegmentptr segment_holding(mstate m, char* addr) {
1530
  msegmentptr sp = &m->seg;
1531
  for (;;) {
1532
    if (addr >= sp->base && addr < sp->base + sp->size)
1533
      return sp;
1534
    if ((sp = sp->next) == 0)
1535
      return 0;
1536
  }
1537
}
1538
 
1539
/* Return true if segment contains a segment link */
1540
static int has_segment_link(mstate m, msegmentptr ss) {
1541
  msegmentptr sp = &m->seg;
1542
  for (;;) {
1543
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1544
      return 1;
1545
    if ((sp = sp->next) == 0)
1546
      return 0;
1547
  }
1548
}
1549
 
1550
#ifndef MORECORE_CANNOT_TRIM
1551
#define should_trim(M,s)  ((s) > (M)->trim_check)
1552
#else  /* MORECORE_CANNOT_TRIM */
1553
#define should_trim(M,s)  (0)
1554
#endif /* MORECORE_CANNOT_TRIM */
1555
 
1556
/*
1557
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1558
  that may be needed to place segment records and fenceposts when new
1559
  noncontiguous segments are added.
1560
*/
1561
#define TOP_FOOT_SIZE\
1562
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1563
 
1564
 
1565
/* -------------------------------  Hooks -------------------------------- */
1566
 
1567
/*
1568
  PREACTION should be defined to return 0 on success, and nonzero on
1569
  failure. If you are not using locking, you can redefine these to do
1570
  anything you like.
1571
*/
1572
 
1573
#if USE_LOCKS
1574
 
1575
/* Ensure locks are initialized */
1576
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1577
 
1578
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1579
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1580
#else /* USE_LOCKS */
1581
 
1582
#ifndef PREACTION
1583
#define PREACTION(M) (0)
1584
#endif  /* PREACTION */
1585
 
1586
#ifndef POSTACTION
1587
#define POSTACTION(M)
1588
#endif  /* POSTACTION */
1589
 
1590
#endif /* USE_LOCKS */
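/*
  Illustrative call pattern (a sketch, not code from this file): a
  public entry point working on the global state gm brackets its body
  with the hooks above,

    if (!PREACTION(gm)) {
      ...allocate / free using gm...
      POSTACTION(gm);
    }

  so that when USE_LOCKS is nonzero the mspace mutex is held across the
  body; with USE_LOCKS undefined both macros reduce to no-ops.
*/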
1591
 
1592
/*
1593
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1594
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1595
  reallocs. The argument p is an address that might have triggered the
1596
  fault. It is ignored by the two predefined actions, but might be
1597
  useful in custom actions that try to help diagnose errors.
1598
*/
1599
 
1600
#if PROCEED_ON_ERROR
1601
 
1602
/* A count of the number of corruption errors causing resets */
1603
int malloc_corruption_error_count;
1604
 
1605
/* default corruption action */
1606
static void reset_on_error(mstate m);
1607
 
1608
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1609
#define USAGE_ERROR_ACTION(m, p)
1610
 
1611
#else /* PROCEED_ON_ERROR */
1612
 
1613
#ifndef CORRUPTION_ERROR_ACTION
1614
#define CORRUPTION_ERROR_ACTION(m) ABORT
1615
#endif /* CORRUPTION_ERROR_ACTION */
1616
 
1617
#ifndef USAGE_ERROR_ACTION
1618
#define USAGE_ERROR_ACTION(m,p) ABORT
1619
#endif /* USAGE_ERROR_ACTION */
1620
 
1621
#endif /* PROCEED_ON_ERROR */
1622
 
1623
/* -------------------------- Debugging setup ---------------------------- */
1624
 
1625
#if ! DEBUG
1626
 
1627
#define check_free_chunk(M,P)
1628
#define check_inuse_chunk(M,P)
1629
#define check_malloced_chunk(M,P,N)
1630
#define check_mmapped_chunk(M,P)
1631
#define check_malloc_state(M)
1632
#define check_top_chunk(M,P)
1633
 
1634
#else /* DEBUG */
1635
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1636
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1637
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1638
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1639
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1640
#define check_malloc_state(M)       do_check_malloc_state(M)
1641
 
1642
static void   do_check_any_chunk(mstate m, mchunkptr p);
1643
static void   do_check_top_chunk(mstate m, mchunkptr p);
1644
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1645
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1646
static void   do_check_free_chunk(mstate m, mchunkptr p);
1647
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1648
static void   do_check_tree(mstate m, tchunkptr t);
1649
static void   do_check_treebin(mstate m, bindex_t i);
1650
static void   do_check_smallbin(mstate m, bindex_t i);
1651
static void   do_check_malloc_state(mstate m);
1652
static int    bin_find(mstate m, mchunkptr x);
1653
static size_t traverse_and_check(mstate m);
1654
#endif /* DEBUG */
1655
 
1656
/* ---------------------------- Indexing Bins ---------------------------- */
1657
 
1658
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1659
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1660
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1661
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1662
 
1663
/* addressing by index. See above about smallbin repositioning */
1664
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1665
#define treebin_at(M,i)     (&((M)->treebins[i]))
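/*
  Worked example (illustrative): a 64-byte chunk has
  small_index(64) == 8 and small_index2size(8) == 64.  smallbin_at(M, 8)
  aliases &M->smallbins[16]; treating that address as a chunk makes its
  fd/bk fields land on smallbins[18] and smallbins[19], which is the
  repositioning trick described in the malloc_state comment above.
*/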
1666
 
1667
/* assign tree index for size S to variable I */
1668
#if defined(__GNUC__) && defined(i386)
1669
#define compute_tree_index(S, I)\
1670
{\
1671
  size_t X = S >> TREEBIN_SHIFT;\
1672
  if (X == 0)\
1673
    I = 0;\
1674
  else if (X > 0xFFFF)\
1675
    I = NTREEBINS-1;\
1676
  else {\
1677
    unsigned int K;\
1678
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1679
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1680
  }\
1681
}
1682
#else /* GNUC */
1683
#define compute_tree_index(S, I)\
1684
{\
1685
  size_t X = S >> TREEBIN_SHIFT;\
1686
  if (X == 0)\
1687
    I = 0;\
1688
  else if (X > 0xFFFF)\
1689
    I = NTREEBINS-1;\
1690
  else {\
1691
    unsigned int Y = (unsigned int)X;\
1692
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1693
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1694
    N += K;\
1695
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1696
    K = 14 - N + ((Y <<= K) >> 15);\
1697
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1698
  }\
1699
}
1700
#endif /* GNUC */
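/*
  Worked example (illustrative): for a chunk size S == 768 (0x300),
  X = S >> TREEBIN_SHIFT == 3, whose highest set bit is bit 1, so K == 1
  and I == (K << 1) + ((S >> (K + TREEBIN_SHIFT - 1)) & 1) == 2 + 1 == 3.
  Treebin 3 thus holds sizes from 768 up to (but not including) 1024,
  which matches minsize_for_tree_index(3) == 768 defined below.
*/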
1701
 
1702
/* Bit representing maximum resolved size in a treebin at i */
1703
#define bit_for_tree_index(i) \
1704
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1705
 
1706
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1707
#define leftshift_for_tree_index(i) \
1708
   ((i == NTREEBINS-1)? 0 : \
1709
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1710
 
1711
/* The size of the smallest chunk held in bin with index i */
1712
#define minsize_for_tree_index(i) \
1713
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1714
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1715
 
1716
 
1717
/* ------------------------ Operations on bin maps ----------------------- */
1718
 
1719
/* bit corresponding to given index */
1720
#define idx2bit(i)              ((binmap_t)(1) << (i))
1721
 
1722
/* Mark/Clear bits with given index */
1723
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1724
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1725
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1726
 
1727
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1728
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1729
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1730
 
1731
/* index corresponding to given bit */
1732
 
1733
#if defined(__GNUC__) && defined(i386)
1734
#define compute_bit2idx(X, I)\
1735
{\
1736
  unsigned int J;\
1737
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1738
  I = (bindex_t)J;\
1739
}
1740
 
1741
#else /* GNUC */
1742
#if  USE_BUILTIN_FFS
1743
#define compute_bit2idx(X, I) I = ffs(X)-1
1744
 
1745
#else /* USE_BUILTIN_FFS */
1746
#define compute_bit2idx(X, I)\
1747
{\
1748
  unsigned int Y = X - 1;\
1749
  unsigned int K = Y >> (16-4) & 16;\
1750
  unsigned int N = K;        Y >>= K;\
1751
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1752
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1753
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1754
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1755
  I = (bindex_t)(N + Y);\
1756
}
1757
#endif /* USE_BUILTIN_FFS */
1758
#endif /* GNUC */
1759
 
1760
/* isolate the least set bit of a bitmap */
1761
#define least_bit(x)         ((x) & -(x))
1762
 
1763
/* mask with all bits to left of least bit of x on */
1764
#define left_bits(x)         ((x<<1) | -(x<<1))
1765
 
1766
/* mask with all bits to left of or equal to least bit of x on */
1767
#define same_or_left_bits(x) ((x) | -(x))
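/*
  Worked example (illustrative, assuming a 32-bit binmap_t): if
  smallmap == 0x28 then bins 3 and 5 are non-empty.  least_bit(0x28)
  isolates 0x08, compute_bit2idx(0x08, I) yields I == 3, and
  left_bits(idx2bit(3)) == 0xFFFFFFF0 masks in every bin of index 4 or
  higher -- the kind of mask used to jump straight to a usable bin
  instead of scanning bin by bin.
*/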
1768
 
1769
 
1770
/* ----------------------- Runtime Check Support ------------------------- */
1771
 
1772
/*
1773
  For security, the main invariant is that malloc/free/etc never
1774
  writes to a static address other than malloc_state, unless static
1775
  malloc_state itself has been corrupted, which cannot occur via
1776
  malloc (because of these checks). In essence this means that we
1777
  believe all pointers, sizes, maps etc held in malloc_state, but
1778
  check all of those linked or offsetted from other embedded data
1779
  structures.  These checks are interspersed with main code in a way
1780
  that tends to minimize their run-time cost.
1781
 
1782
  When FOOTERS is defined, in addition to range checking, we also
1783
  verify footer fields of inuse chunks, which can be used to guarantee
1784
  that the mstate controlling malloc/free is intact.  This is a
1785
  streamlined version of the approach described by William Robertson
1786
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1787
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1788
  of an inuse chunk holds the xor of its mstate and a random seed,
1789
  that is checked upon calls to free() and realloc().  This is
1790
  (probabilistically) unguessable from outside the program, but can be
1791
  computed by any code successfully malloc'ing any chunk, so does not
1792
  itself provide protection against code that has already broken
1793
  security through some other means.  Unlike Robertson et al, we
1794
  always dynamically check addresses of all offset chunks (previous,
1795
  next, etc). This turns out to be cheaper than relying on hashes.
1796
*/
1797
 
1798
#if !INSECURE
1799
/* Check if address a is at least as high as any from MORECORE or MMAP */
1800
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1801
/* Check if address of next chunk n is higher than base chunk p */
1802
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1803
/* Check if p has its cinuse bit on */
1804
#define ok_cinuse(p)     cinuse(p)
1805
/* Check if p has its pinuse bit on */
1806
#define ok_pinuse(p)     pinuse(p)
1807
 
1808
#else /* !INSECURE */
1809
#define ok_address(M, a) (1)
1810
#define ok_next(b, n)    (1)
1811
#define ok_cinuse(p)     (1)
1812
#define ok_pinuse(p)     (1)
1813
#endif /* !INSECURE */
1814
 
1815
#if (FOOTERS && !INSECURE)
1816
/* Check if (alleged) mstate m has expected magic field */
1817
#define ok_magic(M)      ((M)->magic == mparams.magic)
1818
#else  /* (FOOTERS && !INSECURE) */
1819
#define ok_magic(M)      (1)
1820
#endif /* (FOOTERS && !INSECURE) */
1821
 
1822
 
1823
/* In gcc, use __builtin_expect to minimize impact of checks */
1824
#if !INSECURE
1825
#if defined(__GNUC__) && __GNUC__ >= 3
1826
#define RTCHECK(e)  __builtin_expect(e, 1)
1827
#else /* GNUC */
1828
#define RTCHECK(e)  (e)
1829
#endif /* GNUC */
1830
#else /* !INSECURE */
1831
#define RTCHECK(e)  (1)
1832
#endif /* !INSECURE */
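/*
  Illustrative usage pattern (a sketch of how the macros above combine;
  the real occurrences appear in the bin macros further below):

    if (RTCHECK(ok_address(M, F)))
      ...trust F and update its links...
    else
      CORRUPTION_ERROR_ACTION(M);

  With INSECURE defined both checks collapse to constants and cost
  nothing; otherwise gcc's __builtin_expect marks the failure path as
  unlikely.
*/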
1833
 
1834
/* macros to set up inuse chunks with or without footers */
1835
 
1836
#if !FOOTERS
1837
 
1838
#define mark_inuse_foot(M,p,s)
1839
 
1840
/* Set cinuse bit and pinuse bit of next chunk */
1841
#define set_inuse(M,p,s)\
1842
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1843
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1844
 
1845
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
1846
#define set_inuse_and_pinuse(M,p,s)\
1847
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1848
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1849
 
1850
/* Set size, cinuse and pinuse bit of this chunk */
1851
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1852
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
1853
 
1854
#else /* FOOTERS */
1855
 
1856
/* Set foot of inuse chunk to be xor of mstate and seed */
1857
#define mark_inuse_foot(M,p,s)\
1858
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
1859
 
1860
#define get_mstate_for(p)\
1861
  ((mstate)(((mchunkptr)((char*)(p) +\
1862
    (chunksize(p))))->prev_foot ^ mparams.magic))
1863
 
1864
#define set_inuse(M,p,s)\
1865
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1866
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
1867
  mark_inuse_foot(M,p,s))
1868
 
1869
#define set_inuse_and_pinuse(M,p,s)\
1870
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1871
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
1872
 mark_inuse_foot(M,p,s))
1873
 
1874
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1875
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1876
  mark_inuse_foot(M, p, s))
1877
 
1878
#endif /* !FOOTERS */
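/*
  Illustrative sketch of the FOOTERS cross-check above: once p's head
  records its size s, mark_inuse_foot(m, p, s) stores
  ((size_t)m ^ mparams.magic) in the prev_foot slot of the chunk that
  physically follows p, and get_mstate_for(p) xors that slot with the
  same magic to recover the owning mstate, so that (hypothetically)

    mark_inuse_foot(m, p, s);
    assert(get_mstate_for(p) == m);

  holds for any well-formed inuse chunk.  With FOOTERS undefined,
  mark_inuse_foot expands to nothing and get_mstate_for is not defined.
*/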
1879
 
1880
/* ---------------------------- setting mparams -------------------------- */
1881
 
1882
/* Initialize mparams */
1883
static int init_mparams(void) {
1884
  if (mparams.page_size == 0) {
1885
    size_t s;
1886
 
1887
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1888
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1889
#if MORECORE_CONTIGUOUS
1890
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1891
#else  /* MORECORE_CONTIGUOUS */
1892
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1893
#endif /* MORECORE_CONTIGUOUS */
1894
 
1895
#if (FOOTERS && !INSECURE)
1896
    {
1897
#if USE_DEV_RANDOM
1898
      int fd;
1899
      unsigned char buf[sizeof(size_t)];
1900
      /* Try to use /dev/urandom, else fall back on using time */
1901
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1902
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1903
        s = *((size_t *) buf);
1904
        close(fd);
1905
      }
1906
      else
1907
#endif /* USE_DEV_RANDOM */
1908
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1909
 
1910
      s |= (size_t)8U;    /* ensure nonzero */
1911
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1912
 
1913
    }
1914
#else /* (FOOTERS && !INSECURE) */
1915
    s = (size_t)0x58585858U;
1916
#endif /* (FOOTERS && !INSECURE) */
1917
    ACQUIRE_MAGIC_INIT_LOCK();
1918
    if (mparams.magic == 0) {
1919
      mparams.magic = s;
1920
      /* Set up lock for main malloc area */
1921
      INITIAL_LOCK(&gm->mutex);
1922
      gm->mflags = mparams.default_mflags;
1923
    }
1924
    RELEASE_MAGIC_INIT_LOCK();
1925
 
1926
#ifndef WIN32
1927
    mparams.page_size = malloc_getpagesize;
1928
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1929
                           DEFAULT_GRANULARITY : mparams.page_size);
1930
#else /* WIN32 */
1931
    {
1932
      SYSTEM_INFO system_info;
1933
      GetSystemInfo(&system_info);
1934
      mparams.page_size = system_info.dwPageSize;
1935
      mparams.granularity = system_info.dwAllocationGranularity;
1936
    }
1937
#endif /* WIN32 */
1938
 
1939
    /* Sanity-check configuration:
1940
       size_t must be unsigned and as wide as pointer type.
1941
       ints must be at least 4 bytes.
1942
       alignment must be at least 8.
1943
       Alignment, min chunk size, and page size must all be powers of 2.
1944
    */
1945
    if ((sizeof(size_t) != sizeof(char*)) ||
1946
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1947
        (sizeof(int) < 4)  ||
1948
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1949
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1950
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1951
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1952
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1953
      ABORT;
1954
  }
1955
  return 0;
1956
}
1957
 
1958
/* support for mallopt */
1959
static int change_mparam(int param_number, int value) {
1960
  size_t val = (size_t)value;
1961
  init_mparams();
1962
  switch(param_number) {
1963
  case M_TRIM_THRESHOLD:
1964
    mparams.trim_threshold = val;
1965
    return 1;
1966
  case M_GRANULARITY:
1967
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
1968
      mparams.granularity = val;
1969
      return 1;
1970
    }
1971
    else
1972
      return 0;
1973
  case M_MMAP_THRESHOLD:
1974
    mparams.mmap_threshold = val;
1975
    return 1;
1976
  default:
1977
    return 0;
1978
  }
1979
}
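/*
  Illustrative use of change_mparam (reached via the public mallopt):
  change_mparam(M_GRANULARITY, 1 << 16) succeeds provided 65536 is at
  least mparams.page_size, because a granularity must also be a power
  of two; change_mparam(M_GRANULARITY, 3 * 4096) fails that test and
  returns 0, leaving the old granularity in place.
*/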
1980
 
1981
#if DEBUG
1982
/* ------------------------- Debugging Support --------------------------- */
1983
 
1984
/* Check properties of any chunk, whether free, inuse, mmapped etc  */
1985
static void do_check_any_chunk(mstate m, mchunkptr p) {
1986
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1987
  assert(ok_address(m, p));
1988
}
1989
 
1990
/* Check properties of top chunk */
1991
static void do_check_top_chunk(mstate m, mchunkptr p) {
1992
  msegmentptr sp = segment_holding(m, (char*)p);
1993
  size_t  sz = chunksize(p);
1994
  assert(sp != 0);
1995
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1996
  assert(ok_address(m, p));
1997
  assert(sz == m->topsize);
1998
  assert(sz > 0);
1999
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2000
  assert(pinuse(p));
2001
  assert(!next_pinuse(p));
2002
}
2003
 
2004
/* Check properties of (inuse) mmapped chunks */
2005
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2006
  size_t  sz = chunksize(p);
2007
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2008
  assert(is_mmapped(p));
2009
  assert(use_mmap(m));
2010
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2011
  assert(ok_address(m, p));
2012
  assert(!is_small(sz));
2013
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2014
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2015
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2016
}
2017
 
2018
/* Check properties of inuse chunks */
2019
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2020
  do_check_any_chunk(m, p);
2021
  assert(cinuse(p));
2022
  assert(next_pinuse(p));
2023
  /* If not pinuse and not mmapped, previous chunk has OK offset */
2024
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2025
  if (is_mmapped(p))
2026
    do_check_mmapped_chunk(m, p);
2027
}
2028
 
2029
/* Check properties of free chunks */
2030
static void do_check_free_chunk(mstate m, mchunkptr p) {
2031
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2032
  mchunkptr next = chunk_plus_offset(p, sz);
2033
  do_check_any_chunk(m, p);
2034
  assert(!cinuse(p));
2035
  assert(!next_pinuse(p));
2036
  assert (!is_mmapped(p));
2037
  if (p != m->dv && p != m->top) {
2038
    if (sz >= MIN_CHUNK_SIZE) {
2039
      assert((sz & CHUNK_ALIGN_MASK) == 0);
2040
      assert(is_aligned(chunk2mem(p)));
2041
      assert(next->prev_foot == sz);
2042
      assert(pinuse(p));
2043
      assert (next == m->top || cinuse(next));
2044
      assert(p->fd->bk == p);
2045
      assert(p->bk->fd == p);
2046
    }
2047
    else  /* markers are always of size SIZE_T_SIZE */
2048
      assert(sz == SIZE_T_SIZE);
2049
  }
2050
}
2051
 
2052
/* Check properties of malloced chunks at the point they are malloced */
2053
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2054
  if (mem != 0) {
2055
    mchunkptr p = mem2chunk(mem);
2056
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2057
    do_check_inuse_chunk(m, p);
2058
    assert((sz & CHUNK_ALIGN_MASK) == 0);
2059
    assert(sz >= MIN_CHUNK_SIZE);
2060
    assert(sz >= s);
2061
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2062
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2063
  }
2064
}
2065
 
2066
/* Check a tree and its subtrees.  */
2067
static void do_check_tree(mstate m, tchunkptr t) {
2068
  tchunkptr head = 0;
2069
  tchunkptr u = t;
2070
  bindex_t tindex = t->index;
2071
  size_t tsize = chunksize(t);
2072
  bindex_t idx;
2073
  compute_tree_index(tsize, idx);
2074
  assert(tindex == idx);
2075
  assert(tsize >= MIN_LARGE_SIZE);
2076
  assert(tsize >= minsize_for_tree_index(idx));
2077
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2078
 
2079
  do { /* traverse through chain of same-sized nodes */
2080
    do_check_any_chunk(m, ((mchunkptr)u));
2081
    assert(u->index == tindex);
2082
    assert(chunksize(u) == tsize);
2083
    assert(!cinuse(u));
2084
    assert(!next_pinuse(u));
2085
    assert(u->fd->bk == u);
2086
    assert(u->bk->fd == u);
2087
    if (u->parent == 0) {
2088
      assert(u->child[0] == 0);
2089
      assert(u->child[1] == 0);
2090
    }
2091
    else {
2092
      assert(head == 0); /* only one node on chain has parent */
2093
      head = u;
2094
      assert(u->parent != u);
2095
      assert (u->parent->child[0] == u ||
2096
              u->parent->child[1] == u ||
2097
              *((tbinptr*)(u->parent)) == u);
2098
      if (u->child[0] != 0) {
2099
        assert(u->child[0]->parent == u);
2100
        assert(u->child[0] != u);
2101
        do_check_tree(m, u->child[0]);
2102
      }
2103
      if (u->child[1] != 0) {
2104
        assert(u->child[1]->parent == u);
2105
        assert(u->child[1] != u);
2106
        do_check_tree(m, u->child[1]);
2107
      }
2108
      if (u->child[0] != 0 && u->child[1] != 0) {
2109
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2110
      }
2111
    }
2112
    u = u->fd;
2113
  } while (u != t);
2114
  assert(head != 0);
2115
}
2116
 
2117
/*  Check all the chunks in a treebin.  */
2118
static void do_check_treebin(mstate m, bindex_t i) {
2119
  tbinptr* tb = treebin_at(m, i);
2120
  tchunkptr t = *tb;
2121
  int empty = (m->treemap & (1U << i)) == 0;
2122
  if (t == 0)
2123
    assert(empty);
2124
  if (!empty)
2125
    do_check_tree(m, t);
2126
}
2127
 
2128
/*  Check all the chunks in a smallbin.  */
2129
static void do_check_smallbin(mstate m, bindex_t i) {
2130
  sbinptr b = smallbin_at(m, i);
2131
  mchunkptr p = b->bk;
2132
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2133
  if (p == b)
2134
    assert(empty);
2135
  if (!empty) {
2136
    for (; p != b; p = p->bk) {
2137
      size_t size = chunksize(p);
2138
      mchunkptr q;
2139
      /* each chunk claims to be free */
2140
      do_check_free_chunk(m, p);
2141
      /* chunk belongs in bin */
2142
      assert(small_index(size) == i);
2143
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2144
      /* chunk is followed by an inuse chunk */
2145
      q = next_chunk(p);
2146
      if (q->head != FENCEPOST_HEAD)
2147
        do_check_inuse_chunk(m, q);
2148
    }
2149
  }
2150
}
2151
 
2152
/* Find x in a bin. Used in other check functions. */
2153
static int bin_find(mstate m, mchunkptr x) {
2154
  size_t size = chunksize(x);
2155
  if (is_small(size)) {
2156
    bindex_t sidx = small_index(size);
2157
    sbinptr b = smallbin_at(m, sidx);
2158
    if (smallmap_is_marked(m, sidx)) {
2159
      mchunkptr p = b;
2160
      do {
2161
        if (p == x)
2162
          return 1;
2163
      } while ((p = p->fd) != b);
2164
    }
2165
  }
2166
  else {
2167
    bindex_t tidx;
2168
    compute_tree_index(size, tidx);
2169
    if (treemap_is_marked(m, tidx)) {
2170
      tchunkptr t = *treebin_at(m, tidx);
2171
      size_t sizebits = size << leftshift_for_tree_index(tidx);
2172
      while (t != 0 && chunksize(t) != size) {
2173
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2174
        sizebits <<= 1;
2175
      }
2176
      if (t != 0) {
2177
        tchunkptr u = t;
2178
        do {
2179
          if (u == (tchunkptr)x)
2180
            return 1;
2181
        } while ((u = u->fd) != t);
2182
      }
2183
    }
2184
  }
2185
  return 0;
2186
}
2187
 
2188
/* Traverse each chunk and check it; return total */
2189
static size_t traverse_and_check(mstate m) {
2190
  size_t sum = 0;
2191
  if (is_initialized(m)) {
2192
    msegmentptr s = &m->seg;
2193
    sum += m->topsize + TOP_FOOT_SIZE;
2194
    while (s != 0) {
2195
      mchunkptr q = align_as_chunk(s->base);
2196
      mchunkptr lastq = 0;
2197
      assert(pinuse(q));
2198
      while (segment_holds(s, q) &&
2199
             q != m->top && q->head != FENCEPOST_HEAD) {
2200
        sum += chunksize(q);
2201
        if (cinuse(q)) {
2202
          assert(!bin_find(m, q));
2203
          do_check_inuse_chunk(m, q);
2204
        }
2205
        else {
2206
          assert(q == m->dv || bin_find(m, q));
2207
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2208
          do_check_free_chunk(m, q);
2209
        }
2210
        lastq = q;
2211
        q = next_chunk(q);
2212
      }
2213
      s = s->next;
2214
    }
2215
  }
2216
  return sum;
2217
}
2218
 
2219
/* Check all properties of malloc_state. */
2220
static void do_check_malloc_state(mstate m) {
2221
  bindex_t i;
2222
  size_t total;
2223
  /* check bins */
2224
  for (i = 0; i < NSMALLBINS; ++i)
2225
    do_check_smallbin(m, i);
2226
  for (i = 0; i < NTREEBINS; ++i)
2227
    do_check_treebin(m, i);
2228
 
2229
  if (m->dvsize != 0) { /* check dv chunk */
2230
    do_check_any_chunk(m, m->dv);
2231
    assert(m->dvsize == chunksize(m->dv));
2232
    assert(m->dvsize >= MIN_CHUNK_SIZE);
2233
    assert(bin_find(m, m->dv) == 0);
2234
  }
2235
 
2236
  if (m->top != 0) {   /* check top chunk */
2237
    do_check_top_chunk(m, m->top);
2238
    assert(m->topsize == chunksize(m->top));
2239
    assert(m->topsize > 0);
2240
    assert(bin_find(m, m->top) == 0);
2241
  }
2242
 
2243
  total = traverse_and_check(m);
2244
  assert(total <= m->footprint);
2245
  assert(m->footprint <= m->max_footprint);
2246
}
2247
#endif /* DEBUG */
2248
 
2249
/* ----------------------------- statistics ------------------------------ */
2250
 
2251
#if !NO_MALLINFO
2252
static struct mallinfo internal_mallinfo(mstate m) {
2253
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2254
  if (!PREACTION(m)) {
2255
    check_malloc_state(m);
2256
    if (is_initialized(m)) {
2257
      size_t nfree = SIZE_T_ONE; /* top always free */
2258
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
2259
      size_t sum = mfree;
2260
      msegmentptr s = &m->seg;
2261
      while (s != 0) {
2262
        mchunkptr q = align_as_chunk(s->base);
2263
        while (segment_holds(s, q) &&
2264
               q != m->top && q->head != FENCEPOST_HEAD) {
2265
          size_t sz = chunksize(q);
2266
          sum += sz;
2267
          if (!cinuse(q)) {
2268
            mfree += sz;
2269
            ++nfree;
2270
          }
2271
          q = next_chunk(q);
2272
        }
2273
        s = s->next;
2274
      }
2275
 
2276
      nm.arena    = sum;
2277
      nm.ordblks  = nfree;
2278
      nm.hblkhd   = m->footprint - sum;
2279
      nm.usmblks  = m->max_footprint;
2280
      nm.uordblks = m->footprint - mfree;
2281
      nm.fordblks = mfree;
2282
      nm.keepcost = m->topsize;
2283
    }
2284
 
2285
    POSTACTION(m);
2286
  }
2287
  return nm;
2288
}
2289
#endif /* !NO_MALLINFO */
2290
 
2291
static void internal_malloc_stats(mstate m) {
2292
  if (!PREACTION(m)) {
2293
    size_t maxfp = 0;
2294
    size_t fp = 0;
2295
    size_t used = 0;
2296
    check_malloc_state(m);
2297
    if (is_initialized(m)) {
2298
      msegmentptr s = &m->seg;
2299
      maxfp = m->max_footprint;
2300
      fp = m->footprint;
2301
      used = fp - (m->topsize + TOP_FOOT_SIZE);
2302
 
2303
      while (s != 0) {
2304
        mchunkptr q = align_as_chunk(s->base);
2305
        while (segment_holds(s, q) &&
2306
               q != m->top && q->head != FENCEPOST_HEAD) {
2307
          if (!cinuse(q))
2308
            used -= chunksize(q);
2309
          q = next_chunk(q);
2310
        }
2311
        s = s->next;
2312
      }
2313
    }
2314
 
2315
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2316
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2317
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2318
 
2319
    POSTACTION(m);
2320
  }
2321
}
2322
 
2323
/* ----------------------- Operations on smallbins ----------------------- */
2324
 
2325
/*
2326
  Various forms of linking and unlinking are defined as macros.  Even
2327
  the ones for trees, which are very long but have very short typical
2328
  paths.  This is ugly but reduces reliance on inlining support of
2329
  compilers.
2330
*/
2331
 
2332
/* Link a free chunk into a smallbin  */
2333
#define insert_small_chunk(M, P, S) {\
2334
  bindex_t I  = small_index(S);\
2335
  mchunkptr B = smallbin_at(M, I);\
2336
  mchunkptr F = B;\
2337
  assert(S >= MIN_CHUNK_SIZE);\
2338
  if (!smallmap_is_marked(M, I))\
2339
    mark_smallmap(M, I);\
2340
  else if (RTCHECK(ok_address(M, B->fd)))\
2341
    F = B->fd;\
2342
  else {\
2343
    CORRUPTION_ERROR_ACTION(M);\
2344
  }\
2345
  B->fd = P;\
2346
  F->bk = P;\
2347
  P->fd = F;\
2348
  P->bk = B;\
2349
}
2350
 
2351
/* Unlink a chunk from a smallbin  */
2352
#define unlink_small_chunk(M, P, S) {\
2353
  mchunkptr F = P->fd;\
2354
  mchunkptr B = P->bk;\
2355
  bindex_t I = small_index(S);\
2356
  assert(P != B);\
2357
  assert(P != F);\
2358
  assert(chunksize(P) == small_index2size(I));\
2359
  if (F == B)\
2360
    clear_smallmap(M, I);\
2361
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2362
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2363
    F->bk = B;\
2364
    B->fd = F;\
2365
  }\
2366
  else {\
2367
    CORRUPTION_ERROR_ACTION(M);\
2368
  }\
2369
}
2370
 
2371
/* Unlink the first chunk from a smallbin */
2372
#define unlink_first_small_chunk(M, B, P, I) {\
2373
  mchunkptr F = P->fd;\
2374
  assert(P != B);\
2375
  assert(P != F);\
2376
  assert(chunksize(P) == small_index2size(I));\
2377
  if (B == F)\
2378
    clear_smallmap(M, I);\
2379
  else if (RTCHECK(ok_address(M, F))) {\
2380
    B->fd = F;\
2381
    F->bk = B;\
2382
  }\
2383
  else {\
2384
    CORRUPTION_ERROR_ACTION(M);\
2385
  }\
2386
}
2387
 
2388
/* Replace dv node, binning the old one */
2389
/* Used only when dvsize known to be small */
2390
#define replace_dv(M, P, S) {\
2391
  size_t DVS = M->dvsize;\
2392
  if (DVS != 0) {\
2393
    mchunkptr DV = M->dv;\
2394
    assert(is_small(DVS));\
2395
    insert_small_chunk(M, DV, DVS);\
2396
  }\
2397
  M->dvsize = S;\
2398
  M->dv = P;\
2399
}
2400
 
2401
/* ------------------------- Operations on trees ------------------------- */
2402
 
2403
/* Insert chunk into tree */
2404
#define insert_large_chunk(M, X, S) {\
2405
  tbinptr* H;\
2406
  bindex_t I;\
2407
  compute_tree_index(S, I);\
2408
  H = treebin_at(M, I);\
2409
  X->index = I;\
2410
  X->child[0] = X->child[1] = 0;\
2411
  if (!treemap_is_marked(M, I)) {\
2412
    mark_treemap(M, I);\
2413
    *H = X;\
2414
    X->parent = (tchunkptr)H;\
2415
    X->fd = X->bk = X;\
2416
  }\
2417
  else {\
2418
    tchunkptr T = *H;\
2419
    size_t K = S << leftshift_for_tree_index(I);\
2420
    for (;;) {\
2421
      if (chunksize(T) != S) {\
2422
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2423
        K <<= 1;\
2424
        if (*C != 0)\
2425
          T = *C;\
2426
        else if (RTCHECK(ok_address(M, C))) {\
2427
          *C = X;\
2428
          X->parent = T;\
2429
          X->fd = X->bk = X;\
2430
          break;\
2431
        }\
2432
        else {\
2433
          CORRUPTION_ERROR_ACTION(M);\
2434
          break;\
2435
        }\
2436
      }\
2437
      else {\
2438
        tchunkptr F = T->fd;\
2439
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2440
          T->fd = F->bk = X;\
2441
          X->fd = F;\
2442
          X->bk = T;\
2443
          X->parent = 0;\
2444
          break;\
2445
        }\
2446
        else {\
2447
          CORRUPTION_ERROR_ACTION(M);\
2448
          break;\
2449
        }\
2450
      }\
2451
    }\
2452
  }\
2453
}
2454
 
2455
/*
2456
  Unlink steps:
2457
 
2458
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2459
     and choose its bk node as its replacement.
2460
  2. If x was the last node of its size, but not a leaf node, it must
2461
     be replaced with a leaf node (not merely one with an open left or
2462
     right), to make sure that lefts and rights of descendants
2463
     correspond properly to bit masks.  We use the rightmost descendant
2464
     of x.  We could use any other leaf, but this is easy to locate and
2465
     tends to counteract removal of leftmosts elsewhere, and so keeps
2466
     paths shorter than minimally guaranteed.  This doesn't loop much
2467
     because on average a node in a tree is near the bottom.
2468
  3. If x is the base of a chain (i.e., has parent links) relink
2469
     x's parent and children to x's replacement (or null if none).
2470
*/
2471
 
2472
#define unlink_large_chunk(M, X) {\
2473
  tchunkptr XP = X->parent;\
2474
  tchunkptr R;\
2475
  if (X->bk != X) {\
2476
    tchunkptr F = X->fd;\
2477
    R = X->bk;\
2478
    if (RTCHECK(ok_address(M, F))) {\
2479
      F->bk = R;\
2480
      R->fd = F;\
2481
    }\
2482
    else {\
2483
      CORRUPTION_ERROR_ACTION(M);\
2484
    }\
2485
  }\
2486
  else {\
2487
    tchunkptr* RP;\
2488
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2489
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2490
      tchunkptr* CP;\
2491
      while ((*(CP = &(R->child[1])) != 0) ||\
2492
             (*(CP = &(R->child[0])) != 0)) {\
2493
        R = *(RP = CP);\
2494
      }\
2495
      if (RTCHECK(ok_address(M, RP)))\
2496
        *RP = 0;\
2497
      else {\
2498
        CORRUPTION_ERROR_ACTION(M);\
2499
      }\
2500
    }\
2501
  }\
2502
  if (XP != 0) {\
2503
    tbinptr* H = treebin_at(M, X->index);\
2504
    if (X == *H) {\
2505
      if ((*H = R) == 0) \
2506
        clear_treemap(M, X->index);\
2507
    }\
2508
    else if (RTCHECK(ok_address(M, XP))) {\
2509
      if (XP->child[0] == X) \
2510
        XP->child[0] = R;\
2511
      else \
2512
        XP->child[1] = R;\
2513
    }\
2514
    else\
2515
      CORRUPTION_ERROR_ACTION(M);\
2516
    if (R != 0) {\
2517
      if (RTCHECK(ok_address(M, R))) {\
2518
        tchunkptr C0, C1;\
2519
        R->parent = XP;\
2520
        if ((C0 = X->child[0]) != 0) {\
2521
          if (RTCHECK(ok_address(M, C0))) {\
2522
            R->child[0] = C0;\
2523
            C0->parent = R;\
2524
          }\
2525
          else\
2526
            CORRUPTION_ERROR_ACTION(M);\
2527
        }\
2528
        if ((C1 = X->child[1]) != 0) {\
2529
          if (RTCHECK(ok_address(M, C1))) {\
2530
            R->child[1] = C1;\
2531
            C1->parent = R;\
2532
          }\
2533
          else\
2534
            CORRUPTION_ERROR_ACTION(M);\
2535
        }\
2536
      }\
2537
      else\
2538
        CORRUPTION_ERROR_ACTION(M);\
2539
    }\
2540
  }\
2541
}
2542
 
2543
/* Relays to large vs small bin operations */
2544
 
2545
#define insert_chunk(M, P, S)\
2546
  if (is_small(S)) insert_small_chunk(M, P, S)\
2547
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2548
 
2549
#define unlink_chunk(M, P, S)\
2550
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2551
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2552
 
2553
 
2554
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2555
 
2556
#if ONLY_MSPACES
2557
#define internal_malloc(m, b) mspace_malloc(m, b)
2558
#define internal_free(m, mem) mspace_free(m,mem);
2559
#else /* ONLY_MSPACES */
2560
#if MSPACES
2561
#define internal_malloc(m, b)\
2562
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2563
#define internal_free(m, mem)\
2564
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2565
#else /* MSPACES */
2566
#define internal_malloc(m, b) dlmalloc(b)
2567
#define internal_free(m, mem) dlfree(mem)
2568
#endif /* MSPACES */
2569
#endif /* ONLY_MSPACES */
2570
 
2571
/* -----------------------  Direct-mmapping chunks ----------------------- */
2572
 
2573
/*
2574
  Directly mmapped chunks are set up with an offset to the start of
2575
  the mmapped region stored in the prev_foot field of the chunk. This
2576
  allows reconstruction of the required argument to MUNMAP when freed,
2577
  and also allows adjustment of the returned chunk to meet alignment
2578
  requirements (especially in memalign).  There is also enough space
2579
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2580
  the PINUSE bit so frees can be checked.
2581
*/
2582
 
2583
/* Malloc using mmap */
2584
static void* mmap_alloc(mstate m, size_t nb) {
2585
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2586
  if (mmsize > nb) {     /* Check for wrap around 0 */
2587
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2588
    if (mm != CMFAIL) {
2589
      size_t offset = align_offset(chunk2mem(mm));
2590
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2591
      mchunkptr p = (mchunkptr)(mm + offset);
2592
      p->prev_foot = offset | IS_MMAPPED_BIT;
2593
      (p)->head = (psize|CINUSE_BIT);
2594
      mark_inuse_foot(m, p, psize);
2595
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2596
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2597
 
2598
      if (mm < m->least_addr)
2599
        m->least_addr = mm;
2600
      if ((m->footprint += mmsize) > m->max_footprint)
2601
        m->max_footprint = m->footprint;
2602
      assert(is_aligned(chunk2mem(p)));
2603
      check_mmapped_chunk(m, p);
2604
      return chunk2mem(p);
2605
    }
2606
  }
2607
  return 0;
2608
}
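/*
  Illustrative sketch (not used by this file): recovering the original
  mapping extent from a directly mmapped chunk p as set up by mmap_alloc
  above, mirroring the length computation in do_check_mmapped_chunk in
  the DEBUG section.  The helper name example_mmap_extent is
  hypothetical.
*/
#if 0
static void example_mmap_extent(mchunkptr p, char** base, size_t* len) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT; /* distance back to mapping start */
  *base = (char*)p - offset;                      /* mapping base the unmapping path needs */
  *len  = chunksize(p) + offset + MMAP_FOOT_PAD;  /* chunk + leading offset + trailing pad */
}
#endif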
2609
 
2610
/* Realloc using mmap */
2611
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2612
  size_t oldsize = chunksize(oldp);
2613
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2614
    return 0;
2615
  /* Keep old chunk if big enough but not too big */
2616
  if (oldsize >= nb + SIZE_T_SIZE &&
2617
      (oldsize - nb) <= (mparams.granularity << 1))
2618
    return oldp;
2619
  else {
2620
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2621
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2622
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2623
                                         CHUNK_ALIGN_MASK);
2624
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2625
                                  oldmmsize, newmmsize, 1);
2626
    if (cp != CMFAIL) {
2627
      mchunkptr newp = (mchunkptr)(cp + offset);
2628
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2629
      newp->head = (psize|CINUSE_BIT);
2630
      mark_inuse_foot(m, newp, psize);
2631
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2632
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2633
 
2634
      if (cp < m->least_addr)
2635
        m->least_addr = cp;
2636
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2637
        m->max_footprint = m->footprint;
2638
      check_mmapped_chunk(m, newp);
2639
      return newp;
2640
    }
2641
  }
2642
  return 0;
2643
}
2644
 
2645
/* -------------------------- mspace management -------------------------- */
2646
 
2647
/* Initialize top chunk and its size */
2648
static void init_top(mstate m, mchunkptr p, size_t psize) {
2649
  /* Ensure alignment */
2650
  size_t offset = align_offset(chunk2mem(p));
2651
  p = (mchunkptr)((char*)p + offset);
2652
  psize -= offset;
2653
 
2654
  m->top = p;
2655
  m->topsize = psize;
2656
  p->head = psize | PINUSE_BIT;
2657
  /* set size of fake trailing chunk holding overhead space only once */
2658
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2659
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2660
}
2661
 
2662
/* Initialize bins for a new mstate that is otherwise zeroed out */
2663
static void init_bins(mstate m) {
2664
  /* Establish circular links for smallbins */
2665
  bindex_t i;
2666
  for (i = 0; i < NSMALLBINS; ++i) {
2667
    sbinptr bin = smallbin_at(m,i);
2668
    bin->fd = bin->bk = bin;
2669
  }
2670
}
2671
 
2672
#if PROCEED_ON_ERROR
2673
 
2674
/* default corruption action */
2675
static void reset_on_error(mstate m) {
2676
  int i;
2677
  ++malloc_corruption_error_count;
2678
  /* Reinitialize fields to forget about all memory */
2679
  m->smallbins = m->treebins = 0;
2680
  m->dvsize = m->topsize = 0;
2681
  m->seg.base = 0;
2682
  m->seg.size = 0;
2683
  m->seg.next = 0;
2684
  m->top = m->dv = 0;
2685
  for (i = 0; i < NTREEBINS; ++i)
2686
    *treebin_at(m, i) = 0;
2687
  init_bins(m);
2688
}
2689
#endif /* PROCEED_ON_ERROR */
2690
 
2691
/* Allocate chunk and prepend remainder with chunk in successor base. */
2692
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2693
                           size_t nb) {
2694
  mchunkptr p = align_as_chunk(newbase);
2695
  mchunkptr oldfirst = align_as_chunk(oldbase);
2696
  size_t psize = (char*)oldfirst - (char*)p;
2697
  mchunkptr q = chunk_plus_offset(p, nb);
2698
  size_t qsize = psize - nb;
2699
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2700
 
2701
  assert((char*)oldfirst > (char*)q);
2702
  assert(pinuse(oldfirst));
2703
  assert(qsize >= MIN_CHUNK_SIZE);
2704
 
2705
  /* consolidate remainder with first chunk of old base */
2706
  if (oldfirst == m->top) {
2707
    size_t tsize = m->topsize += qsize;
2708
    m->top = q;
2709
    q->head = tsize | PINUSE_BIT;
2710
    check_top_chunk(m, q);
2711
  }
2712
  else if (oldfirst == m->dv) {
2713
    size_t dsize = m->dvsize += qsize;
2714
    m->dv = q;
2715
    set_size_and_pinuse_of_free_chunk(q, dsize);
2716
  }
2717
  else {
2718
    if (!cinuse(oldfirst)) {
2719
      size_t nsize = chunksize(oldfirst);
2720
      unlink_chunk(m, oldfirst, nsize);
2721
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2722
      qsize += nsize;
2723
    }
2724
    set_free_with_pinuse(q, qsize, oldfirst);
2725
    insert_chunk(m, q, qsize);
2726
    check_free_chunk(m, q);
2727
  }
2728
 
2729
  check_malloced_chunk(m, chunk2mem(p), nb);
2730
  return chunk2mem(p);
2731
}
2732
 
2733
 
2734
/* Add a segment to hold a new noncontiguous region */
2735
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2736
  /* Determine locations and sizes of segment, fenceposts, old top */
2737
  char* old_top = (char*)m->top;
2738
  msegmentptr oldsp = segment_holding(m, old_top);
2739
  char* old_end = oldsp->base + oldsp->size;
2740
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2741
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2742
  size_t offset = align_offset(chunk2mem(rawsp));
2743
  char* asp = rawsp + offset;
2744
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2745
  mchunkptr sp = (mchunkptr)csp;
2746
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2747
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2748
  mchunkptr p = tnext;
2749
  int nfences = 0;
2750
 
2751
  /* reset top to new space */
2752
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2753
 
2754
  /* Set up segment record */
2755
  assert(is_aligned(ss));
2756
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2757
  *ss = m->seg; /* Push current record */
2758
  m->seg.base = tbase;
2759
  m->seg.size = tsize;
2760
  m->seg.sflags = mmapped;
2761
  m->seg.next = ss;
2762
 
2763
  /* Insert trailing fenceposts */
2764
  for (;;) {
2765
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2766
    p->head = FENCEPOST_HEAD;
2767
    ++nfences;
2768
    if ((char*)(&(nextp->head)) < old_end)
2769
      p = nextp;
2770
    else
2771
      break;
2772
  }
2773
  assert(nfences >= 2);
2774
 
2775
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2776
  if (csp != old_top) {
2777
    mchunkptr q = (mchunkptr)old_top;
2778
    size_t psize = csp - old_top;
2779
    mchunkptr tn = chunk_plus_offset(q, psize);
2780
    set_free_with_pinuse(q, psize, tn);
2781
    insert_chunk(m, q, psize);
2782
  }
2783
 
2784
  check_top_chunk(m, m->top);
2785
}
2786
 
2787
/* -------------------------- System allocation -------------------------- */
2788
 
2789
/* Get memory from system using MORECORE or MMAP */
2790
static void* sys_alloc(mstate m, size_t nb) {
2791
  char* tbase = CMFAIL;
2792
  size_t tsize = 0;
2793
  flag_t mmap_flag = 0;
2794
 
2795
  init_mparams();
2796
 
2797
  /* Directly map large chunks */
2798
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2799
    void* mem = mmap_alloc(m, nb);
2800
    if (mem != 0)
2801
      return mem;
2802
  }
2803
 
2804
  /*
2805
    Try getting memory in any of three ways (in most-preferred to
2806
    least-preferred order):
2807
    1. A call to MORECORE that can normally contiguously extend memory.
2808
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2809
       main space is mmapped or a previous contiguous call failed)
2810
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2811
       Note that under the default settings, if MORECORE is unable to
2812
       fulfill a request, and HAVE_MMAP is true, then mmap is
2813
       used as a noncontiguous system allocator. This is a useful backup
2814
       strategy for systems with holes in address spaces -- in this case
2815
       sbrk cannot contiguously expand the heap, but mmap may be able to
2816
       find space.
2817
    3. A call to MORECORE that cannot usually contiguously extend memory.
2818
       (disabled if not HAVE_MORECORE)
2819
  */
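  /*
    In the code below, option (1) is the MORECORE_CONTIGUOUS &&
    !use_noncontiguous(m) branch, option (2) is the HAVE_MMAP branch
    entered when that fails, and option (3) is the final HAVE_MORECORE
    branch used when both of the others fail.
  */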
2820
 
2821
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2822
    char* br = CMFAIL;
2823
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2824
    size_t asize = 0;
2825
    ACQUIRE_MORECORE_LOCK();
2826
 
2827
    if (ss == 0) {  /* First time through or recovery */
2828
      char* base = (char*)CALL_MORECORE(0);
2829
      if (base != CMFAIL) {
2830
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2831
        /* Adjust to end on a page boundary */
2832
        if (!is_page_aligned(base))
2833
          asize += (page_align((size_t)base) - (size_t)base);
2834
        /* Can't call MORECORE if size is negative when treated as signed */
2835
        if (asize < HALF_MAX_SIZE_T &&
2836
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2837
          tbase = base;
2838
          tsize = asize;
2839
        }
2840
      }
2841
    }
2842
    else {
2843
      /* Subtract out existing available top space from MORECORE request. */
2844
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2845
      /* Use mem here only if it did contiguously extend old space */
2846
      if (asize < HALF_MAX_SIZE_T &&
2847
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2848
        tbase = br;
2849
        tsize = asize;
2850
      }
2851
    }
2852
 
2853
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2854
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2855
        if (asize < HALF_MAX_SIZE_T &&
2856
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2857
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2858
          if (esize < HALF_MAX_SIZE_T) {
2859
            char* end = (char*)CALL_MORECORE(esize);
2860
            if (end != CMFAIL)
2861
              asize += esize;
2862
            else {            /* Can't use; try to release */
2863
              CALL_MORECORE(-asize);
2864
              br = CMFAIL;
2865
            }
2866
          }
2867
        }
2868
      }
2869
      if (br != CMFAIL) {    /* Use the space we did get */
2870
        tbase = br;
2871
        tsize = asize;
2872
      }
2873
      else
2874
        disable_contiguous(m); /* Don't try contiguous path in the future */
2875
    }
2876
 
2877
    RELEASE_MORECORE_LOCK();
2878
  }
2879
 
2880
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2881
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2882
    size_t rsize = granularity_align(req);
2883
    if (rsize > nb) { /* Fail if wraps around zero */
2884
      char* mp = (char*)(CALL_MMAP(rsize));
2885
      if (mp != CMFAIL) {
2886
        tbase = mp;
2887
        tsize = rsize;
2888
        mmap_flag = IS_MMAPPED_BIT;
2889
      }
2890
    }
2891
  }
2892
 
2893
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2894
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2895
    if (asize < HALF_MAX_SIZE_T) {
2896
      char* br = CMFAIL;
2897
      char* end = CMFAIL;
2898
      ACQUIRE_MORECORE_LOCK();
2899
      br = (char*)(CALL_MORECORE(asize));
2900
      end = (char*)(CALL_MORECORE(0));
2901
      RELEASE_MORECORE_LOCK();
2902
      if (br != CMFAIL && end != CMFAIL && br < end) {
2903
        size_t ssize = end - br;
2904
        if (ssize > nb + TOP_FOOT_SIZE) {
2905
          tbase = br;
2906
          tsize = ssize;
2907
        }
2908
      }
2909
    }
2910
  }
2911
 
2912
  if (tbase != CMFAIL) {
2913
 
2914
    if ((m->footprint += tsize) > m->max_footprint)
2915
      m->max_footprint = m->footprint;
2916
 
2917
    if (!is_initialized(m)) { /* first-time initialization */
2918
      m->seg.base = m->least_addr = tbase;
2919
      m->seg.size = tsize;
2920
      m->seg.sflags = mmap_flag;
2921
      m->magic = mparams.magic;
2922
      init_bins(m);
2923
      if (is_global(m))
2924
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2925
      else {
2926
        /* Offset top by embedded malloc_state */
2927
        mchunkptr mn = next_chunk(mem2chunk(m));
2928
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2929
      }
2930
    }
2931
 
2932
    else {
2933
      /* Try to merge with an existing segment */
2934
      msegmentptr sp = &m->seg;
2935
      while (sp != 0 && tbase != sp->base + sp->size)
2936
        sp = sp->next;
2937
      if (sp != 0 &&
2938
          !is_extern_segment(sp) &&
2939
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2940
          segment_holds(sp, m->top)) { /* append */
2941
        sp->size += tsize;
2942
        init_top(m, m->top, m->topsize + tsize);
2943
      }
2944
      else {
2945
        if (tbase < m->least_addr)
2946
          m->least_addr = tbase;
2947
        sp = &m->seg;
2948
        while (sp != 0 && sp->base != tbase + tsize)
2949
          sp = sp->next;
2950
        if (sp != 0 &&
2951
            !is_extern_segment(sp) &&
2952
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2953
          char* oldbase = sp->base;
2954
          sp->base = tbase;
2955
          sp->size += tsize;
2956
          return prepend_alloc(m, tbase, oldbase, nb);
2957
        }
2958
        else
2959
          add_segment(m, tbase, tsize, mmap_flag);
2960
      }
2961
    }
2962
 
2963
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2964
      size_t rsize = m->topsize -= nb;
2965
      mchunkptr p = m->top;
2966
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2967
      r->head = rsize | PINUSE_BIT;
2968
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2969
      check_top_chunk(m, m->top);
2970
      check_malloced_chunk(m, chunk2mem(p), nb);
2971
      return chunk2mem(p);
2972
    }
2973
  }
2974
 
2975
  MALLOC_FAILURE_ACTION;
2976
  return 0;
2977
}
2978
 
2979
/* -----------------------  system deallocation -------------------------- */
2980
 
2981
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2982
static size_t release_unused_segments(mstate m) {
2983
  size_t released = 0;
2984
  msegmentptr pred = &m->seg;
2985
  msegmentptr sp = pred->next;
2986
  while (sp != 0) {
2987
    char* base = sp->base;
2988
    size_t size = sp->size;
2989
    msegmentptr next = sp->next;
2990
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2991
      mchunkptr p = align_as_chunk(base);
2992
      size_t psize = chunksize(p);
2993
      /* Can unmap if first chunk holds entire segment and not pinned */
2994
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2995
        tchunkptr tp = (tchunkptr)p;
2996
        assert(segment_holds(sp, (char*)sp));
2997
        if (p == m->dv) {
2998
          m->dv = 0;
2999
          m->dvsize = 0;
3000
        }
3001
        else {
3002
          unlink_large_chunk(m, tp);
3003
        }
3004
        if (CALL_MUNMAP(base, size) == 0) {
3005
          released += size;
3006
          m->footprint -= size;
3007
          /* unlink obsoleted record */
3008
          sp = pred;
3009
          sp->next = next;
3010
        }
3011
        else { /* back out if cannot unmap */
3012
          insert_large_chunk(m, tp, psize);
3013
        }
3014
      }
3015
    }
3016
    pred = sp;
3017
    sp = next;
3018
  }
3019
  return released;
3020
}
3021
 
3022
static int sys_trim(mstate m, size_t pad) {
3023
  size_t released = 0;
3024
  if (pad < MAX_REQUEST && is_initialized(m)) {
3025
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3026
 
3027
    if (m->topsize > pad) {
3028
      /* Shrink top space in granularity-size units, keeping at least one */
3029
      size_t unit = mparams.granularity;
3030
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3031
                      SIZE_T_ONE) * unit;
3032
      msegmentptr sp = segment_holding(m, (char*)m->top);
3033
 
3034
      if (!is_extern_segment(sp)) {
3035
        if (is_mmapped_segment(sp)) {
3036
          if (HAVE_MMAP &&
3037
              sp->size >= extra &&
3038
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
3039
            size_t newsize = sp->size - extra;
3040
            /* Prefer mremap, fall back to munmap */
3041
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3042
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3043
              released = extra;
3044
            }
3045
          }
3046
        }
3047
        else if (HAVE_MORECORE) {
3048
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3049
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3050
          ACQUIRE_MORECORE_LOCK();
3051
          {
3052
            /* Make sure end of memory is where we last set it. */
3053
            char* old_br = (char*)(CALL_MORECORE(0));
3054
            if (old_br == sp->base + sp->size) {
3055
              char* rel_br = (char*)(CALL_MORECORE(-extra));
3056
              char* new_br = (char*)(CALL_MORECORE(0));
3057
              if (rel_br != CMFAIL && new_br < old_br)
3058
                released = old_br - new_br;
3059
            }
3060
          }
3061
          RELEASE_MORECORE_LOCK();
3062
        }
3063
      }
3064
 
3065
      if (released != 0) {
3066
        sp->size -= released;
3067
        m->footprint -= released;
3068
        init_top(m, m->top, m->topsize - released);
3069
        check_top_chunk(m, m->top);
3070
      }
3071
    }
3072
 
3073
    /* Unmap any unused mmapped segments */
3074
    if (HAVE_MMAP)
3075
      released += release_unused_segments(m);
3076
 
3077
    /* On failure, disable autotrim to avoid repeated failed future calls */
3078
    if (released == 0)
3079
      m->trim_check = MAX_SIZE_T;
3080
  }
3081
 
3082
  return (released != 0)? 1 : 0;
3083
}
3084
 
3085
/* ---------------------------- malloc support --------------------------- */
3086
 
3087
/* allocate a large request from the best fitting chunk in a treebin */
3088
static void* tmalloc_large(mstate m, size_t nb) {
3089
  tchunkptr v = 0;
3090
  size_t rsize = -nb; /* Unsigned negation */
3091
  tchunkptr t;
3092
  bindex_t idx;
3093
  compute_tree_index(nb, idx);
3094
 
3095
  if ((t = *treebin_at(m, idx)) != 0) {
3096
    /* Traverse tree for this bin looking for node with size == nb */
3097
    size_t sizebits = nb << leftshift_for_tree_index(idx);
3098
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
3099
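    /*
      Treebins are bitwise tries keyed on size: sizebits holds nb shifted
      so that the next bit to discriminate on sits at the top of the word;
      each iteration follows the child selected by that top bit and then
      shifts sizebits left by one.  rst remembers the deepest right subtree
      not taken; if the descent ends without an exact fit, it is the least
      subtree holding sizes greater than nb.
    */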
    for (;;) {
3100
      tchunkptr rt;
3101
      size_t trem = chunksize(t) - nb;
3102
      if (trem < rsize) {
3103
        v = t;
3104
        if ((rsize = trem) == 0)
3105
          break;
3106
      }
3107
      rt = t->child[1];
3108
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3109
      if (rt != 0 && rt != t)
3110
        rst = rt;
3111
      if (t == 0) {
3112
        t = rst; /* set t to least subtree holding sizes > nb */
3113
        break;
3114
      }
3115
      sizebits <<= 1;
3116
    }
3117
  }
3118
 
3119
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3120
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3121
    if (leftbits != 0) {
3122
      bindex_t i;
3123
      binmap_t leastbit = least_bit(leftbits);
3124
      compute_bit2idx(leastbit, i);
3125
      t = *treebin_at(m, i);
3126
    }
3127
  }
3128
 
3129
  while (t != 0) { /* find smallest of tree or subtree */
3130
    size_t trem = chunksize(t) - nb;
3131
    if (trem < rsize) {
3132
      rsize = trem;
3133
      v = t;
3134
    }
3135
    t = leftmost_child(t);
3136
  }
3137
 
3138
  /*  If dv is a better fit, return 0 so malloc will use it */
3139
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3140
    if (RTCHECK(ok_address(m, v))) { /* split */
3141
      mchunkptr r = chunk_plus_offset(v, nb);
3142
      assert(chunksize(v) == rsize + nb);
3143
      if (RTCHECK(ok_next(v, r))) {
3144
        unlink_large_chunk(m, v);
3145
        if (rsize < MIN_CHUNK_SIZE)
3146
          set_inuse_and_pinuse(m, v, (rsize + nb));
3147
        else {
3148
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3149
          set_size_and_pinuse_of_free_chunk(r, rsize);
3150
          insert_chunk(m, r, rsize);
3151
        }
3152
        return chunk2mem(v);
3153
      }
3154
    }
3155
    CORRUPTION_ERROR_ACTION(m);
3156
  }
3157
  return 0;
3158
}
3159
 
3160
/* allocate a small request from the best fitting chunk in a treebin */
3161
static void* tmalloc_small(mstate m, size_t nb) {
3162
  tchunkptr t, v;
3163
  size_t rsize;
3164
  bindex_t i;
3165
  binmap_t leastbit = least_bit(m->treemap);
3166
  compute_bit2idx(leastbit, i);
3167
 
3168
  v = t = *treebin_at(m, i);
3169
  rsize = chunksize(t) - nb;
3170
 
3171
  while ((t = leftmost_child(t)) != 0) {
3172
    size_t trem = chunksize(t) - nb;
3173
    if (trem < rsize) {
3174
      rsize = trem;
3175
      v = t;
3176
    }
3177
  }
3178
 
3179
  if (RTCHECK(ok_address(m, v))) {
3180
    mchunkptr r = chunk_plus_offset(v, nb);
3181
    assert(chunksize(v) == rsize + nb);
3182
    if (RTCHECK(ok_next(v, r))) {
3183
      unlink_large_chunk(m, v);
3184
      if (rsize < MIN_CHUNK_SIZE)
3185
        set_inuse_and_pinuse(m, v, (rsize + nb));
3186
      else {
3187
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3188
        set_size_and_pinuse_of_free_chunk(r, rsize);
3189
        replace_dv(m, r, rsize);
3190
      }
3191
      return chunk2mem(v);
3192
    }
3193
  }
3194
 
3195
  CORRUPTION_ERROR_ACTION(m);
3196
  return 0;
3197
}
3198
 
3199
/* --------------------------- realloc support --------------------------- */
3200
 
3201
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3202
  if (bytes >= MAX_REQUEST) {
3203
    MALLOC_FAILURE_ACTION;
3204
    return 0;
3205
  }
3206
  if (!PREACTION(m)) {
3207
    mchunkptr oldp = mem2chunk(oldmem);
3208
    size_t oldsize = chunksize(oldp);
3209
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3210
    mchunkptr newp = 0;
3211
    void* extra = 0;
3212
 
3213
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3214
 
3215
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3216
                ok_next(oldp, next) && ok_pinuse(next))) {
3217
      size_t nb = request2size(bytes);
3218
      if (is_mmapped(oldp))
3219
        newp = mmap_resize(m, oldp, nb);
3220
      else if (oldsize >= nb) { /* already big enough */
3221
        size_t rsize = oldsize - nb;
3222
        newp = oldp;
3223
        if (rsize >= MIN_CHUNK_SIZE) {
3224
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3225
          set_inuse(m, newp, nb);
3226
          set_inuse(m, remainder, rsize);
3227
          extra = chunk2mem(remainder);
3228
        }
3229
      }
3230
      else if (next == m->top && oldsize + m->topsize > nb) {
3231
        /* Expand into top */
3232
        size_t newsize = oldsize + m->topsize;
3233
        size_t newtopsize = newsize - nb;
3234
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3235
        set_inuse(m, oldp, nb);
3236
        newtop->head = newtopsize | PINUSE_BIT;
3237
        m->top = newtop;
3238
        m->topsize = newtopsize;
3239
        newp = oldp;
3240
      }
3241
    }
3242
    else {
3243
      USAGE_ERROR_ACTION(m, oldmem);
3244
      POSTACTION(m);
3245
      return 0;
3246
    }
3247
 
3248
    POSTACTION(m);
3249
 
3250
    if (newp != 0) {
3251
      if (extra != 0) {
3252
        internal_free(m, extra);
3253
      }
3254
      check_inuse_chunk(m, newp);
3255
      return chunk2mem(newp);
3256
    }
3257
    else {
3258
      void* newmem = internal_malloc(m, bytes);
3259
      if (newmem != 0) {
3260
        size_t oc = oldsize - overhead_for(oldp);
3261
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3262
        internal_free(m, oldmem);
3263
      }
3264
      return newmem;
3265
    }
3266
  }
3267
  return 0;
3268
}
3269
 
3270
/* --------------------------- memalign support -------------------------- */
3271
 
3272
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3273
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3274
    return internal_malloc(m, bytes);
3275
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3276
    alignment = MIN_CHUNK_SIZE;
3277
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3278
    size_t a = MALLOC_ALIGNMENT << 1;
3279
    while (a < alignment) a <<= 1;
3280
    alignment = a;
3281
  }
3282
 
3283
  if (bytes >= MAX_REQUEST - alignment) {
3284
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3285
      MALLOC_FAILURE_ACTION;
3286
    }
3287
  }
3288
  else {
3289
    size_t nb = request2size(bytes);
3290
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3291
    char* mem = (char*)internal_malloc(m, req);
3292
    if (mem != 0) {
3293
      void* leader = 0;
3294
      void* trailer = 0;
3295
      mchunkptr p = mem2chunk(mem);
3296
 
3297
      if (PREACTION(m)) return 0;
3298
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3299
        /*
3300
          Find an aligned spot inside chunk.  Since we need to give
3301
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3302
          the first calculation places us at a spot with less than
3303
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3304
          We've allocated enough total room so that this is always
3305
          possible.
3306
        */
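        /*
          Worked example (assuming 4-byte size_t, so TWO_SIZE_T_SIZES == 8
          and MIN_CHUNK_SIZE == 16): with alignment == 64 and mem == 0x10014,
          the aligned payload address is (0x10014 + 63) & -64 == 0x10040, so
          br == mem2chunk(0x10040) == 0x10038 and the leader is
          0x10038 - 0x1000c == 44 bytes, which is >= MIN_CHUNK_SIZE, so
          pos == br.  Had the leader been smaller than MIN_CHUNK_SIZE, we
          would instead step forward one full alignment unit.
        */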
3307
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3308
                                                       alignment -
3309
                                                       SIZE_T_ONE)) &
3310
                                             -alignment));
3311
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3312
          br : br+alignment;
3313
        mchunkptr newp = (mchunkptr)pos;
3314
        size_t leadsize = pos - (char*)(p);
3315
        size_t newsize = chunksize(p) - leadsize;
3316
 
3317
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3318
          newp->prev_foot = p->prev_foot + leadsize;
3319
          newp->head = (newsize|CINUSE_BIT);
3320
        }
3321
        else { /* Otherwise, give back leader, use the rest */
3322
          set_inuse(m, newp, newsize);
3323
          set_inuse(m, p, leadsize);
3324
          leader = chunk2mem(p);
3325
        }
3326
        p = newp;
3327
      }
3328
 
3329
      /* Give back spare room at the end */
3330
      if (!is_mmapped(p)) {
3331
        size_t size = chunksize(p);
3332
        if (size > nb + MIN_CHUNK_SIZE) {
3333
          size_t remainder_size = size - nb;
3334
          mchunkptr remainder = chunk_plus_offset(p, nb);
3335
          set_inuse(m, p, nb);
3336
          set_inuse(m, remainder, remainder_size);
3337
          trailer = chunk2mem(remainder);
3338
        }
3339
      }
3340
 
3341
      assert (chunksize(p) >= nb);
3342
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3343
      check_inuse_chunk(m, p);
3344
      POSTACTION(m);
3345
      if (leader != 0) {
3346
        internal_free(m, leader);
3347
      }
3348
      if (trailer != 0) {
3349
        internal_free(m, trailer);
3350
      }
3351
      return chunk2mem(p);
3352
    }
3353
  }
3354
  return 0;
3355
}
3356
 
3357
/* ------------------------ comalloc/coalloc support --------------------- */
3358
 
3359
static void** ialloc(mstate m,
3360
                     size_t n_elements,
3361
                     size_t* sizes,
3362
                     int opts,
3363
                     void* chunks[]) {
3364
  /*
3365
    This provides common support for independent_X routines, handling
3366
    all of the combinations that can result.
3367
 
3368
    The opts arg has:
3369
    bit 0 set if all elements are same size (using sizes[0])
3370
    bit 1 set if elements should be zeroed
3371
  */
3372
 
3373
  size_t    element_size;   /* chunksize of each element, if all same */
3374
  size_t    contents_size;  /* total size of elements */
3375
  size_t    array_size;     /* request size of pointer array */
3376
  void*     mem;            /* malloced aggregate space */
3377
  mchunkptr p;              /* corresponding chunk */
3378
  size_t    remainder_size; /* remaining bytes while splitting */
3379
  void**    marray;         /* either "chunks" or malloced ptr array */
3380
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3381
  flag_t    was_enabled;    /* to disable mmap */
3382
  size_t    size;
3383
  size_t    i;
3384
 
3385
  /* compute array length, if needed */
3386
  if (chunks != 0) {
3387
    if (n_elements == 0)
3388
      return chunks; /* nothing to do */
3389
    marray = chunks;
3390
    array_size = 0;
3391
  }
3392
  else {
3393
    /* if empty req, must still return chunk representing empty array */
3394
    if (n_elements == 0)
3395
      return (void**)internal_malloc(m, 0);
3396
    marray = 0;
3397
    array_size = request2size(n_elements * (sizeof(void*)));
3398
  }
3399
 
3400
  /* compute total element size */
3401
  if (opts & 0x1) { /* all-same-size */
3402
    element_size = request2size(*sizes);
3403
    contents_size = n_elements * element_size;
3404
  }
3405
  else { /* add up all the sizes */
3406
    element_size = 0;
3407
    contents_size = 0;
3408
    for (i = 0; i != n_elements; ++i)
3409
      contents_size += request2size(sizes[i]);
3410
  }
3411
 
3412
  size = contents_size + array_size;
3413
 
3414
  /*
3415
     Allocate the aggregate chunk.  First disable direct-mmapping so
3416
     malloc won't use it, since we would not be able to later
3417
     free/realloc space internal to a segregated mmap region.
3418
  */
3419
  was_enabled = use_mmap(m);
3420
  disable_mmap(m);
3421
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3422
  if (was_enabled)
3423
    enable_mmap(m);
3424
  if (mem == 0)
3425
    return 0;
3426
 
3427
  if (PREACTION(m)) return 0;
3428
  p = mem2chunk(mem);
3429
  remainder_size = chunksize(p);
3430
 
3431
  assert(!is_mmapped(p));
3432
 
3433
  if (opts & 0x2) {       /* optionally clear the elements */
3434
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3435
  }
3436
 
3437
  /* If not provided, allocate the pointer array as final part of chunk */
3438
  if (marray == 0) {
3439
    size_t  array_chunk_size;
3440
    array_chunk = chunk_plus_offset(p, contents_size);
3441
    array_chunk_size = remainder_size - contents_size;
3442
    marray = (void**) (chunk2mem(array_chunk));
3443
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3444
    remainder_size = contents_size;
3445
  }
3446
 
3447
  /* split out elements */
3448
  for (i = 0; ; ++i) {
3449
    marray[i] = chunk2mem(p);
3450
    if (i != n_elements-1) {
3451
      if (element_size != 0)
3452
        size = element_size;
3453
      else
3454
        size = request2size(sizes[i]);
3455
      remainder_size -= size;
3456
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3457
      p = chunk_plus_offset(p, size);
3458
    }
3459
    else { /* the final element absorbs any overallocation slop */
3460
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3461
      break;
3462
    }
3463
  }
3464
 
3465
#if DEBUG
3466
  if (marray != chunks) {
3467
    /* final element must have exactly exhausted chunk */
3468
    if (element_size != 0) {
3469
      assert(remainder_size == element_size);
3470
    }
3471
    else {
3472
      assert(remainder_size == request2size(sizes[i]));
3473
    }
3474
    check_inuse_chunk(m, mem2chunk(marray));
3475
  }
3476
  for (i = 0; i != n_elements; ++i)
3477
    check_inuse_chunk(m, mem2chunk(marray[i]));
3478
 
3479
#endif /* DEBUG */
3480
 
3481
  POSTACTION(m);
3482
  return marray;
3483
}
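/*
  Usage sketch (not part of the original sources; it assumes a default
  !ONLY_MSPACES build so the dl* entry points below are defined, and the
  name make_arrays is arbitrary).  It shows the typical ialloc pattern of
  carving a small header plus two arrays out of one contiguous allocation,
  while still being able to free each piece independently:

    static int make_arrays(size_t n, size_t** hdr, double** vals, int** tags)
    {
      size_t sizes[3];
      void** parts;
      sizes[0] = sizeof(size_t);          // small header
      sizes[1] = n * sizeof(double);      // values
      sizes[2] = n * sizeof(int);         // tags
      parts = dlindependent_comalloc(3, sizes, 0);
      if (parts == 0)
        return 0;                         // allocation failed
      *hdr  = (size_t*) parts[0];
      *vals = (double*) parts[1];
      *tags = (int*)    parts[2];
      dlfree(parts);    // when chunks==0 the pointer array is itself
                        // malloced and may be freed once copied out
      return 1;         // each of *hdr, *vals, *tags may later be dlfree'd
    }
*/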
3484
 
3485
 
3486
/* -------------------------- public routines ---------------------------- */
3487
 
3488
#if !ONLY_MSPACES
3489
 
3490
void* dlmalloc(size_t bytes) {
3491
  /*
3492
     Basic algorithm:
3493
     If a small request (< 256 bytes minus per-chunk overhead):
3494
       1. If one exists, use a remainderless chunk in associated smallbin.
3495
          (Remainderless means that there are too few excess bytes to
3496
          represent as a chunk.)
3497
       2. If it is big enough, use the dv chunk, which is normally the
3498
          chunk adjacent to the one used for the most recent small request.
3499
       3. If one exists, split the smallest available chunk in a bin,
3500
          saving remainder in dv.
3501
       4. If it is big enough, use the top chunk.
3502
       5. If available, get memory from system and use it
3503
     Otherwise, for a large request:
3504
       1. Find the smallest available binned chunk that fits, and use it
3505
          if it is better fitting than dv chunk, splitting if necessary.
3506
       2. If better fitting than any binned chunk, use the dv chunk.
3507
       3. If it is big enough, use the top chunk.
3508
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3509
       5. If available, get memory from system and use it
3510
 
3511
     The ugly goto's here ensure that postaction occurs along all paths.
3512
  */
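  /*
    Worked example (assuming the default configuration with 4-byte size_t
    and FOOTERS disabled, so CHUNK_OVERHEAD == 4): a request for 20 bytes
    gives nb = pad_request(20) = (20 + 4 + 7) & ~7 = 24 and
    idx = small_index(24) = 24 >> 3 = 3; a remainderless fit in smallbin 3
    returns a 24-byte chunk whose usable payload is 24 - 4 = 20 bytes.
  */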
3513
 
3514
  if (!PREACTION(gm)) {
3515
    void* mem;
3516
    size_t nb;
3517
    if (bytes <= MAX_SMALL_REQUEST) {
3518
      bindex_t idx;
3519
      binmap_t smallbits;
3520
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3521
      idx = small_index(nb);
3522
      smallbits = gm->smallmap >> idx;
3523
 
3524
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3525
        mchunkptr b, p;
3526
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3527
        b = smallbin_at(gm, idx);
3528
        p = b->fd;
3529
        assert(chunksize(p) == small_index2size(idx));
3530
        unlink_first_small_chunk(gm, b, p, idx);
3531
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3532
        mem = chunk2mem(p);
3533
        check_malloced_chunk(gm, mem, nb);
3534
        goto postaction;
3535
      }
3536
 
3537
      else if (nb > gm->dvsize) {
3538
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3539
          mchunkptr b, p, r;
3540
          size_t rsize;
3541
          bindex_t i;
3542
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3543
          binmap_t leastbit = least_bit(leftbits);
3544
          compute_bit2idx(leastbit, i);
3545
          b = smallbin_at(gm, i);
3546
          p = b->fd;
3547
          assert(chunksize(p) == small_index2size(i));
3548
          unlink_first_small_chunk(gm, b, p, i);
3549
          rsize = small_index2size(i) - nb;
3550
          /* Fit here cannot be remainderless if 4byte sizes */
3551
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3552
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3553
          else {
3554
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3555
            r = chunk_plus_offset(p, nb);
3556
            set_size_and_pinuse_of_free_chunk(r, rsize);
3557
            replace_dv(gm, r, rsize);
3558
          }
3559
          mem = chunk2mem(p);
3560
          check_malloced_chunk(gm, mem, nb);
3561
          goto postaction;
3562
        }
3563
 
3564
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3565
          check_malloced_chunk(gm, mem, nb);
3566
          goto postaction;
3567
        }
3568
      }
3569
    }
3570
    else if (bytes >= MAX_REQUEST)
3571
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3572
    else {
3573
      nb = pad_request(bytes);
3574
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3575
        check_malloced_chunk(gm, mem, nb);
3576
        goto postaction;
3577
      }
3578
    }
3579
 
3580
    if (nb <= gm->dvsize) {
3581
      size_t rsize = gm->dvsize - nb;
3582
      mchunkptr p = gm->dv;
3583
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3584
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3585
        gm->dvsize = rsize;
3586
        set_size_and_pinuse_of_free_chunk(r, rsize);
3587
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3588
      }
3589
      else { /* exhaust dv */
3590
        size_t dvs = gm->dvsize;
3591
        gm->dvsize = 0;
3592
        gm->dv = 0;
3593
        set_inuse_and_pinuse(gm, p, dvs);
3594
      }
3595
      mem = chunk2mem(p);
3596
      check_malloced_chunk(gm, mem, nb);
3597
      goto postaction;
3598
    }
3599
 
3600
    else if (nb < gm->topsize) { /* Split top */
3601
      size_t rsize = gm->topsize -= nb;
3602
      mchunkptr p = gm->top;
3603
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3604
      r->head = rsize | PINUSE_BIT;
3605
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3606
      mem = chunk2mem(p);
3607
      check_top_chunk(gm, gm->top);
3608
      check_malloced_chunk(gm, mem, nb);
3609
      goto postaction;
3610
    }
3611
 
3612
    mem = sys_alloc(gm, nb);
3613
 
3614
  postaction:
3615
    POSTACTION(gm);
3616
    return mem;
3617
  }
3618
 
3619
  return 0;
3620
}
3621
 
3622
void dlfree(void* mem) {
3623
  /*
3624
     Consolidate freed chunks with preceding or succeeding bordering
3625
     free chunks, if they exist, and then place in a bin.  Intermixed
3626
     with special cases for top, dv, mmapped chunks, and usage errors.
3627
  */
3628
 
3629
  if (mem != 0) {
3630
    mchunkptr p  = mem2chunk(mem);
3631
#if FOOTERS
3632
    mstate fm = get_mstate_for(p);
3633
    if (!ok_magic(fm)) {
3634
      USAGE_ERROR_ACTION(fm, p);
3635
      return;
3636
    }
3637
#else /* FOOTERS */
3638
#define fm gm
3639
#endif /* FOOTERS */
3640
    if (!PREACTION(fm)) {
3641
      check_inuse_chunk(fm, p);
3642
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3643
        size_t psize = chunksize(p);
3644
        mchunkptr next = chunk_plus_offset(p, psize);
3645
        if (!pinuse(p)) {
3646
          size_t prevsize = p->prev_foot;
3647
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3648
            prevsize &= ~IS_MMAPPED_BIT;
3649
            psize += prevsize + MMAP_FOOT_PAD;
3650
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3651
              fm->footprint -= psize;
3652
            goto postaction;
3653
          }
3654
          else {
3655
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3656
            psize += prevsize;
3657
            p = prev;
3658
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3659
              if (p != fm->dv) {
3660
                unlink_chunk(fm, p, prevsize);
3661
              }
3662
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3663
                fm->dvsize = psize;
3664
                set_free_with_pinuse(p, psize, next);
3665
                goto postaction;
3666
              }
3667
            }
3668
            else
3669
              goto erroraction;
3670
          }
3671
        }
3672
 
3673
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3674
          if (!cinuse(next)) {  /* consolidate forward */
3675
            if (next == fm->top) {
3676
              size_t tsize = fm->topsize += psize;
3677
              fm->top = p;
3678
              p->head = tsize | PINUSE_BIT;
3679
              if (p == fm->dv) {
3680
                fm->dv = 0;
3681
                fm->dvsize = 0;
3682
              }
3683
              if (should_trim(fm, tsize))
3684
                sys_trim(fm, 0);
3685
              goto postaction;
3686
            }
3687
            else if (next == fm->dv) {
3688
              size_t dsize = fm->dvsize += psize;
3689
              fm->dv = p;
3690
              set_size_and_pinuse_of_free_chunk(p, dsize);
3691
              goto postaction;
3692
            }
3693
            else {
3694
              size_t nsize = chunksize(next);
3695
              psize += nsize;
3696
              unlink_chunk(fm, next, nsize);
3697
              set_size_and_pinuse_of_free_chunk(p, psize);
3698
              if (p == fm->dv) {
3699
                fm->dvsize = psize;
3700
                goto postaction;
3701
              }
3702
            }
3703
          }
3704
          else
3705
            set_free_with_pinuse(p, psize, next);
3706
          insert_chunk(fm, p, psize);
3707
          check_free_chunk(fm, p);
3708
          goto postaction;
3709
        }
3710
      }
3711
    erroraction:
3712
      USAGE_ERROR_ACTION(fm, p);
3713
    postaction:
3714
      POSTACTION(fm);
3715
    }
3716
  }
3717
#if !FOOTERS
3718
#undef fm
3719
#endif /* FOOTERS */
3720
}
3721
 
3722
void* dlcalloc(size_t n_elements, size_t elem_size) {
3723
  void* mem;
3724
  size_t req = 0;
3725
  if (n_elements != 0) {
3726
    req = n_elements * elem_size;
3727
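    /*
      Overflow guard: if either operand needs more than 16 bits, verify
      the product by division; when both fit in 16 bits the product cannot
      overflow a 32-bit (or wider) size_t, so the division is skipped.
    */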
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3728
        (req / n_elements != elem_size))
3729
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3730
  }
3731
  mem = dlmalloc(req);
3732
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3733
    memset(mem, 0, req);
3734
  return mem;
3735
}
3736
 
3737
void* dlrealloc(void* oldmem, size_t bytes) {
3738
  if (oldmem == 0)
3739
    return dlmalloc(bytes);
3740
#ifdef REALLOC_ZERO_BYTES_FREES
3741
  if (bytes == 0) {
3742
    dlfree(oldmem);
3743
    return 0;
3744
  }
3745
#endif /* REALLOC_ZERO_BYTES_FREES */
3746
  else {
3747
#if ! FOOTERS
3748
    mstate m = gm;
3749
#else /* FOOTERS */
3750
    mstate m = get_mstate_for(mem2chunk(oldmem));
3751
    if (!ok_magic(m)) {
3752
      USAGE_ERROR_ACTION(m, oldmem);
3753
      return 0;
3754
    }
3755
#endif /* FOOTERS */
3756
    return internal_realloc(m, oldmem, bytes);
3757
  }
3758
}
3759
 
3760
void* dlmemalign(size_t alignment, size_t bytes) {
3761
  return internal_memalign(gm, alignment, bytes);
3762
}
3763
 
3764
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3765
                                 void* chunks[]) {
3766
  size_t sz = elem_size; /* serves as 1-element array */
3767
  return ialloc(gm, n_elements, &sz, 3, chunks);
3768
}
3769
 
3770
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3771
                                   void* chunks[]) {
3772
  return ialloc(gm, n_elements, sizes, 0, chunks);
3773
}
3774
 
3775
void* dlvalloc(size_t bytes) {
3776
  size_t pagesz;
3777
  init_mparams();
3778
  pagesz = mparams.page_size;
3779
  return dlmemalign(pagesz, bytes);
3780
}
3781
 
3782
void* dlpvalloc(size_t bytes) {
3783
  size_t pagesz;
3784
  init_mparams();
3785
  pagesz = mparams.page_size;
3786
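  /* With 4096-byte pages, for example, dlpvalloc(1) requests
     (1 + 4095) & ~4095 == 4096 bytes, returned page-aligned. */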
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3787
}
3788
 
3789
int dlmalloc_trim(size_t pad) {
3790
  int result = 0;
3791
  if (!PREACTION(gm)) {
3792
    result = sys_trim(gm, pad);
3793
    POSTACTION(gm);
3794
  }
3795
  return result;
3796
}
3797
 
3798
size_t dlmalloc_footprint(void) {
3799
  return gm->footprint;
3800
}
3801
 
3802
size_t dlmalloc_max_footprint(void) {
3803
  return gm->max_footprint;
3804
}
3805
 
3806
#if !NO_MALLINFO
3807
struct mallinfo dlmallinfo(void) {
3808
  return internal_mallinfo(gm);
3809
}
3810
#endif /* NO_MALLINFO */
3811
 
3812
void dlmalloc_stats() {
3813
  internal_malloc_stats(gm);
3814
}
3815
 
3816
size_t dlmalloc_usable_size(void* mem) {
3817
  if (mem != 0) {
3818
    mchunkptr p = mem2chunk(mem);
3819
    if (cinuse(p))
3820
      return chunksize(p) - overhead_for(p);
3821
  }
3822
  return 0;
3823
}
3824
 
3825
int dlmallopt(int param_number, int value) {
3826
  return change_mparam(param_number, value);
3827
}
3828
 
3829
#endif /* !ONLY_MSPACES */
3830
 
3831
/* ----------------------------- user mspaces ---------------------------- */
3832
 
3833
#if MSPACES
3834
 
3835
static mstate init_user_mstate(char* tbase, size_t tsize) {
3836
  size_t msize = pad_request(sizeof(struct malloc_state));
3837
  mchunkptr mn;
3838
  mchunkptr msp = align_as_chunk(tbase);
3839
  mstate m = (mstate)(chunk2mem(msp));
3840
  memset(m, 0, msize);
3841
  INITIAL_LOCK(&m->mutex);
3842
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
3843
  m->seg.base = m->least_addr = tbase;
3844
  m->seg.size = m->footprint = m->max_footprint = tsize;
3845
  m->magic = mparams.magic;
3846
  m->mflags = mparams.default_mflags;
3847
  disable_contiguous(m);
3848
  init_bins(m);
3849
  mn = next_chunk(mem2chunk(m));
3850
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
3851
  check_top_chunk(m, m->top);
3852
  return m;
3853
}
3854
 
3855
mspace create_mspace(size_t capacity, int locked) {
3856
  mstate m = 0;
3857
  size_t msize = pad_request(sizeof(struct malloc_state));
3858
  init_mparams(); /* Ensure pagesize etc initialized */
3859
 
3860
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3861
    size_t rs = ((capacity == 0)? mparams.granularity :
3862
                 (capacity + TOP_FOOT_SIZE + msize));
3863
    size_t tsize = granularity_align(rs);
3864
    char* tbase = (char*)(CALL_MMAP(tsize));
3865
    if (tbase != CMFAIL) {
3866
      m = init_user_mstate(tbase, tsize);
3867
      m->seg.sflags = IS_MMAPPED_BIT;
3868
      set_lock(m, locked);
3869
    }
3870
  }
3871
  return (mspace)m;
3872
}
3873
 
3874
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
3875
  mstate m = 0;
3876
  size_t msize = pad_request(sizeof(struct malloc_state));
3877
  init_mparams(); /* Ensure pagesize etc initialized */
3878
 
3879
  if (capacity > msize + TOP_FOOT_SIZE &&
3880
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3881
    m = init_user_mstate((char*)base, capacity);
3882
    m->seg.sflags = EXTERN_BIT;
3883
    set_lock(m, locked);
3884
  }
3885
  return (mspace)m;
3886
}
3887
 
3888
size_t destroy_mspace(mspace msp) {
3889
  size_t freed = 0;
3890
  mstate ms = (mstate)msp;
3891
  if (ok_magic(ms)) {
3892
    msegmentptr sp = &ms->seg;
3893
    while (sp != 0) {
3894
      char* base = sp->base;
3895
      size_t size = sp->size;
3896
      flag_t flag = sp->sflags;
3897
      sp = sp->next;
3898
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
3899
          CALL_MUNMAP(base, size) == 0)
3900
        freed += size;
3901
    }
3902
  }
3903
  else {
3904
    USAGE_ERROR_ACTION(ms,ms);
3905
  }
3906
  return freed;
3907
}
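/*
  Usage sketch (not part of the original sources; it assumes an MSPACES
  build so the routines above and below are compiled in, and the name
  run_task is arbitrary).  A typical pattern is to create a private
  mspace, satisfy a group of related allocations from it, and then
  release everything at once:

    void run_task(void)
    {
      mspace arena = create_mspace(0, 0);    // default capacity, no locking
      if (arena != 0) {
        char* buf = (char*) mspace_malloc(arena, 128);
        if (buf != 0) {
          // work with buf
          mspace_free(arena, buf);           // optional; destroy releases all
        }
        destroy_mspace(arena);               // unmaps every arena segment
      }
    }
*/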
3908
 
3909
/*
3910
  mspace versions of routines are near-clones of the global
3911
  versions. This is not so nice but better than the alternatives.
3912
*/
3913
 
3914
 
3915
void* mspace_malloc(mspace msp, size_t bytes) {
3916
  mstate ms = (mstate)msp;
3917
  if (!ok_magic(ms)) {
3918
    USAGE_ERROR_ACTION(ms,ms);
3919
    return 0;
3920
  }
3921
  if (!PREACTION(ms)) {
3922
    void* mem;
3923
    size_t nb;
3924
    if (bytes <= MAX_SMALL_REQUEST) {
3925
      bindex_t idx;
3926
      binmap_t smallbits;
3927
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3928
      idx = small_index(nb);
3929
      smallbits = ms->smallmap >> idx;
3930
 
3931
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3932
        mchunkptr b, p;
3933
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3934
        b = smallbin_at(ms, idx);
3935
        p = b->fd;
3936
        assert(chunksize(p) == small_index2size(idx));
3937
        unlink_first_small_chunk(ms, b, p, idx);
3938
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
3939
        mem = chunk2mem(p);
3940
        check_malloced_chunk(ms, mem, nb);
3941
        goto postaction;
3942
      }
3943
 
3944
      else if (nb > ms->dvsize) {
3945
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3946
          mchunkptr b, p, r;
3947
          size_t rsize;
3948
          bindex_t i;
3949
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3950
          binmap_t leastbit = least_bit(leftbits);
3951
          compute_bit2idx(leastbit, i);
3952
          b = smallbin_at(ms, i);
3953
          p = b->fd;
3954
          assert(chunksize(p) == small_index2size(i));
3955
          unlink_first_small_chunk(ms, b, p, i);
3956
          rsize = small_index2size(i) - nb;
3957
          /* Fit here cannot be remainderless if 4byte sizes */
3958
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3959
            set_inuse_and_pinuse(ms, p, small_index2size(i));
3960
          else {
3961
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3962
            r = chunk_plus_offset(p, nb);
3963
            set_size_and_pinuse_of_free_chunk(r, rsize);
3964
            replace_dv(ms, r, rsize);
3965
          }
3966
          mem = chunk2mem(p);
3967
          check_malloced_chunk(ms, mem, nb);
3968
          goto postaction;
3969
        }
3970
 
3971
        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
3972
          check_malloced_chunk(ms, mem, nb);
3973
          goto postaction;
3974
        }
3975
      }
3976
    }
3977
    else if (bytes >= MAX_REQUEST)
3978
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3979
    else {
3980
      nb = pad_request(bytes);
3981
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
3982
        check_malloced_chunk(ms, mem, nb);
3983
        goto postaction;
3984
      }
3985
    }
3986
 
3987
    if (nb <= ms->dvsize) {
3988
      size_t rsize = ms->dvsize - nb;
3989
      mchunkptr p = ms->dv;
3990
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3991
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
3992
        ms->dvsize = rsize;
3993
        set_size_and_pinuse_of_free_chunk(r, rsize);
3994
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3995
      }
3996
      else { /* exhaust dv */
3997
        size_t dvs = ms->dvsize;
3998
        ms->dvsize = 0;
3999
        ms->dv = 0;
4000
        set_inuse_and_pinuse(ms, p, dvs);
4001
      }
4002
      mem = chunk2mem(p);
4003
      check_malloced_chunk(ms, mem, nb);
4004
      goto postaction;
4005
    }
4006
 
4007
    else if (nb < ms->topsize) { /* Split top */
4008
      size_t rsize = ms->topsize -= nb;
4009
      mchunkptr p = ms->top;
4010
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4011
      r->head = rsize | PINUSE_BIT;
4012
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4013
      mem = chunk2mem(p);
4014
      check_top_chunk(ms, ms->top);
4015
      check_malloced_chunk(ms, mem, nb);
4016
      goto postaction;
4017
    }
4018
 
4019
    mem = sys_alloc(ms, nb);
4020
 
4021
  postaction:
4022
    POSTACTION(ms);
4023
    return mem;
4024
  }
4025
 
4026
  return 0;
4027
}
4028
 
4029
void mspace_free(mspace msp, void* mem) {
4030
  if (mem != 0) {
4031
    mchunkptr p  = mem2chunk(mem);
4032
#if FOOTERS
4033
    mstate fm = get_mstate_for(p);
4034
#else /* FOOTERS */
4035
    mstate fm = (mstate)msp;
4036
#endif /* FOOTERS */
4037
    if (!ok_magic(fm)) {
4038
      USAGE_ERROR_ACTION(fm, p);
4039
      return;
4040
    }
4041
    if (!PREACTION(fm)) {
4042
      check_inuse_chunk(fm, p);
4043
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4044
        size_t psize = chunksize(p);
4045
        mchunkptr next = chunk_plus_offset(p, psize);
4046
        if (!pinuse(p)) {
4047
          size_t prevsize = p->prev_foot;
4048
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
4049
            prevsize &= ~IS_MMAPPED_BIT;
4050
            psize += prevsize + MMAP_FOOT_PAD;
4051
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4052
              fm->footprint -= psize;
4053
            goto postaction;
4054
          }
4055
          else {
4056
            mchunkptr prev = chunk_minus_offset(p, prevsize);
4057
            psize += prevsize;
4058
            p = prev;
4059
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4060
              if (p != fm->dv) {
4061
                unlink_chunk(fm, p, prevsize);
4062
              }
4063
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4064
                fm->dvsize = psize;
4065
                set_free_with_pinuse(p, psize, next);
4066
                goto postaction;
4067
              }
4068
            }
4069
            else
4070
              goto erroraction;
4071
          }
4072
        }
4073
 
4074
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4075
          if (!cinuse(next)) {  /* consolidate forward */
4076
            if (next == fm->top) {
4077
              size_t tsize = fm->topsize += psize;
4078
              fm->top = p;
4079
              p->head = tsize | PINUSE_BIT;
4080
              if (p == fm->dv) {
4081
                fm->dv = 0;
4082
                fm->dvsize = 0;
4083
              }
4084
              if (should_trim(fm, tsize))
4085
                sys_trim(fm, 0);
4086
              goto postaction;
4087
            }
4088
            else if (next == fm->dv) {
4089
              size_t dsize = fm->dvsize += psize;
4090
              fm->dv = p;
4091
              set_size_and_pinuse_of_free_chunk(p, dsize);
4092
              goto postaction;
4093
            }
4094
            else {
4095
              size_t nsize = chunksize(next);
4096
              psize += nsize;
4097
              unlink_chunk(fm, next, nsize);
4098
              set_size_and_pinuse_of_free_chunk(p, psize);
4099
              if (p == fm->dv) {
4100
                fm->dvsize = psize;
4101
                goto postaction;
4102
              }
4103
            }
4104
          }
4105
          else
4106
            set_free_with_pinuse(p, psize, next);
4107
          insert_chunk(fm, p, psize);
4108
          check_free_chunk(fm, p);
4109
          goto postaction;
4110
        }
4111
      }
4112
    erroraction:
4113
      USAGE_ERROR_ACTION(fm, p);
4114
    postaction:
4115
      POSTACTION(fm);
4116
    }
4117
  }
4118
}
4119
 
4120
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4121
  void* mem;
4122
  size_t req = 0;
4123
  mstate ms = (mstate)msp;
4124
  if (!ok_magic(ms)) {
4125
    USAGE_ERROR_ACTION(ms,ms);
4126
    return 0;
4127
  }
4128
  if (n_elements != 0) {
4129
    req = n_elements * elem_size;
4130
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4131
        (req / n_elements != elem_size))
4132
      req = MAX_SIZE_T; /* force downstream failure on overflow */
4133
  }
4134
  mem = internal_malloc(ms, req);
4135
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4136
    memset(mem, 0, req);
4137
  return mem;
4138
}
4139
 
4140
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4141
  if (oldmem == 0)
4142
    return mspace_malloc(msp, bytes);
4143
#ifdef REALLOC_ZERO_BYTES_FREES
4144
  if (bytes == 0) {
4145
    mspace_free(msp, oldmem);
4146
    return 0;
4147
  }
4148
#endif /* REALLOC_ZERO_BYTES_FREES */
4149
  else {
4150
#if FOOTERS
4151
    mchunkptr p  = mem2chunk(oldmem);
4152
    mstate ms = get_mstate_for(p);
4153
#else /* FOOTERS */
4154
    mstate ms = (mstate)msp;
4155
#endif /* FOOTERS */
4156
    if (!ok_magic(ms)) {
4157
      USAGE_ERROR_ACTION(ms,ms);
4158
      return 0;
4159
    }
4160
    return internal_realloc(ms, oldmem, bytes);
4161
  }
4162
}
4163
 
4164
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4165
  mstate ms = (mstate)msp;
4166
  if (!ok_magic(ms)) {
4167
    USAGE_ERROR_ACTION(ms,ms);
4168
    return 0;
4169
  }
4170
  return internal_memalign(ms, alignment, bytes);
4171
}
4172
 
4173
void** mspace_independent_calloc(mspace msp, size_t n_elements,
4174
                                 size_t elem_size, void* chunks[]) {
4175
  size_t sz = elem_size; /* serves as 1-element array */
4176
  mstate ms = (mstate)msp;
4177
  if (!ok_magic(ms)) {
4178
    USAGE_ERROR_ACTION(ms,ms);
4179
    return 0;
4180
  }
4181
  return ialloc(ms, n_elements, &sz, 3, chunks);
4182
}
4183
 
4184
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4185
                                   size_t sizes[], void* chunks[]) {
4186
  mstate ms = (mstate)msp;
4187
  if (!ok_magic(ms)) {
4188
    USAGE_ERROR_ACTION(ms,ms);
4189
    return 0;
4190
  }
4191
  return ialloc(ms, n_elements, sizes, 0, chunks);
4192
}
4193
 
4194
int mspace_trim(mspace msp, size_t pad) {
4195
  int result = 0;
4196
  mstate ms = (mstate)msp;
4197
  if (ok_magic(ms)) {
4198
    if (!PREACTION(ms)) {
4199
      result = sys_trim(ms, pad);
4200
      POSTACTION(ms);
4201
    }
4202
  }
4203
  else {
4204
    USAGE_ERROR_ACTION(ms,ms);
4205
  }
4206
  return result;
4207
}
4208
 
4209
void mspace_malloc_stats(mspace msp) {
4210
  mstate ms = (mstate)msp;
4211
  if (ok_magic(ms)) {
4212
    internal_malloc_stats(ms);
4213
  }
4214
  else {
4215
    USAGE_ERROR_ACTION(ms,ms);
4216
  }
4217
}
4218
 
4219
size_t mspace_footprint(mspace msp) {
4220
  size_t result = 0;
4221
  mstate ms = (mstate)msp;
4222
  if (ok_magic(ms)) {
4223
    result = ms->footprint;
4224
  }
4225
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
4226
  return result;
4227
}
4228
 
4229
 
4230
size_t mspace_max_footprint(mspace msp) {
4231
  size_t result = 0;
4232
  mstate ms = (mstate)msp;
4233
  if (ok_magic(ms)) {
4234
    result = ms->max_footprint;
4235
  }
4236
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
4237
  return result;
4238
}
4239
 
4240
 
4241
#if !NO_MALLINFO
4242
struct mallinfo mspace_mallinfo(mspace msp) {
4243
  mstate ms = (mstate)msp;
4244
  if (!ok_magic(ms)) {
4245
    USAGE_ERROR_ACTION(ms,ms);
4246
  }
4247
  return internal_mallinfo(ms);
4248
}
4249
#endif /* NO_MALLINFO */
4250
 
4251
int mspace_mallopt(int param_number, int value) {
4252
  return change_mparam(param_number, value);
4253
}
4254
 
4255
#endif /* MSPACES */
4256
 
4257
/* -------------------- Alternative MORECORE functions ------------------- */
4258
 
4259
/*
4260
  Guidelines for creating a custom version of MORECORE:
4261
 
4262
  * For best performance, MORECORE should allocate in multiples of pagesize.
4263
  * MORECORE may allocate more memory than requested. (Or even less,
4264
      but this will usually result in a malloc failure.)
4265
  * MORECORE must not allocate memory when given argument zero, but
4266
      instead return one past the end address of memory from previous
4267
      nonzero call.
4268
  * For best performance, consecutive calls to MORECORE with positive
4269
      arguments should return increasing addresses, indicating that
4270
      space has been contiguously extended.
4271
  * Even though consecutive calls to MORECORE need not return contiguous
4272
      addresses, it must be OK for malloc'ed chunks to span multiple
4273
      regions in those cases where they do happen to be contiguous.
4274
  * MORECORE need not handle negative arguments -- it may instead
4275
      just return MFAIL when given negative arguments.
4276
      Negative arguments are always multiples of pagesize. MORECORE
4277
      must not misinterpret negative args as large positive unsigned
4278
      args. You can suppress all such calls from even occurring by defining
4279
      MORECORE_CANNOT_TRIM.  (Besides the contributed example below, a
      minimal static-arena sketch follows this comment block.)

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for classic (pre-OS X) Mac OS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine, osCleanupMem, that should somehow be
  called for cleanup upon program exit; one way to arrange this is noted
  after the example.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }
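
  Where a hosted C library is available, one way (illustrative only) to
  arrange that shutdown call is to register osCleanupMem with the standard
  atexit mechanism during program startup:

      // requires <stdlib.h>; runs osCleanupMem automatically at normal exit
      atexit(osCleanupMem);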

*/


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/