Rev Author Line No. Line
968 palkovsky 1
/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
12
 
13
* Quickstart
14
 
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
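
  As an illustrative (not prescriptive) example, a typical standalone
  build and link might look like:

    cc -O3 -c malloc.c
    cc -o myprog myprog.o malloc.o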
20
 
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
34
 
35
* Vital statistics:
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
43
 
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128 bytes), at the expense of using more space.
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
52
       and status information, and additional cross-check word
53
       if FOOTERS is defined.
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., number of extra bytes
61
       allocated than were requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
90
 
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
100
 
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
  else. And if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
129
 
130
* Overview of algorithms
131
 
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256 bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
157
 
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
162
 
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
167
 
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
201
 
202
 -------------------------  Compile-time options ---------------------------
203
 
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly cast.
207
 
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
218
 
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
222
 
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
225
 
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
230
 
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
235
 
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
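
  For example, with USE_DL_PREFIX defined, callers use the 'dl'-prefixed
  names instead of the standard ones (a sketch):

    void* p = dlmalloc(100);
    p = dlrealloc(p, 200);
    dlfree(p);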
243
 
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to be bypassed
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
265
 
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282
  which will usually make debugging easier.
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc is unable to
286
  return memory because there is none available.
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
290
 
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
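
  As a rough sketch only (the static arena, its size, and the names
  my_morecore, morecore_arena, morecore_used are illustrative), a
  custom MORECORE serving memory from a fixed buffer and never
  trimming might look like:

    static char morecore_arena[1 << 20];
    static size_t morecore_used = 0;
    void* my_morecore(intptr_t incr) {
      if (incr < 0 || morecore_used + (size_t)incr > sizeof(morecore_arena))
        return MFAIL;
      void* p = morecore_arena + morecore_used;
      morecore_used += (size_t)incr;
      return p;
    }

  together with "#define MORECORE my_morecore" and, because this
  version cannot release memory, "#define MORECORE_CANNOT_TRIM".
  (MFAIL, defined below, is the required failure return value.)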
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
309
  Setting it false when MORECORE is definitely non-contiguous saves the time
310
  and possibly wasted space it would otherwise take to discover this.
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
317
 
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
  able to unmap memory that may have been allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true, realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
331
 
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where the page size is determined using GetSystemInfo during
349
  initialization.
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
354
 
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
358
  those in this file.
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions from being called so
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
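
  For instance, assuming the usual int-valued mallopt interface (where
  -1 converts to MAX_SIZE_T), automatic trimming can be disabled at
  run time with:

    mallopt(M_TRIM_THRESHOLD, -1);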
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
438
 
439
*/
440
 
1656 cejka 441
/** @addtogroup libcmalloc malloc
442
 * @brief Malloc originally written by Doug Lea and ported to HelenOS.
443
 * @ingroup libc
444
 * @{
445
 */
446
/** @file
447
 */
448
 
449
 
999 palkovsky 450
#include <sys/types.h>  /* For size_t */
451
 
452
/** Non-default helenos customizations */
453
#define LACKS_FCNTL_H
454
#define LACKS_SYS_MMAN_H
968 palkovsky 455
#define LACKS_SYS_PARAM_H
999 palkovsky 456
#undef HAVE_MMAP
457
#define HAVE_MMAP 0
968 palkovsky 458
#define LACKS_ERRNO_H
999 palkovsky 459
/* Set errno? */
460
#undef MALLOC_FAILURE_ACTION
968 palkovsky 461
#define MALLOC_FAILURE_ACTION
462
 
463
/* The maximum possible size_t value has all bits set */
464
#define MAX_SIZE_T           (~(size_t)0)
465
 
466
#define ONLY_MSPACES 0
467
#define MSPACES 0
1663 vana 468
 
469
#ifdef MALLOC_ALIGNMENT_16
470
#define MALLOC_ALIGNMENT ((size_t)16U)
471
#else
968 palkovsky 472
#define MALLOC_ALIGNMENT ((size_t)8U)
1663 vana 473
#endif
474
 
968 palkovsky 475
#define FOOTERS 0
476
#define ABORT  abort()
477
#define ABORT_ON_ASSERT_FAILURE 1
478
#define PROCEED_ON_ERROR 0
1143 palkovsky 479
#define USE_LOCKS 1
968 palkovsky 480
#define INSECURE 0
999 palkovsky 481
#define HAVE_MMAP 0
482
 
968 palkovsky 483
#define MMAP_CLEARS 1
999 palkovsky 484
 
968 palkovsky 485
#define HAVE_MORECORE 1
999 palkovsky 486
#define MORECORE_CONTIGUOUS 1
968 palkovsky 487
#define MORECORE sbrk
488
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
999 palkovsky 489
 
968 palkovsky 490
#ifndef DEFAULT_TRIM_THRESHOLD
491
#ifndef MORECORE_CANNOT_TRIM
492
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
493
#else   /* MORECORE_CANNOT_TRIM */
494
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
495
#endif  /* MORECORE_CANNOT_TRIM */
496
#endif  /* DEFAULT_TRIM_THRESHOLD */
497
#ifndef DEFAULT_MMAP_THRESHOLD
498
#if HAVE_MMAP
499
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
500
#else   /* HAVE_MMAP */
501
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
502
#endif  /* HAVE_MMAP */
503
#endif  /* DEFAULT_MMAP_THRESHOLD */
504
#ifndef USE_BUILTIN_FFS
505
#define USE_BUILTIN_FFS 0
506
#endif  /* USE_BUILTIN_FFS */
507
#ifndef USE_DEV_RANDOM
508
#define USE_DEV_RANDOM 0
509
#endif  /* USE_DEV_RANDOM */
510
#ifndef NO_MALLINFO
511
#define NO_MALLINFO 0
512
#endif  /* NO_MALLINFO */
513
#ifndef MALLINFO_FIELD_TYPE
514
#define MALLINFO_FIELD_TYPE size_t
515
#endif  /* MALLINFO_FIELD_TYPE */
516
 
517
/*
518
  mallopt tuning options.  SVID/XPG defines four standard parameter
519
  numbers for mallopt, normally defined in malloc.h.  None of these
520
  are used in this malloc, so setting them has no effect. But this
521
  malloc does support the following options.
522
*/
523
 
524
#define M_TRIM_THRESHOLD     (-1)
525
#define M_GRANULARITY        (-2)
526
#define M_MMAP_THRESHOLD     (-3)
527
 
528
/*
529
  ========================================================================
530
  To make a fully customizable malloc.h header file, cut everything
531
  above this line, put into file malloc.h, edit to suit, and #include it
532
  on the next line, as well as in programs that use this malloc.
533
  ========================================================================
534
*/
535
 
985 palkovsky 536
#include "malloc.h"
968 palkovsky 537
 
538
/*------------------------------ internal #includes ---------------------- */
539
 
540
#include <stdio.h>       /* for printing in malloc_stats */
999 palkovsky 541
#include <string.h>
968 palkovsky 542
 
543
#ifndef LACKS_ERRNO_H
544
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
545
#endif /* LACKS_ERRNO_H */
546
#if FOOTERS
547
#include <time.h>        /* for magic initialization */
548
#endif /* FOOTERS */
549
#ifndef LACKS_STDLIB_H
550
#include <stdlib.h>      /* for abort() */
551
#endif /* LACKS_STDLIB_H */
552
#ifdef DEBUG
553
#if ABORT_ON_ASSERT_FAILURE
1113 palkovsky 554
#define assert(x) {if(!(x)) {printf(#x);ABORT;}}
968 palkovsky 555
#else /* ABORT_ON_ASSERT_FAILURE */
556
#include <assert.h>
557
#endif /* ABORT_ON_ASSERT_FAILURE */
558
#else  /* DEBUG */
559
#define assert(x)
560
#endif /* DEBUG */
561
#if USE_BUILTIN_FFS
562
#ifndef LACKS_STRINGS_H
563
#include <strings.h>     /* for ffs */
564
#endif /* LACKS_STRINGS_H */
565
#endif /* USE_BUILTIN_FFS */
566
#if HAVE_MMAP
567
#ifndef LACKS_SYS_MMAN_H
1867 jermar 568
#include <sys/mman.h>    /* for mmap */
968 palkovsky 569
#endif /* LACKS_SYS_MMAN_H */
570
#ifndef LACKS_FCNTL_H
571
#include <fcntl.h>
572
#endif /* LACKS_FCNTL_H */
573
#endif /* HAVE_MMAP */
574
#if HAVE_MORECORE
575
#ifndef LACKS_UNISTD_H
576
#include <unistd.h>     /* for sbrk */
577
#else /* LACKS_UNISTD_H */
578
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
579
extern void*     sbrk(ptrdiff_t);
580
#endif /* FreeBSD etc */
581
#endif /* LACKS_UNISTD_H */
582
#endif /* HAVE_MORECORE */
583
 
584
#ifndef WIN32
585
#ifndef malloc_getpagesize
586
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
587
#    ifndef _SC_PAGE_SIZE
588
#      define _SC_PAGE_SIZE _SC_PAGESIZE
589
#    endif
590
#  endif
591
#  ifdef _SC_PAGE_SIZE
592
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
593
#  else
594
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
595
       extern size_t getpagesize();
596
#      define malloc_getpagesize getpagesize()
597
#    else
598
#      ifdef WIN32 /* use supplied emulation of getpagesize */
599
#        define malloc_getpagesize getpagesize()
600
#      else
601
#        ifndef LACKS_SYS_PARAM_H
602
#          include <sys/param.h>
603
#        endif
604
#        ifdef EXEC_PAGESIZE
605
#          define malloc_getpagesize EXEC_PAGESIZE
606
#        else
607
#          ifdef NBPG
608
#            ifndef CLSIZE
609
#              define malloc_getpagesize NBPG
610
#            else
611
#              define malloc_getpagesize (NBPG * CLSIZE)
612
#            endif
613
#          else
614
#            ifdef NBPC
615
#              define malloc_getpagesize NBPC
616
#            else
617
#              ifdef PAGESIZE
618
#                define malloc_getpagesize PAGESIZE
619
#              else /* just guess */
620
#                define malloc_getpagesize ((size_t)4096U)
621
#              endif
622
#            endif
623
#          endif
624
#        endif
625
#      endif
626
#    endif
627
#  endif
628
#endif
629
#endif
630
 
631
/* ------------------- size_t and alignment properties -------------------- */
632
 
633
/* The byte and bit size of a size_t */
634
#define SIZE_T_SIZE         (sizeof(size_t))
635
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
636
 
637
/* Some constants coerced to size_t */
638
/* Annoying but necessary to avoid errors on some platforms */
639
#define SIZE_T_ZERO         ((size_t)0)
640
#define SIZE_T_ONE          ((size_t)1)
641
#define SIZE_T_TWO          ((size_t)2)
642
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
643
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
644
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
645
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
646
 
647
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
648
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
649
 
650
/* True if address a has acceptable alignment */
651
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
652
 
653
/* the number of bytes to offset an address to align it */
654
#define align_offset(A)\
655
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
656
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
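/*
   Worked example (with the default 8-byte MALLOC_ALIGNMENT, so
   CHUNK_ALIGN_MASK == 7): for an address A == 0x1003,
   ((size_t)A & 7) == 3, so align_offset(A) == (8 - 3) & 7 == 5,
   and A + 5 == 0x1008 is properly aligned.
*/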
657
 
658
/* -------------------------- MMAP preliminaries ------------------------- */
659
 
660
/*
661
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
662
   checks to fail so compiler optimizer can delete code rather than
663
   using so many "#if"s.
664
*/
665
 
666
 
667
/* MORECORE and MMAP must return MFAIL on failure */
668
#define MFAIL                ((void*)(MAX_SIZE_T))
669
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
670
 
671
#if !HAVE_MMAP
672
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
673
#define USE_MMAP_BIT         (SIZE_T_ZERO)
674
#define CALL_MMAP(s)         MFAIL
675
#define CALL_MUNMAP(a, s)    (-1)
676
#define DIRECT_MMAP(s)       MFAIL
677
 
678
#else /* HAVE_MMAP */
679
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
680
#define USE_MMAP_BIT         (SIZE_T_ONE)
681
 
682
#ifndef WIN32
683
#define CALL_MUNMAP(a, s)    munmap((a), (s))
684
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
685
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
686
#define MAP_ANONYMOUS        MAP_ANON
687
#endif /* MAP_ANON */
688
#ifdef MAP_ANONYMOUS
689
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
690
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
691
#else /* MAP_ANONYMOUS */
692
/*
693
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
694
   is unlikely to be needed, but is supplied just in case.
695
*/
696
#define MMAP_FLAGS           (MAP_PRIVATE)
697
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
698
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
699
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
700
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
701
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
702
#endif /* MAP_ANONYMOUS */
703
 
704
#define DIRECT_MMAP(s)       CALL_MMAP(s)
705
#else /* WIN32 */
706
 
707
/* Win32 MMAP via VirtualAlloc */
708
static void* win32mmap(size_t size) {
709
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
710
  return (ptr != 0)? ptr: MFAIL;
711
}
712
 
713
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
714
static void* win32direct_mmap(size_t size) {
715
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
716
                           PAGE_READWRITE);
717
  return (ptr != 0)? ptr: MFAIL;
718
}
719
 
720
/* This function supports releasing coalesced segments */
721
static int win32munmap(void* ptr, size_t size) {
722
  MEMORY_BASIC_INFORMATION minfo;
723
  char* cptr = ptr;
724
  while (size) {
725
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
726
      return -1;
727
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
728
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
729
      return -1;
730
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
731
      return -1;
732
    cptr += minfo.RegionSize;
733
    size -= minfo.RegionSize;
734
  }
735
  return 0;
736
}
737
 
738
#define CALL_MMAP(s)         win32mmap(s)
739
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
740
#define DIRECT_MMAP(s)       win32direct_mmap(s)
741
#endif /* WIN32 */
742
#endif /* HAVE_MMAP */
743
 
744
#if HAVE_MMAP && HAVE_MREMAP
745
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
746
#else  /* HAVE_MMAP && HAVE_MREMAP */
747
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
748
#endif /* HAVE_MMAP && HAVE_MREMAP */
749
 
750
#if HAVE_MORECORE
751
#define CALL_MORECORE(S)     MORECORE(S)
752
#else  /* HAVE_MORECORE */
753
#define CALL_MORECORE(S)     MFAIL
754
#endif /* HAVE_MORECORE */
755
 
756
/* mstate bit set if contiguous morecore disabled or failed */
757
#define USE_NONCONTIGUOUS_BIT (4U)
758
 
759
/* segment bit set in create_mspace_with_base */
760
#define EXTERN_BIT            (8U)
761
 
762
 
763
/* --------------------------- Lock preliminaries ------------------------ */
764
 
765
#if USE_LOCKS
766
 
767
/*
768
  When locks are defined, there are up to two global locks:
769
 
770
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
771
    MORECORE.  In many cases sys_alloc requires two calls, that should
772
    not be interleaved with calls by other threads.  This does not
773
    protect against direct calls to MORECORE by other threads not
774
    using this lock, so there is still code to cope as best we can with
775
    interference.
776
 
777
  * magic_init_mutex ensures that mparams.magic and other
778
    unique mparams values are initialized only once.
779
*/
780
 
781
/* By default use posix locks */
1143 palkovsky 782
#include <futex.h>
783
#define MLOCK_T atomic_t
784
#define INITIAL_LOCK(l)      futex_initialize(l, 1)
785
/* futex_down cannot fail, but can return different
786
 * retvals for OK
787
 */
788
#define ACQUIRE_LOCK(l)      ({futex_down(l);0;})
789
#define RELEASE_LOCK(l)      futex_up(l)
968 palkovsky 790
 
791
#if HAVE_MORECORE
1143 palkovsky 792
static MLOCK_T morecore_mutex = FUTEX_INITIALIZER;
968 palkovsky 793
#endif /* HAVE_MORECORE */
794
 
1143 palkovsky 795
static MLOCK_T magic_init_mutex = FUTEX_INITIALIZER;
968 palkovsky 796
 
797
 
798
#define USE_LOCK_BIT               (2U)
799
#else  /* USE_LOCKS */
800
#define USE_LOCK_BIT               (0U)
801
#define INITIAL_LOCK(l)
802
#endif /* USE_LOCKS */
803
 
804
#if USE_LOCKS && HAVE_MORECORE
805
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
806
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
807
#else /* USE_LOCKS && HAVE_MORECORE */
808
#define ACQUIRE_MORECORE_LOCK()
809
#define RELEASE_MORECORE_LOCK()
810
#endif /* USE_LOCKS && HAVE_MORECORE */
811
 
812
#if USE_LOCKS
813
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
814
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
815
#else  /* USE_LOCKS */
816
#define ACQUIRE_MAGIC_INIT_LOCK()
817
#define RELEASE_MAGIC_INIT_LOCK()
818
#endif /* USE_LOCKS */
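/*
  Usage sketch (illustrative, not code lifted from below): the system
  allocation paths bracket their MORECORE calls like

     ACQUIRE_MORECORE_LOCK();
     ... one or two CALL_MORECORE(...) requests ...
     RELEASE_MORECORE_LOCK();

  and the one-time mparams initialization is guarded by
  ACQUIRE_MAGIC_INIT_LOCK() / RELEASE_MAGIC_INIT_LOCK().  When
  USE_LOCKS is 0, all of these macros expand to nothing, so the
  unlocked build pays no overhead.
*/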
819
 
820
 
821
/* -----------------------  Chunk representations ------------------------ */
822
 
823
/*
824
  (The following includes lightly edited explanations by Colin Plumb.)
825
 
826
  The malloc_chunk declaration below is misleading (but accurate and
827
  necessary).  It declares a "view" into memory allowing access to
828
  necessary fields at known offsets from a given base.
829
 
830
  Chunks of memory are maintained using a `boundary tag' method as
831
  originally described by Knuth.  (See the paper by Paul Wilson
832
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
833
  techniques.)  Sizes of free chunks are stored both in the front of
834
  each chunk and at the end.  This makes consolidating fragmented
835
  chunks into bigger chunks fast.  The head fields also hold bits
836
  representing whether chunks are free or in use.
837
 
838
  Here are some pictures to make it clearer.  They are "exploded" to
839
  show that the state of a chunk can be thought of as extending from
840
  the high 31 bits of the head field of its header through the
841
  prev_foot and PINUSE_BIT bit of the following chunk header.
842
 
843
  A chunk that's in use looks like:
844
 
845
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
846
           | Size of previous chunk (if P = 1)                             |
847
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
848
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
849
         | Size of this chunk                                         1| +-+
850
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
851
         |                                                               |
852
         +-                                                             -+
853
         |                                                               |
854
         +-                                                             -+
855
         |                                                               :
856
         +-      size - sizeof(size_t) available payload bytes          -+
857
         :                                                               |
858
 chunk-> +-                                                             -+
859
         |                                                               |
860
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
861
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
862
       | Size of next chunk (may or may not be in use)               | +-+
863
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
864
 
865
    And if it's free, it looks like this:
866
 
867
   chunk-> +-                                                             -+
868
           | User payload (must be in use, or we would have merged!)       |
869
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
870
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
871
         | Size of this chunk                                         0| +-+
872
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
873
         | Next pointer                                                  |
874
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
875
         | Prev pointer                                                  |
876
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
877
         |                                                               :
878
         +-      size - sizeof(struct chunk) unused bytes               -+
879
         :                                                               |
880
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
881
         | Size of this chunk                                            |
882
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
883
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
884
       | Size of next chunk (must be in use, or we would have merged)| +-+
885
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
886
       |                                                               :
887
       +- User payload                                                -+
888
       :                                                               |
889
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
890
                                                                     |0|
891
                                                                     +-+
892
  Note that since we always merge adjacent free chunks, the chunks
893
  adjacent to a free chunk must be in use.
894
 
895
  Given a pointer to a chunk (which can be derived trivially from the
896
  payload pointer) we can, in O(1) time, find out whether the adjacent
897
  chunks are free, and if so, unlink them from the lists that they
898
  are on and merge them with the current chunk.
899
 
900
  Chunks always begin on even word boundaries, so the mem portion
901
  (which is returned to the user) is also on an even word boundary, and
902
  thus at least double-word aligned.
903
 
904
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
905
  chunk size (which is always a multiple of two words), is an in-use
906
  bit for the *previous* chunk.  If that bit is *clear*, then the
907
  word before the current chunk size contains the previous chunk
908
  size, and can be used to find the front of the previous chunk.
909
  The very first chunk allocated always has this bit set, preventing
910
  access to non-existent (or non-owned) memory. If pinuse is set for
911
  any given chunk, then you CANNOT determine the size of the
912
  previous chunk, and might even get a memory addressing fault when
913
  trying to do so.
914
 
915
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
916
  the chunk size redundantly records whether the current chunk is
917
  inuse. This redundancy enables usage checks within free and realloc,
918
  and reduces indirection when freeing and consolidating chunks.
919
 
920
  Each freshly allocated chunk must have both cinuse and pinuse set.
921
  That is, each allocated chunk borders either a previously allocated
922
  and still in-use chunk, or the base of its memory arena. This is
923
  ensured by making all allocations from the `lowest' part of any
924
  found chunk.  Further, no free chunk physically borders another one,
925
  so each free chunk is known to be preceded and followed by either
926
  inuse chunks or the ends of memory.
927
 
928
  Note that the `foot' of the current chunk is actually represented
929
  as the prev_foot of the NEXT chunk. This makes it easier to
930
  deal with alignments etc but can be very confusing when trying
931
  to extend or adapt this code.
932
 
933
  The exceptions to all this are
934
 
935
     1. The special chunk `top' is the top-most available chunk (i.e.,
936
        the one bordering the end of available memory). It is treated
937
        specially.  Top is never included in any bin, is used only if
938
        no other chunk is available, and is released back to the
939
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
940
        the top chunk is treated as larger (and thus less well
941
        fitting) than any other available chunk.  The top chunk
942
        doesn't update its trailing size field since there is no next
943
        contiguous chunk that would have to index off it. However,
944
        space is still allocated for it (TOP_FOOT_SIZE) to enable
945
        separation or merging when space is extended.
946
 
947
     2. Chunks allocated via mmap, which have the lowest-order bit
948
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
949
        PINUSE_BIT in their head fields.  Because they are allocated
950
        one-by-one, each must carry its own prev_foot field, which is
951
        also used to hold the offset this chunk has within its mmapped
952
        region, which is needed to preserve alignment. Each mmapped
953
        chunk is trailed by the first two fields of a fake next-chunk
954
        for sake of usage checks.
955
 
956
*/
957
 
958
struct malloc_chunk {
959
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
960
  size_t               head;       /* Size and inuse bits. */
961
  struct malloc_chunk* fd;         /* double links -- used only if free. */
962
  struct malloc_chunk* bk;
963
};
964
 
965
typedef struct malloc_chunk  mchunk;
966
typedef struct malloc_chunk* mchunkptr;
967
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
968
typedef unsigned int bindex_t;         /* Described below */
969
typedef unsigned int binmap_t;         /* Described below */
970
typedef unsigned int flag_t;           /* The type of various bit flag sets */
971
 
972
/* ------------------- Chunks sizes and alignments ----------------------- */
973
 
974
#define MCHUNK_SIZE         (sizeof(mchunk))
975
 
976
#if FOOTERS
977
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
978
#else /* FOOTERS */
979
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
980
#endif /* FOOTERS */
981
 
982
/* MMapped chunks need a second word of overhead ... */
983
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
984
/* ... and additional padding for fake next-chunk at foot */
985
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
986
 
987
/* The smallest size we can malloc is an aligned minimal chunk */
988
#define MIN_CHUNK_SIZE\
989
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
990
 
991
/* conversion from malloc headers to user pointers, and back */
992
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
993
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
994
/* chunk associated with aligned address A */
995
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
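/*
  For example, given a payload pointer mem returned to the user,
  mem2chunk(mem) steps back TWO_SIZE_T_SIZES bytes to the start of the
  malloc_chunk header, and chunk2mem(mem2chunk(mem)) == mem.
*/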
996
 
997
/* Bounds on request (not chunk) sizes. */
998
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
999
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1000
 
1001
/* pad request bytes into a usable size */
1002
#define pad_request(req) \
1003
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1004
 
1005
/* pad request, checking for minimum (but not maximum) */
1006
#define request2size(req) \
1007
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
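/*
  Worked example (assuming 4-byte size_t, the default 8-byte alignment,
  and FOOTERS == 0, so CHUNK_OVERHEAD == 4): pad_request(5) ==
  (5 + 4 + 7) & ~7 == 16, which also equals MIN_CHUNK_SIZE, so
  request2size(5) == 16.
*/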
1008
 
1009
 
1010
/* ------------------ Operations on head and foot fields ----------------- */
1011
 
1012
/*
1013
  The head field of a chunk is or'ed with PINUSE_BIT when previous
1014
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1015
  use. If the chunk was obtained with mmap, the prev_foot field has
1016
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1017
  mmapped region to the base of the chunk.
1018
*/
1019
 
1020
#define PINUSE_BIT          (SIZE_T_ONE)
1021
#define CINUSE_BIT          (SIZE_T_TWO)
1022
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1023
 
1024
/* Head value for fenceposts */
1025
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1026
 
1027
/* extraction of fields from head words */
1028
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1029
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1030
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
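/*
  For instance, an in-use chunk of 32 bytes whose predecessor is also
  in use has head == 32 | CINUSE_BIT | PINUSE_BIT == 35; chunksize()
  then recovers 32, and cinuse() and pinuse() are both nonzero.
*/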
1031
 
1032
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1033
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1034
 
1035
/* Treat space at ptr +/- offset as a chunk */
1036
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1037
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1038
 
1039
/* Ptr to next or previous physical malloc_chunk. */
1040
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1041
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1042
 
1043
/* extract next chunk's pinuse bit */
1044
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1045
 
1046
/* Get/set size at footer */
1047
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1048
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1049
 
1050
/* Set size, pinuse bit, and foot */
1051
#define set_size_and_pinuse_of_free_chunk(p, s)\
1052
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1053
 
1054
/* Set size, pinuse bit, foot, and clear next pinuse */
1055
#define set_free_with_pinuse(p, s, n)\
1056
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1057
 
1058
#define is_mmapped(p)\
1059
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1060
 
1061
/* Get the internal overhead associated with chunk p */
1062
#define overhead_for(p)\
1063
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1064
 
1065
/* Return true if malloced space is not necessarily cleared */
1066
#if MMAP_CLEARS
1067
#define calloc_must_clear(p) (!is_mmapped(p))
1068
#else /* MMAP_CLEARS */
1069
#define calloc_must_clear(p) (1)
1070
#endif /* MMAP_CLEARS */
1071
 
1072
/* ---------------------- Overlaid data structures ----------------------- */
1073
 
1074
/*
1075
  When chunks are not in use, they are treated as nodes of either
1076
  lists or trees.
1077
 
1078
  "Small"  chunks are stored in circular doubly-linked lists, and look
1079
  like this:
1080
 
1081
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1082
            |             Size of previous chunk                            |
1083
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1084
    `head:' |             Size of chunk, in bytes                         |P|
1085
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1086
            |             Forward pointer to next chunk in list             |
1087
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1088
            |             Back pointer to previous chunk in list            |
1089
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1090
            |             Unused space (may be 0 bytes long)                .
1091
            .                                                               .
1092
            .                                                               |
1093
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1094
    `foot:' |             Size of chunk, in bytes                           |
1095
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1096
 
1097
  Larger chunks are kept in a form of bitwise digital trees (aka
1098
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1099
  free chunks greater than 256 bytes, their size doesn't impose any
1100
  constraints on user chunk sizes.  Each node looks like:
1101
 
1102
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1103
            |             Size of previous chunk                            |
1104
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1105
    `head:' |             Size of chunk, in bytes                         |P|
1106
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1107
            |             Forward pointer to next chunk of same size        |
1108
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1109
            |             Back pointer to previous chunk of same size       |
1110
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1111
            |             Pointer to left child (child[0])                  |
1112
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1113
            |             Pointer to right child (child[1])                 |
1114
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1115
            |             Pointer to parent                                 |
1116
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1117
            |             bin index of this chunk                           |
1118
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1119
            |             Unused space                                      .
1120
            .                                                               |
1121
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1122
    `foot:' |             Size of chunk, in bytes                           |
1123
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1124
 
1125
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1126
  of the same size are arranged in a circularly-linked list, with only
1127
  the oldest chunk (the next to be used, in our FIFO ordering)
1128
  actually in the tree.  (Tree members are distinguished by a non-null
1129
  parent pointer.)  If a chunk with the same size as an existing node
1130
  is inserted, it is linked off the existing node using pointers that
1131
  work in the same way as fd/bk pointers of small chunks.
1132
 
1133
  Each tree contains a power of 2 sized range of chunk sizes (the
1134
  smallest is 0x100 <= x < 0x180), which is divided in half at each
1135
  tree level, with the chunks in the smaller half of the range (0x100
1136
  <= x < 0x140 for the top node) in the left subtree and the larger
1137
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1138
  done by inspecting individual bits.
1139
 
1140
  Using these rules, each node's left subtree contains all smaller
1141
  sizes than its right subtree.  However, the node at the root of each
1142
  subtree has no particular ordering relationship to either.  (The
1143
  dividing line between the subtree sizes is based on trie relation.)
1144
  If we remove the last chunk of a given size from the interior of the
1145
  tree, we need to replace it with a leaf node.  The tree ordering
1146
  rules permit a node to be replaced by any leaf below it.
1147
 
1148
  The smallest chunk in a tree (a common operation in a best-fit
1149
  allocator) can be found by walking a path to the leftmost leaf in
1150
  the tree.  Unlike a usual binary tree, where we follow left child
1151
  pointers until we reach a null, here we follow the right child
1152
  pointer any time the left one is null, until we reach a leaf with
1153
  both child pointers null. The smallest chunk in the tree will be
1154
  somewhere along that path.
1155
 
1156
  The worst case number of steps to add, find, or remove a node is
1157
  bounded by the number of bits differentiating chunks within
1158
  bins. Under current bin calculations, this ranges from 6 up to 21
1159
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1160
  is of course much better.
1161
*/
1162
 
1163
struct malloc_tree_chunk {
1164
  /* The first four fields must be compatible with malloc_chunk */
1165
  size_t                    prev_foot;
1166
  size_t                    head;
1167
  struct malloc_tree_chunk* fd;
1168
  struct malloc_tree_chunk* bk;
1169
 
1170
  struct malloc_tree_chunk* child[2];
1171
  struct malloc_tree_chunk* parent;
1172
  bindex_t                  index;
1173
};
1174
 
1175
typedef struct malloc_tree_chunk  tchunk;
1176
typedef struct malloc_tree_chunk* tchunkptr;
1177
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1178
 
1179
/* A little helper macro for trees */
1180
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
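/*
  A minimal, self-contained sketch (not dlmalloc code; the demo_* names
  are illustrative) of the "leftmost path" search described above:
  follow child[0] when present, otherwise child[1], and remember the
  smallest size seen -- the smallest chunk of the subtree lies somewhere
  on that path.
*/
#include <stddef.h>

struct demo_tnode {                      /* stand-in for malloc_tree_chunk */
  size_t             size;
  struct demo_tnode* child[2];
};

static struct demo_tnode* demo_smallest_in_tree(struct demo_tnode* t) {
  struct demo_tnode* best = t;
  while (t != NULL) {
    if (t->size < best->size)
      best = t;
    t = (t->child[0] != NULL)? t->child[0] : t->child[1];
  }
  return best;
}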
1181
 
1182
/* ----------------------------- Segments -------------------------------- */
1183
 
1184
/*
1185
  Each malloc space may include non-contiguous segments, held in a
1186
  list headed by an embedded malloc_segment record representing the
1187
  top-most space. Segments also include flags holding properties of
1188
  the space. Large chunks that are directly allocated by mmap are not
1189
  included in this list. They are instead independently created and
1190
  destroyed without otherwise keeping track of them.
1191
 
1192
  Segment management mainly comes into play for spaces allocated by
1193
  MMAP.  Any call to MMAP might or might not return memory that is
1194
  adjacent to an existing segment.  MORECORE normally contiguously
1195
  extends the current space, so this space is almost always adjacent,
1196
  which is simpler and faster to deal with. (This is why MORECORE is
1197
  used preferentially to MMAP when both are available -- see
1198
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1199
  hinting mechanisms (inconsistently) supported in various
1200
  implementations of unix mmap, or distinguish reserving from
1201
  committing memory. Instead, we just ask for space, and exploit
1202
  contiguity when we get it.  It is probably possible to do
1203
  better than this on some systems, but no general scheme seems
1204
  to be significantly better.
1205
 
1206
  Management entails a simpler variant of the consolidation scheme
1207
  used for chunks to reduce fragmentation -- new adjacent memory is
1208
  normally prepended or appended to an existing segment. However,
1209
  there are limitations compared to chunk consolidation that mostly
1210
  reflect the fact that segment processing is relatively infrequent
1211
  (occurring only when getting memory from system) and that we
1212
  don't expect to have huge numbers of segments:
1213
 
1214
  * Segments are not indexed, so traversal requires linear scans.  (It
1215
    would be possible to index these, but is not worth the extra
1216
    overhead and complexity for most programs on most platforms.)
1217
  * New segments are only appended to old ones when holding top-most
1218
    memory; if they cannot be prepended to others, they are held in
1219
    different segments.
1220
 
1221
  Except for the top-most segment of an mstate, each segment record
1222
  is kept at the tail of its segment. Segments are added by pushing
1223
  segment records onto the list headed by &mstate.seg for the
1224
  containing mstate.
1225
 
1226
  Segment flags control allocation/merge/deallocation policies:
1227
  * If EXTERN_BIT set, then we did not allocate this segment,
1228
    and so should not try to deallocate or merge with others.
1229
    (This currently holds only for the initial segment passed
1230
    into create_mspace_with_base.)
1231
  * If IS_MMAPPED_BIT set, the segment may be merged with
1232
    other surrounding mmapped segments and trimmed/de-allocated
1233
    using munmap.
1234
  * If neither bit is set, then the segment was obtained using
1235
    MORECORE so can be merged with surrounding MORECORE'd segments
1236
    and deallocated/trimmed using MORECORE with negative arguments.
1237
*/
1238
 
1239
struct malloc_segment {
1240
  char*        base;             /* base address */
1241
  size_t       size;             /* allocated size */
1242
  struct malloc_segment* next;   /* ptr to next segment */
1243
  flag_t       sflags;           /* mmap and extern flag */
1244
};
1245
 
1246
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1247
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1248
 
1249
typedef struct malloc_segment  msegment;
1250
typedef struct malloc_segment* msegmentptr;
1251
 
1252
/* ---------------------------- malloc_state ----------------------------- */
1253
 
1254
/*
1255
   A malloc_state holds all of the bookkeeping for a space.
1256
   The main fields are:
1257
 
1258
  Top
1259
    The topmost chunk of the currently active segment. Its size is
1260
    cached in topsize.  The actual size of topmost space is
1261
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1262
    fenceposts and segment records if necessary when getting more
1263
    space from the system.  The size at which to autotrim top is
1264
    cached from mparams in trim_check, except that it is disabled if
1265
    an autotrim fails.
1266
 
1267
  Designated victim (dv)
1268
    This is the preferred chunk for servicing small requests that
1269
    don't have exact fits.  It is normally the chunk split off most
1270
    recently to service another small request.  Its size is cached in
1271
    dvsize. The link fields of this chunk are not maintained since it
1272
    is not kept in a bin.
1273
 
1274
  SmallBins
1275
    An array of bin headers for free chunks.  These bins hold chunks
1276
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1277
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1278
    use in double-linked lists, each bin header acts as a malloc_chunk
1279
    pointing to the real first node, if it exists (else pointing to
1280
    itself).  This avoids special-casing for headers.  But to avoid
1281
    waste, we allocate only the fd/bk pointers of bins, and then use
1282
    repositioning tricks to treat these as the fields of a chunk.
1283
 
1284
  TreeBins
1285
    Treebins are pointers to the roots of trees holding a range of
1286
    sizes. There are 2 equally spaced treebins for each power of two
1287
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1288
    larger.
1289
 
1290
  Bin maps
1291
    There is one bit map for small bins ("smallmap") and one for
1292
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1293
    clears the bit when empty.  Bit operations are then used to avoid
1294
    bin-by-bin searching -- nearly all "search" is done without ever
1295
    looking at bins that won't be selected.  The bit maps
1296
    conservatively use 32 bits per map word, even on 64-bit systems.
1297
    For a good description of some of the bit-based techniques used
1298
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1299
    supplement at http://hackersdelight.org/). Many of these are
1300
    intended to reduce the branchiness of paths through malloc etc, as
1301
    well as to reduce the number of memory locations read or written.
1302
 
1303
  Segments
1304
    A list of segments headed by an embedded malloc_segment record
1305
    representing the initial space.
1306
 
1307
  Address check support
1308
    The least_addr field is the least address ever obtained from
1309
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1310
    than this are trapped (unless INSECURE is defined).
1311
 
1312
  Magic tag
1313
    A cross-check field that should always hold the same value as mparams.magic.
1314
 
1315
  Flags
1316
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1317
 
1318
  Statistics
1319
    Each space keeps track of current and maximum system memory
1320
    obtained via MORECORE or MMAP.
1321
 
1322
  Locking
1323
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1324
    around every public call using this mspace.
1325
*/
1326
 
1327
/* Bin types, widths and sizes */
1328
#define NSMALLBINS        (32U)
1329
#define NTREEBINS         (32U)
1330
#define SMALLBIN_SHIFT    (3U)
1331
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1332
#define TREEBIN_SHIFT     (8U)
1333
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1334
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1335
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1336
 
1337
struct malloc_state {
1338
  binmap_t   smallmap;
1339
  binmap_t   treemap;
1340
  size_t     dvsize;
1341
  size_t     topsize;
1342
  char*      least_addr;
1343
  mchunkptr  dv;
1344
  mchunkptr  top;
1345
  size_t     trim_check;
1346
  size_t     magic;
1347
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1348
  tbinptr    treebins[NTREEBINS];
1349
  size_t     footprint;
1350
  size_t     max_footprint;
1351
  flag_t     mflags;
1352
#if USE_LOCKS
1353
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1354
#endif /* USE_LOCKS */
1355
  msegment   seg;
1356
};
1357
 
1358
typedef struct malloc_state*    mstate;
1359
 
1360
/* ------------- Global malloc_state and malloc_params ------------------- */
1361
 
1362
/*
1363
  malloc_params holds global properties, including those that can be
1364
  dynamically set using mallopt. There is a single instance, mparams,
1365
  initialized in init_mparams.
1366
*/
1367
 
1368
struct malloc_params {
1369
  size_t magic;
1370
  size_t page_size;
1371
  size_t granularity;
1372
  size_t mmap_threshold;
1373
  size_t trim_threshold;
1374
  flag_t default_mflags;
1375
};
1376
 
1377
static struct malloc_params mparams;
1378
 
1379
/* The global malloc_state used for all non-"mspace" calls */
1380
static struct malloc_state _gm_;
1381
#define gm                 (&_gm_)
1382
#define is_global(M)       ((M) == &_gm_)
1383
#define is_initialized(M)  ((M)->top != 0)
1384
 
1385
/* -------------------------- system alloc setup ------------------------- */
1386
 
1387
/* Operations on mflags */
1388
 
1389
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1390
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1391
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1392
 
1393
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1394
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1395
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1396
 
1397
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1398
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1399
 
1400
#define set_lock(M,L)\
1401
 ((M)->mflags = (L)?\
1402
  ((M)->mflags | USE_LOCK_BIT) :\
1403
  ((M)->mflags & ~USE_LOCK_BIT))
1404
 
1405
/* page-align a size */
1406
#define page_align(S)\
1407
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1408
 
1409
/* granularity-align a size */
1410
#define granularity_align(S)\
1411
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1412
 
1413
#define is_page_aligned(S)\
1414
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1415
#define is_granularity_aligned(S)\
1416
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
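/*
  Self-contained sketch of the rounding performed by page_align() and
  granularity_align() above (demo_round_up is an illustrative name, not
  used elsewhere): both macros add the full alignment amount before
  masking, so an already-aligned size is still bumped to the next
  boundary.  'align' must be a power of two.
*/
#include <stddef.h>

static size_t demo_round_up(size_t s, size_t align) {
  return (s + align) & ~(align - (size_t)1);
}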
1417
 
1418
/*  True if segment S holds address A */
1419
#define segment_holds(S, A)\
1420
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1421
 
1422
/* Return segment holding given address */
1423
static msegmentptr segment_holding(mstate m, char* addr) {
1424
  msegmentptr sp = &m->seg;
1425
  for (;;) {
1426
    if (addr >= sp->base && addr < sp->base + sp->size)
1427
      return sp;
1428
    if ((sp = sp->next) == 0)
1429
      return 0;
1430
  }
1431
}
1432
 
1433
/* Return true if segment contains a segment link */
1434
static int has_segment_link(mstate m, msegmentptr ss) {
1435
  msegmentptr sp = &m->seg;
1436
  for (;;) {
1437
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1438
      return 1;
1439
    if ((sp = sp->next) == 0)
1440
      return 0;
1441
  }
1442
}
1443
 
1444
#ifndef MORECORE_CANNOT_TRIM
1445
#define should_trim(M,s)  ((s) > (M)->trim_check)
1446
#else  /* MORECORE_CANNOT_TRIM */
1447
#define should_trim(M,s)  (0)
1448
#endif /* MORECORE_CANNOT_TRIM */
1449
 
1450
/*
1451
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1452
  that may be needed to place segment records and fenceposts when new
1453
  noncontiguous segments are added.
1454
*/
1455
#define TOP_FOOT_SIZE\
1456
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1457
 
1458
 
1459
/* -------------------------------  Hooks -------------------------------- */
1460
 
1461
/*
1462
  PREACTION should be defined to return 0 on success, and nonzero on
1463
  failure. If you are not using locking, you can redefine these to do
1464
  anything you like.
1465
*/
1466
 
1467
#if USE_LOCKS
1468
 
1469
/* Ensure locks are initialized */
1470
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1471
 
1472
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1473
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1474
#else /* USE_LOCKS */
1475
 
1476
#ifndef PREACTION
1477
#define PREACTION(M) (0)
1478
#endif  /* PREACTION */
1479
 
1480
#ifndef POSTACTION
1481
#define POSTACTION(M)
1482
#endif  /* POSTACTION */
1483
 
1484
#endif /* USE_LOCKS */
1485
 
1486
/*
1487
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1488
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1489
  reallocs. The argument p is an address that might have triggered the
1490
  fault. It is ignored by the two predefined actions, but might be
1491
  useful in custom actions that try to help diagnose errors.
1492
*/
1493
 
1494
#if PROCEED_ON_ERROR
1495
 
1496
/* A count of the number of corruption errors causing resets */
1497
int malloc_corruption_error_count;
1498
 
1499
/* default corruption action */
1500
static void reset_on_error(mstate m);
1501
 
1502
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1503
#define USAGE_ERROR_ACTION(m, p)
1504
 
1505
#else /* PROCEED_ON_ERROR */
1506
 
1507
#ifndef CORRUPTION_ERROR_ACTION
1508
#define CORRUPTION_ERROR_ACTION(m) ABORT
1509
#endif /* CORRUPTION_ERROR_ACTION */
1510
 
1511
#ifndef USAGE_ERROR_ACTION
1512
#define USAGE_ERROR_ACTION(m,p) ABORT
1513
#endif /* USAGE_ERROR_ACTION */
1514
 
1515
#endif /* PROCEED_ON_ERROR */
1516
 
1517
/* -------------------------- Debugging setup ---------------------------- */
1518
 
1519
#if ! DEBUG
1520
 
1521
#define check_free_chunk(M,P)
1522
#define check_inuse_chunk(M,P)
1523
#define check_malloced_chunk(M,P,N)
1524
#define check_mmapped_chunk(M,P)
1525
#define check_malloc_state(M)
1526
#define check_top_chunk(M,P)
1527
 
1528
#else /* DEBUG */
1529
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1530
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1531
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1532
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1533
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1534
#define check_malloc_state(M)       do_check_malloc_state(M)
1535
 
1536
static void   do_check_any_chunk(mstate m, mchunkptr p);
1537
static void   do_check_top_chunk(mstate m, mchunkptr p);
1538
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1539
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1540
static void   do_check_free_chunk(mstate m, mchunkptr p);
1541
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1542
static void   do_check_tree(mstate m, tchunkptr t);
1543
static void   do_check_treebin(mstate m, bindex_t i);
1544
static void   do_check_smallbin(mstate m, bindex_t i);
1545
static void   do_check_malloc_state(mstate m);
1546
static int    bin_find(mstate m, mchunkptr x);
1547
static size_t traverse_and_check(mstate m);
1548
#endif /* DEBUG */
1549
 
1550
/* ---------------------------- Indexing Bins ---------------------------- */
1551
 
1552
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1553
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1554
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1555
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1556
 
1557
/* addressing by index. See above about smallbin repositioning */
1558
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1559
#define treebin_at(M,i)     (&((M)->treebins[i]))
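/*
  Sketch (illustrative only; demo_* names are not part of the allocator)
  of the small-bin indexing above: small chunks are spaced
  SMALLBIN_WIDTH (8) bytes apart, so the bin index is simply size >> 3,
  and index << 3 recovers the chunk size held by that bin.  For example,
  a 40-byte free chunk belongs in bin 5.
*/
#include <stddef.h>

static size_t demo_small_index(size_t chunk_size) { return chunk_size >> 3; }
static size_t demo_small_size(size_t bin_index)   { return bin_index << 3; }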
1560
 
1561
/* assign tree index for size S to variable I */
1562
#if defined(__GNUC__) && defined(i386)
1563
#define compute_tree_index(S, I)\
1564
{\
1565
  size_t X = S >> TREEBIN_SHIFT;\
1566
  if (X == 0)\
1567
    I = 0;\
1568
  else if (X > 0xFFFF)\
1569
    I = NTREEBINS-1;\
1570
  else {\
1571
    unsigned int K;\
2082 decky 1572
    asm("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
968 palkovsky 1573
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1574
  }\
1575
}
1576
#else /* GNUC */
1577
#define compute_tree_index(S, I)\
1578
{\
1579
  size_t X = S >> TREEBIN_SHIFT;\
1580
  if (X == 0)\
1581
    I = 0;\
1582
  else if (X > 0xFFFF)\
1583
    I = NTREEBINS-1;\
1584
  else {\
1585
    unsigned int Y = (unsigned int)X;\
1586
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1587
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1588
    N += K;\
1589
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1590
    K = 14 - N + ((Y <<= K) >> 15);\
1591
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1592
  }\
1593
}
1594
#endif /* GNUC */
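/*
  A portable, loop-based restatement (illustrative only, not used by the
  allocator) of what compute_tree_index computes: with TREEBIN_SHIFT == 8
  there are two bins per power of two, selected by the position of the
  highest set bit of (S >> 8) plus the next bit below it.
*/
#include <stddef.h>

static unsigned demo_tree_index(size_t s) {
  size_t   x = s >> 8;                 /* 8 == TREEBIN_SHIFT */
  unsigned k = 0;
  if (x == 0)
    return 0;
  if (x > 0xFFFF)
    return 31;                         /* NTREEBINS-1 */
  while ((x >> (k + 1)) != 0)          /* k = index of highest set bit of x */
    ++k;
  return (k << 1) + (unsigned)((s >> (k + 8 - 1)) & 1);
}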
1595
 
1596
/* Bit representing maximum resolved size in a treebin at i */
1597
#define bit_for_tree_index(i) \
1598
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1599
 
1600
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1601
#define leftshift_for_tree_index(i) \
1602
   ((i == NTREEBINS-1)? 0 : \
1603
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1604
 
1605
/* The size of the smallest chunk held in bin with index i */
1606
#define minsize_for_tree_index(i) \
1607
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1608
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1609
 
1610
 
1611
/* ------------------------ Operations on bin maps ----------------------- */
1612
 
1613
/* bit corresponding to given index */
1614
#define idx2bit(i)              ((binmap_t)(1) << (i))
1615
 
1616
/* Mark/Clear bits with given index */
1617
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1618
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1619
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1620
 
1621
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1622
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1623
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1624
 
1625
/* index corresponding to given bit */
1626
 
1627
#if defined(__GNUC__) && defined(i386)
1628
#define compute_bit2idx(X, I)\
1629
{\
1630
  unsigned int J;\
2082 decky 1631
  asm("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
968 palkovsky 1632
  I = (bindex_t)J;\
1633
}
1634
 
1635
#else /* GNUC */
1636
#if  USE_BUILTIN_FFS
1637
#define compute_bit2idx(X, I) I = ffs(X)-1
1638
 
1639
#else /* USE_BUILTIN_FFS */
1640
#define compute_bit2idx(X, I)\
1641
{\
1642
  unsigned int Y = X - 1;\
1643
  unsigned int K = Y >> (16-4) & 16;\
1644
  unsigned int N = K;        Y >>= K;\
1645
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1646
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1647
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1648
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1649
  I = (bindex_t)(N + Y);\
1650
}
1651
#endif /* USE_BUILTIN_FFS */
1652
#endif /* GNUC */
1653
 
1654
/* isolate the least set bit of a bitmap */
1655
#define least_bit(x)         ((x) & -(x))
1656
 
1657
/* mask with all bits to left of least bit of x on */
1658
#define left_bits(x)         ((x<<1) | -(x<<1))
1659
 
1660
/* mask with all bits to left of or equal to least bit of x on */
1661
#define same_or_left_bits(x) ((x) | -(x))
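/*
  Sketch (not dlmalloc code; demo_* names are illustrative) of how the
  bin maps are queried without scanning bin by bin: mask off bins below
  index i, isolate the least set bit (least_bit), and convert that bit
  back to an index (compute_bit2idx).  Assumes i < 32, matching the
  32-bit map words described earlier.
*/
#include <stdint.h>

static int demo_first_nonempty_bin(uint32_t binmap, unsigned i) {
  uint32_t mask = binmap & (~(uint32_t)0 << i);  /* bins at or above i */
  uint32_t bit;
  int idx = 0;
  if (mask == 0)
    return -1;                    /* no suitable bin marked non-empty */
  bit = mask & (~mask + 1u);      /* least_bit(): lowest set bit only */
  while ((bit >> idx) != 1u)      /* compute_bit2idx(): bit -> index  */
    ++idx;
  return idx;
}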
1662
 
1663
 
1664
/* ----------------------- Runtime Check Support ------------------------- */
1665
 
1666
/*
1667
  For security, the main invariant is that malloc/free/etc never
1668
  writes to a static address other than malloc_state, unless static
1669
  malloc_state itself has been corrupted, which cannot occur via
1670
  malloc (because of these checks). In essence this means that we
1671
  believe all pointers, sizes, maps etc held in malloc_state, but
1672
  check all of those linked or offset from other embedded data
1673
  structures.  These checks are interspersed with main code in a way
1674
  that tends to minimize their run-time cost.
1675
 
1676
  When FOOTERS is defined, in addition to range checking, we also
1677
  verify footer fields of inuse chunks, which can be used to guarantee
1678
  that the mstate controlling malloc/free is intact.  This is a
1679
  streamlined version of the approach described by William Robertson
1680
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1681
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1682
  of an inuse chunk holds the xor of its mstate and a random seed,
1683
  that is checked upon calls to free() and realloc().  This is
1684
  (probabilistically) unguessable from outside the program, but can be
1685
  computed by any code successfully malloc'ing any chunk, so does not
1686
  itself provide protection against code that has already broken
1687
  security through some other means.  Unlike Robertson et al, we
1688
  always dynamically check addresses of all offset chunks (previous,
1689
  next, etc). This turns out to be cheaper than relying on hashes.
1690
*/
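/*
  Minimal sketch of the footer scheme just described (demo_* names and
  the use of uintptr_t are illustrative; the real code stores a size_t
  via mark_inuse_foot/get_mstate_for below): the footer holds
  state ^ magic, so the owning state can be recovered and checked at
  free()/realloc() time without being stored in the clear.
*/
#include <stdint.h>

struct demo_state { int unused; };       /* stand-in for malloc_state */

static uintptr_t demo_make_foot(struct demo_state* st, uintptr_t magic) {
  return (uintptr_t)st ^ magic;          /* stored when a chunk is allocated */
}

static struct demo_state* demo_state_for_foot(uintptr_t foot, uintptr_t magic) {
  return (struct demo_state*)(foot ^ magic);  /* recovered at free/realloc */
}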
1691
 
1692
#if !INSECURE
1693
/* Check if address a is at least as high as any from MORECORE or MMAP */
1694
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1695
/* Check if address of next chunk n is higher than base chunk p */
1696
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1697
/* Check if p has its cinuse bit on */
1698
#define ok_cinuse(p)     cinuse(p)
1699
/* Check if p has its pinuse bit on */
1700
#define ok_pinuse(p)     pinuse(p)
1701
 
1702
#else /* !INSECURE */
1703
#define ok_address(M, a) (1)
1704
#define ok_next(b, n)    (1)
1705
#define ok_cinuse(p)     (1)
1706
#define ok_pinuse(p)     (1)
1707
#endif /* !INSECURE */
1708
 
1709
#if (FOOTERS && !INSECURE)
1710
/* Check if (alleged) mstate m has expected magic field */
1711
#define ok_magic(M)      ((M)->magic == mparams.magic)
1712
#else  /* (FOOTERS && !INSECURE) */
1713
#define ok_magic(M)      (1)
1714
#endif /* (FOOTERS && !INSECURE) */
1715
 
1716
 
1717
/* In gcc, use __builtin_expect to minimize impact of checks */
1718
#if !INSECURE
1719
#if defined(__GNUC__) && __GNUC__ >= 3
1720
#define RTCHECK(e)  __builtin_expect(e, 1)
1721
#else /* GNUC */
1722
#define RTCHECK(e)  (e)
1723
#endif /* GNUC */
1724
#else /* !INSECURE */
1725
#define RTCHECK(e)  (1)
1726
#endif /* !INSECURE */
1727
 
1728
/* macros to set up inuse chunks with or without footers */
1729
 
1730
#if !FOOTERS
1731
 
1732
#define mark_inuse_foot(M,p,s)
1733
 
1734
/* Set cinuse bit and pinuse bit of next chunk */
1735
#define set_inuse(M,p,s)\
1736
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1737
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1738
 
1739
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
1740
#define set_inuse_and_pinuse(M,p,s)\
1741
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1742
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1743
 
1744
/* Set size, cinuse and pinuse bit of this chunk */
1745
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1746
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
1747
 
1748
#else /* FOOTERS */
1749
 
1750
/* Set foot of inuse chunk to be xor of mstate and seed */
1751
#define mark_inuse_foot(M,p,s)\
1752
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
1753
 
1754
#define get_mstate_for(p)\
1755
  ((mstate)(((mchunkptr)((char*)(p) +\
1756
    (chunksize(p))))->prev_foot ^ mparams.magic))
1757
 
1758
#define set_inuse(M,p,s)\
1759
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1760
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
1761
  mark_inuse_foot(M,p,s))
1762
 
1763
#define set_inuse_and_pinuse(M,p,s)\
1764
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1765
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
1766
 mark_inuse_foot(M,p,s))
1767
 
1768
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1769
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1770
  mark_inuse_foot(M, p, s))
1771
 
1772
#endif /* !FOOTERS */
1773
 
1774
/* ---------------------------- setting mparams -------------------------- */
1775
 
1776
/* Initialize mparams */
1777
static int init_mparams(void) {
1778
  if (mparams.page_size == 0) {
1779
    size_t s;
1780
 
1781
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1782
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1783
#if MORECORE_CONTIGUOUS
1784
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1785
#else  /* MORECORE_CONTIGUOUS */
1786
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1787
#endif /* MORECORE_CONTIGUOUS */
1788
 
1789
#if (FOOTERS && !INSECURE)
1790
    {
1791
#if USE_DEV_RANDOM
1792
      int fd;
1793
      unsigned char buf[sizeof(size_t)];
1794
      /* Try to use /dev/urandom, else fall back on using time */
1795
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1796
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1797
        s = *((size_t *) buf);
1798
        close(fd);
1799
      }
1800
      else
1801
#endif /* USE_DEV_RANDOM */
1802
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1803
 
1804
      s |= (size_t)8U;    /* ensure nonzero */
1805
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1806
 
1807
    }
1808
#else /* (FOOTERS && !INSECURE) */
1809
    s = (size_t)0x58585858U;
1810
#endif /* (FOOTERS && !INSECURE) */
1811
    ACQUIRE_MAGIC_INIT_LOCK();
1812
    if (mparams.magic == 0) {
1813
      mparams.magic = s;
1814
      /* Set up lock for main malloc area */
1815
      INITIAL_LOCK(&gm->mutex);
1816
      gm->mflags = mparams.default_mflags;
1817
    }
1818
    RELEASE_MAGIC_INIT_LOCK();
1819
 
1820
#ifndef WIN32
1821
    mparams.page_size = malloc_getpagesize;
1822
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1823
                           DEFAULT_GRANULARITY : mparams.page_size);
1824
#else /* WIN32 */
1825
    {
1826
      SYSTEM_INFO system_info;
1827
      GetSystemInfo(&system_info);
1828
      mparams.page_size = system_info.dwPageSize;
1829
      mparams.granularity = system_info.dwAllocationGranularity;
1830
    }
1831
#endif /* WIN32 */
1832
 
1833
    /* Sanity-check configuration:
1834
       size_t must be unsigned and as wide as pointer type.
1835
       ints must be at least 4 bytes.
1836
       alignment must be at least 8.
1837
       Alignment, min chunk size, and page size must all be powers of 2.
1838
    */
1839
    if ((sizeof(size_t) != sizeof(char*)) ||
1840
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1841
        (sizeof(int) < 4)  ||
1842
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1843
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1844
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1845
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1846
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1847
      ABORT;
1848
  }
1849
  return 0;
1850
}
1851
 
1852
/* support for mallopt */
1853
static int change_mparam(int param_number, int value) {
1854
  size_t val = (size_t)value;
1855
  init_mparams();
1856
  switch(param_number) {
1857
  case M_TRIM_THRESHOLD:
1858
    mparams.trim_threshold = val;
1859
    return 1;
1860
  case M_GRANULARITY:
1861
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
1862
      mparams.granularity = val;
1863
      return 1;
1864
    }
1865
    else
1866
      return 0;
1867
  case M_MMAP_THRESHOLD:
1868
    mparams.mmap_threshold = val;
1869
    return 1;
1870
  default:
1871
    return 0;
1872
  }
1873
}
1874
 
1875
#if DEBUG
1876
/* ------------------------- Debugging Support --------------------------- */
1877
 
1878
/* Check properties of any chunk, whether free, inuse, mmapped etc  */
1879
static void do_check_any_chunk(mstate m, mchunkptr p) {
1880
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1881
  assert(ok_address(m, p));
1882
}
1883
 
1884
/* Check properties of top chunk */
1885
static void do_check_top_chunk(mstate m, mchunkptr p) {
1886
  msegmentptr sp = segment_holding(m, (char*)p);
1887
  size_t  sz = chunksize(p);
1888
  assert(sp != 0);
1889
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1890
  assert(ok_address(m, p));
1891
  assert(sz == m->topsize);
1892
  assert(sz > 0);
1893
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
1894
  assert(pinuse(p));
1895
  assert(!next_pinuse(p));
1896
}
1897
 
1898
/* Check properties of (inuse) mmapped chunks */
1899
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
1900
  size_t  sz = chunksize(p);
1901
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
1902
  assert(is_mmapped(p));
1903
  assert(use_mmap(m));
1904
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1905
  assert(ok_address(m, p));
1906
  assert(!is_small(sz));
1907
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
1908
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
1909
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
1910
}
1911
 
1912
/* Check properties of inuse chunks */
1913
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
1914
  do_check_any_chunk(m, p);
1915
  assert(cinuse(p));
1916
  assert(next_pinuse(p));
1917
  /* If not pinuse and not mmapped, previous chunk has OK offset */
1918
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
1919
  if (is_mmapped(p))
1920
    do_check_mmapped_chunk(m, p);
1921
}
1922
 
1923
/* Check properties of free chunks */
1924
static void do_check_free_chunk(mstate m, mchunkptr p) {
1925
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1926
  mchunkptr next = chunk_plus_offset(p, sz);
1927
  do_check_any_chunk(m, p);
1928
  assert(!cinuse(p));
1929
  assert(!next_pinuse(p));
1930
  assert (!is_mmapped(p));
1931
  if (p != m->dv && p != m->top) {
1932
    if (sz >= MIN_CHUNK_SIZE) {
1933
      assert((sz & CHUNK_ALIGN_MASK) == 0);
1934
      assert(is_aligned(chunk2mem(p)));
1935
      assert(next->prev_foot == sz);
1936
      assert(pinuse(p));
1937
      assert (next == m->top || cinuse(next));
1938
      assert(p->fd->bk == p);
1939
      assert(p->bk->fd == p);
1940
    }
1941
    else  /* markers are always of size SIZE_T_SIZE */
1942
      assert(sz == SIZE_T_SIZE);
1943
  }
1944
}
1945
 
1946
/* Check properties of malloced chunks at the point they are malloced */
1947
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
1948
  if (mem != 0) {
1949
    mchunkptr p = mem2chunk(mem);
1950
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1951
    do_check_inuse_chunk(m, p);
1952
    assert((sz & CHUNK_ALIGN_MASK) == 0);
1953
    assert(sz >= MIN_CHUNK_SIZE);
1954
    assert(sz >= s);
1955
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
1956
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
1957
  }
1958
}
1959
 
1960
/* Check a tree and its subtrees.  */
1961
static void do_check_tree(mstate m, tchunkptr t) {
1962
  tchunkptr head = 0;
1963
  tchunkptr u = t;
1964
  bindex_t tindex = t->index;
1965
  size_t tsize = chunksize(t);
1966
  bindex_t idx;
1967
  compute_tree_index(tsize, idx);
1968
  assert(tindex == idx);
1969
  assert(tsize >= MIN_LARGE_SIZE);
1970
  assert(tsize >= minsize_for_tree_index(idx));
1971
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
1972
 
1973
  do { /* traverse through chain of same-sized nodes */
1974
    do_check_any_chunk(m, ((mchunkptr)u));
1975
    assert(u->index == tindex);
1976
    assert(chunksize(u) == tsize);
1977
    assert(!cinuse(u));
1978
    assert(!next_pinuse(u));
1979
    assert(u->fd->bk == u);
1980
    assert(u->bk->fd == u);
1981
    if (u->parent == 0) {
1982
      assert(u->child[0] == 0);
1983
      assert(u->child[1] == 0);
1984
    }
1985
    else {
1986
      assert(head == 0); /* only one node on chain has parent */
1987
      head = u;
1988
      assert(u->parent != u);
1989
      assert (u->parent->child[0] == u ||
1990
              u->parent->child[1] == u ||
1991
              *((tbinptr*)(u->parent)) == u);
1992
      if (u->child[0] != 0) {
1993
        assert(u->child[0]->parent == u);
1994
        assert(u->child[0] != u);
1995
        do_check_tree(m, u->child[0]);
1996
      }
1997
      if (u->child[1] != 0) {
1998
        assert(u->child[1]->parent == u);
1999
        assert(u->child[1] != u);
2000
        do_check_tree(m, u->child[1]);
2001
      }
2002
      if (u->child[0] != 0 && u->child[1] != 0) {
2003
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2004
      }
2005
    }
2006
    u = u->fd;
2007
  } while (u != t);
2008
  assert(head != 0);
2009
}
2010
 
2011
/*  Check all the chunks in a treebin.  */
2012
static void do_check_treebin(mstate m, bindex_t i) {
2013
  tbinptr* tb = treebin_at(m, i);
2014
  tchunkptr t = *tb;
2015
  int empty = (m->treemap & (1U << i)) == 0;
2016
  if (t == 0)
2017
    assert(empty);
2018
  if (!empty)
2019
    do_check_tree(m, t);
2020
}
2021
 
2022
/*  Check all the chunks in a smallbin.  */
2023
static void do_check_smallbin(mstate m, bindex_t i) {
2024
  sbinptr b = smallbin_at(m, i);
2025
  mchunkptr p = b->bk;
2026
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2027
  if (p == b)
2028
    assert(empty);
2029
  if (!empty) {
2030
    for (; p != b; p = p->bk) {
2031
      size_t size = chunksize(p);
2032
      mchunkptr q;
2033
      /* each chunk claims to be free */
2034
      do_check_free_chunk(m, p);
2035
      /* chunk belongs in bin */
2036
      assert(small_index(size) == i);
2037
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2038
      /* chunk is followed by an inuse chunk */
2039
      q = next_chunk(p);
2040
      if (q->head != FENCEPOST_HEAD)
2041
        do_check_inuse_chunk(m, q);
2042
    }
2043
  }
2044
}
2045
 
2046
/* Find x in a bin. Used in other check functions. */
2047
static int bin_find(mstate m, mchunkptr x) {
2048
  size_t size = chunksize(x);
2049
  if (is_small(size)) {
2050
    bindex_t sidx = small_index(size);
2051
    sbinptr b = smallbin_at(m, sidx);
2052
    if (smallmap_is_marked(m, sidx)) {
2053
      mchunkptr p = b;
2054
      do {
2055
        if (p == x)
2056
          return 1;
2057
      } while ((p = p->fd) != b);
2058
    }
2059
  }
2060
  else {
2061
    bindex_t tidx;
2062
    compute_tree_index(size, tidx);
2063
    if (treemap_is_marked(m, tidx)) {
2064
      tchunkptr t = *treebin_at(m, tidx);
2065
      size_t sizebits = size << leftshift_for_tree_index(tidx);
2066
      while (t != 0 && chunksize(t) != size) {
2067
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2068
        sizebits <<= 1;
2069
      }
2070
      if (t != 0) {
2071
        tchunkptr u = t;
2072
        do {
2073
          if (u == (tchunkptr)x)
2074
            return 1;
2075
        } while ((u = u->fd) != t);
2076
      }
2077
    }
2078
  }
2079
  return 0;
2080
}
2081
 
2082
/* Traverse each chunk and check it; return total */
2083
static size_t traverse_and_check(mstate m) {
2084
  size_t sum = 0;
2085
  if (is_initialized(m)) {
2086
    msegmentptr s = &m->seg;
2087
    sum += m->topsize + TOP_FOOT_SIZE;
2088
    while (s != 0) {
2089
      mchunkptr q = align_as_chunk(s->base);
2090
      mchunkptr lastq = 0;
2091
      assert(pinuse(q));
2092
      while (segment_holds(s, q) &&
2093
             q != m->top && q->head != FENCEPOST_HEAD) {
2094
        sum += chunksize(q);
2095
        if (cinuse(q)) {
2096
          assert(!bin_find(m, q));
2097
          do_check_inuse_chunk(m, q);
2098
        }
2099
        else {
2100
          assert(q == m->dv || bin_find(m, q));
2101
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2102
          do_check_free_chunk(m, q);
2103
        }
2104
        lastq = q;
2105
        q = next_chunk(q);
2106
      }
2107
      s = s->next;
2108
    }
2109
  }
2110
  return sum;
2111
}
2112
 
2113
/* Check all properties of malloc_state. */
2114
static void do_check_malloc_state(mstate m) {
2115
  bindex_t i;
2116
  size_t total;
2117
  /* check bins */
2118
  for (i = 0; i < NSMALLBINS; ++i)
2119
    do_check_smallbin(m, i);
2120
  for (i = 0; i < NTREEBINS; ++i)
2121
    do_check_treebin(m, i);
2122
 
2123
  if (m->dvsize != 0) { /* check dv chunk */
2124
    do_check_any_chunk(m, m->dv);
2125
    assert(m->dvsize == chunksize(m->dv));
2126
    assert(m->dvsize >= MIN_CHUNK_SIZE);
2127
    assert(bin_find(m, m->dv) == 0);
2128
  }
2129
 
2130
  if (m->top != 0) {   /* check top chunk */
2131
    do_check_top_chunk(m, m->top);
2132
    assert(m->topsize == chunksize(m->top));
2133
    assert(m->topsize > 0);
2134
    assert(bin_find(m, m->top) == 0);
2135
  }
2136
 
2137
  total = traverse_and_check(m);
2138
  assert(total <= m->footprint);
2139
  assert(m->footprint <= m->max_footprint);
2140
}
2141
#endif /* DEBUG */
2142
 
2143
/* ----------------------------- statistics ------------------------------ */
2144
 
2145
#if !NO_MALLINFO
2146
static struct mallinfo internal_mallinfo(mstate m) {
2147
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2148
  if (!PREACTION(m)) {
2149
    check_malloc_state(m);
2150
    if (is_initialized(m)) {
2151
      size_t nfree = SIZE_T_ONE; /* top always free */
2152
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
2153
      size_t sum = mfree;
2154
      msegmentptr s = &m->seg;
2155
      while (s != 0) {
2156
        mchunkptr q = align_as_chunk(s->base);
2157
        while (segment_holds(s, q) &&
2158
               q != m->top && q->head != FENCEPOST_HEAD) {
2159
          size_t sz = chunksize(q);
2160
          sum += sz;
2161
          if (!cinuse(q)) {
2162
            mfree += sz;
2163
            ++nfree;
2164
          }
2165
          q = next_chunk(q);
2166
        }
2167
        s = s->next;
2168
      }
2169
 
2170
      nm.arena    = sum;
2171
      nm.ordblks  = nfree;
2172
      nm.hblkhd   = m->footprint - sum;
2173
      nm.usmblks  = m->max_footprint;
2174
      nm.uordblks = m->footprint - mfree;
2175
      nm.fordblks = mfree;
2176
      nm.keepcost = m->topsize;
2177
    }
2178
 
2179
    POSTACTION(m);
2180
  }
2181
  return nm;
2182
}
2183
#endif /* !NO_MALLINFO */
2184
 
2185
static void internal_malloc_stats(mstate m) {
2186
  if (!PREACTION(m)) {
2187
    size_t maxfp = 0;
2188
    size_t fp = 0;
2189
    size_t used = 0;
2190
    check_malloc_state(m);
2191
    if (is_initialized(m)) {
2192
      msegmentptr s = &m->seg;
2193
      maxfp = m->max_footprint;
2194
      fp = m->footprint;
2195
      used = fp - (m->topsize + TOP_FOOT_SIZE);
2196
 
2197
      while (s != 0) {
2198
        mchunkptr q = align_as_chunk(s->base);
2199
        while (segment_holds(s, q) &&
2200
               q != m->top && q->head != FENCEPOST_HEAD) {
2201
          if (!cinuse(q))
2202
            used -= chunksize(q);
2203
          q = next_chunk(q);
2204
        }
2205
        s = s->next;
2206
      }
2207
    }
2208
 
2209
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2210
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2211
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2212
 
2213
    POSTACTION(m);
2214
  }
2215
}
2216
 
2217
/* ----------------------- Operations on smallbins ----------------------- */
2218
 
2219
/*
2220
  Various forms of linking and unlinking are defined as macros.  Even
2221
  the ones for trees, which are very long but have very short typical
2222
  paths.  This is ugly but reduces reliance on inlining support of
2223
  compilers.
2224
*/
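/*
  Sketch (illustrative only; demo_* names are not dlmalloc identifiers)
  of the list discipline the smallbin macros below implement: each bin
  is a circular doubly linked list whose header acts as a sentinel node,
  so insertion and unlinking never special-case an empty bin.
*/
#include <stddef.h>

struct demo_node { struct demo_node *fd, *bk; };

static void demo_bin_init(struct demo_node* bin) {
  bin->fd = bin->bk = bin;               /* empty bin points at itself */
}

static void demo_bin_push(struct demo_node* bin, struct demo_node* p) {
  struct demo_node* f = bin->fd;         /* old first node (or the bin) */
  bin->fd = p;
  f->bk   = p;
  p->fd   = f;
  p->bk   = bin;
}

static void demo_bin_unlink(struct demo_node* p) {
  p->fd->bk = p->bk;
  p->bk->fd = p->fd;
}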
2225
 
2226
/* Link a free chunk into a smallbin  */
2227
#define insert_small_chunk(M, P, S) {\
2228
  bindex_t I  = small_index(S);\
2229
  mchunkptr B = smallbin_at(M, I);\
2230
  mchunkptr F = B;\
2231
  assert(S >= MIN_CHUNK_SIZE);\
2232
  if (!smallmap_is_marked(M, I))\
2233
    mark_smallmap(M, I);\
2234
  else if (RTCHECK(ok_address(M, B->fd)))\
2235
    F = B->fd;\
2236
  else {\
2237
    CORRUPTION_ERROR_ACTION(M);\
2238
  }\
2239
  B->fd = P;\
2240
  F->bk = P;\
2241
  P->fd = F;\
2242
  P->bk = B;\
2243
}
2244
 
2245
/* Unlink a chunk from a smallbin  */
2246
#define unlink_small_chunk(M, P, S) {\
2247
  mchunkptr F = P->fd;\
2248
  mchunkptr B = P->bk;\
2249
  bindex_t I = small_index(S);\
2250
  assert(P != B);\
2251
  assert(P != F);\
2252
  assert(chunksize(P) == small_index2size(I));\
2253
  if (F == B)\
2254
    clear_smallmap(M, I);\
2255
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2256
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2257
    F->bk = B;\
2258
    B->fd = F;\
2259
  }\
2260
  else {\
2261
    CORRUPTION_ERROR_ACTION(M);\
2262
  }\
2263
}
2264
 
2265
/* Unlink the first chunk from a smallbin */
2266
#define unlink_first_small_chunk(M, B, P, I) {\
2267
  mchunkptr F = P->fd;\
2268
  assert(P != B);\
2269
  assert(P != F);\
2270
  assert(chunksize(P) == small_index2size(I));\
2271
  if (B == F)\
2272
    clear_smallmap(M, I);\
2273
  else if (RTCHECK(ok_address(M, F))) {\
2274
    B->fd = F;\
2275
    F->bk = B;\
2276
  }\
2277
  else {\
2278
    CORRUPTION_ERROR_ACTION(M);\
2279
  }\
2280
}
2281
 
2282
/* Replace dv node, binning the old one */
2283
/* Used only when dvsize known to be small */
2284
#define replace_dv(M, P, S) {\
2285
  size_t DVS = M->dvsize;\
2286
  if (DVS != 0) {\
2287
    mchunkptr DV = M->dv;\
2288
    assert(is_small(DVS));\
2289
    insert_small_chunk(M, DV, DVS);\
2290
  }\
2291
  M->dvsize = S;\
2292
  M->dv = P;\
2293
}
2294
 
2295
/* ------------------------- Operations on trees ------------------------- */
2296
 
2297
/* Insert chunk into tree */
2298
#define insert_large_chunk(M, X, S) {\
2299
  tbinptr* H;\
2300
  bindex_t I;\
2301
  compute_tree_index(S, I);\
2302
  H = treebin_at(M, I);\
2303
  X->index = I;\
2304
  X->child[0] = X->child[1] = 0;\
2305
  if (!treemap_is_marked(M, I)) {\
2306
    mark_treemap(M, I);\
2307
    *H = X;\
2308
    X->parent = (tchunkptr)H;\
2309
    X->fd = X->bk = X;\
2310
  }\
2311
  else {\
2312
    tchunkptr T = *H;\
2313
    size_t K = S << leftshift_for_tree_index(I);\
2314
    for (;;) {\
2315
      if (chunksize(T) != S) {\
2316
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2317
        K <<= 1;\
2318
        if (*C != 0)\
2319
          T = *C;\
2320
        else if (RTCHECK(ok_address(M, C))) {\
2321
          *C = X;\
2322
          X->parent = T;\
2323
          X->fd = X->bk = X;\
2324
          break;\
2325
        }\
2326
        else {\
2327
          CORRUPTION_ERROR_ACTION(M);\
2328
          break;\
2329
        }\
2330
      }\
2331
      else {\
2332
        tchunkptr F = T->fd;\
2333
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2334
          T->fd = F->bk = X;\
2335
          X->fd = F;\
2336
          X->bk = T;\
2337
          X->parent = 0;\
2338
          break;\
2339
        }\
2340
        else {\
2341
          CORRUPTION_ERROR_ACTION(M);\
2342
          break;\
2343
        }\
2344
      }\
2345
    }\
2346
  }\
2347
}
2348
 
2349
/*
2350
  Unlink steps:
2351
 
2352
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2353
     and choose its bk node as its replacement.
2354
  2. If x was the last node of its size, but not a leaf node, it must
2355
     be replaced with a leaf node (not merely one with an open left or
2356
     right), to make sure that lefts and rights of descendants
2357
     correspond properly to bit masks.  We use the rightmost descendant
2358
     of x.  We could use any other leaf, but this is easy to locate and
2359
     tends to counteract removal of leftmosts elsewhere, and so keeps
2360
     paths shorter than minimally guaranteed.  This doesn't loop much
2361
     because on average a node in a tree is near the bottom.
2362
  3. If x is the base of a chain (i.e., has parent links) relink
2363
     x's parent and children to x's replacement (or null if none).
2364
*/
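/*
  Sketch of step 2 above (illustrative only; demo_tc is a hypothetical
  stand-in for malloc_tree_chunk): walk down preferring the right child
  and falling back to the left one, which is how the macro below picks a
  leaf to take x's place.
*/
#include <stddef.h>

struct demo_tc { struct demo_tc* child[2]; };

static struct demo_tc* demo_rightmost_leaf(struct demo_tc* x) {
  struct demo_tc* r = x;
  while (r->child[1] != NULL || r->child[0] != NULL)
    r = (r->child[1] != NULL)? r->child[1] : r->child[0];
  return r;
}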
2365
 
2366
#define unlink_large_chunk(M, X) {\
2367
  tchunkptr XP = X->parent;\
2368
  tchunkptr R;\
2369
  if (X->bk != X) {\
2370
    tchunkptr F = X->fd;\
2371
    R = X->bk;\
2372
    if (RTCHECK(ok_address(M, F))) {\
2373
      F->bk = R;\
2374
      R->fd = F;\
2375
    }\
2376
    else {\
2377
      CORRUPTION_ERROR_ACTION(M);\
2378
    }\
2379
  }\
2380
  else {\
2381
    tchunkptr* RP;\
2382
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2383
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2384
      tchunkptr* CP;\
2385
      while ((*(CP = &(R->child[1])) != 0) ||\
2386
             (*(CP = &(R->child[0])) != 0)) {\
2387
        R = *(RP = CP);\
2388
      }\
2389
      if (RTCHECK(ok_address(M, RP)))\
2390
        *RP = 0;\
2391
      else {\
2392
        CORRUPTION_ERROR_ACTION(M);\
2393
      }\
2394
    }\
2395
  }\
2396
  if (XP != 0) {\
2397
    tbinptr* H = treebin_at(M, X->index);\
2398
    if (X == *H) {\
2399
      if ((*H = R) == 0) \
2400
        clear_treemap(M, X->index);\
2401
    }\
2402
    else if (RTCHECK(ok_address(M, XP))) {\
2403
      if (XP->child[0] == X) \
2404
        XP->child[0] = R;\
2405
      else \
2406
        XP->child[1] = R;\
2407
    }\
2408
    else\
2409
      CORRUPTION_ERROR_ACTION(M);\
2410
    if (R != 0) {\
2411
      if (RTCHECK(ok_address(M, R))) {\
2412
        tchunkptr C0, C1;\
2413
        R->parent = XP;\
2414
        if ((C0 = X->child[0]) != 0) {\
2415
          if (RTCHECK(ok_address(M, C0))) {\
2416
            R->child[0] = C0;\
2417
            C0->parent = R;\
2418
          }\
2419
          else\
2420
            CORRUPTION_ERROR_ACTION(M);\
2421
        }\
2422
        if ((C1 = X->child[1]) != 0) {\
2423
          if (RTCHECK(ok_address(M, C1))) {\
2424
            R->child[1] = C1;\
2425
            C1->parent = R;\
2426
          }\
2427
          else\
2428
            CORRUPTION_ERROR_ACTION(M);\
2429
        }\
2430
      }\
2431
      else\
2432
        CORRUPTION_ERROR_ACTION(M);\
2433
    }\
2434
  }\
2435
}
2436
 
2437
/* Relays to large vs small bin operations */
2438
 
2439
#define insert_chunk(M, P, S)\
2440
  if (is_small(S)) insert_small_chunk(M, P, S)\
2441
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2442
 
2443
#define unlink_chunk(M, P, S)\
2444
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2445
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2446
 
2447
 
2448
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2449
 
2450
#if ONLY_MSPACES
2451
#define internal_malloc(m, b) mspace_malloc(m, b)
2452
#define internal_free(m, mem) mspace_free(m,mem);
2453
#else /* ONLY_MSPACES */
2454
#if MSPACES
2455
#define internal_malloc(m, b)\
2456
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2457
#define internal_free(m, mem)\
2458
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2459
#else /* MSPACES */
2460
#define internal_malloc(m, b) dlmalloc(b)
2461
#define internal_free(m, mem) dlfree(mem)
2462
#endif /* MSPACES */
2463
#endif /* ONLY_MSPACES */
2464
 
2465
/* -----------------------  Direct-mmapping chunks ----------------------- */
2466
 
2467
/*
2468
  Directly mmapped chunks are set up with an offset to the start of
2469
  the mmapped region stored in the prev_foot field of the chunk. This
2470
  allows reconstruction of the required argument to MUNMAP when freed,
2471
  and also allows adjustment of the returned chunk to meet alignment
2472
  requirements (especially in memalign).  There is also enough space
2473
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2474
  the PINUSE bit so frees can be checked.
2475
*/
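/*
  Sketch (not dlmalloc code; demo_* names are illustrative) of the
  bookkeeping just described: the offset from the start of the mapping
  is kept in prev_foot, so the original base address can be rebuilt for
  munmap when the chunk is freed.
*/
#include <stddef.h>

struct demo_mchunk { size_t prev_foot; size_t head; };

static char* demo_mmap_base(struct demo_mchunk* p, size_t is_mmapped_bit) {
  size_t offset = p->prev_foot & ~is_mmapped_bit;  /* saved at allocation time */
  return (char*)p - offset;                        /* base address for munmap  */
}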
2476
 
2477
/* Malloc using mmap */
2478
static void* mmap_alloc(mstate m, size_t nb) {
2479
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2480
  if (mmsize > nb) {     /* Check for wrap around 0 */
2481
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2482
    if (mm != CMFAIL) {
2483
      size_t offset = align_offset(chunk2mem(mm));
2484
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2485
      mchunkptr p = (mchunkptr)(mm + offset);
2486
      p->prev_foot = offset | IS_MMAPPED_BIT;
2487
      (p)->head = (psize|CINUSE_BIT);
2488
      mark_inuse_foot(m, p, psize);
2489
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2490
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2491
 
2492
      if (mm < m->least_addr)
2493
        m->least_addr = mm;
2494
      if ((m->footprint += mmsize) > m->max_footprint)
2495
        m->max_footprint = m->footprint;
2496
      assert(is_aligned(chunk2mem(p)));
2497
      check_mmapped_chunk(m, p);
2498
      return chunk2mem(p);
2499
    }
2500
  }
2501
  return 0;
2502
}
2503
 
2504
/* Realloc using mmap */
2505
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2506
  size_t oldsize = chunksize(oldp);
2507
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2508
    return 0;
2509
  /* Keep old chunk if big enough but not too big */
2510
  if (oldsize >= nb + SIZE_T_SIZE &&
2511
      (oldsize - nb) <= (mparams.granularity << 1))
2512
    return oldp;
2513
  else {
2514
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2515
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2516
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2517
                                         CHUNK_ALIGN_MASK);
2518
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2519
                                  oldmmsize, newmmsize, 1);
2520
    if (cp != CMFAIL) {
2521
      mchunkptr newp = (mchunkptr)(cp + offset);
2522
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2523
      newp->head = (psize|CINUSE_BIT);
2524
      mark_inuse_foot(m, newp, psize);
2525
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2526
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2527
 
2528
      if (cp < m->least_addr)
2529
        m->least_addr = cp;
2530
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2531
        m->max_footprint = m->footprint;
2532
      check_mmapped_chunk(m, newp);
2533
      return newp;
2534
    }
2535
  }
2536
  return 0;
2537
}
2538
 
2539
/* -------------------------- mspace management -------------------------- */
2540
 
2541
/* Initialize top chunk and its size */
2542
static void init_top(mstate m, mchunkptr p, size_t psize) {
2543
  /* Ensure alignment */
2544
  size_t offset = align_offset(chunk2mem(p));
2545
  p = (mchunkptr)((char*)p + offset);
2546
  psize -= offset;
2547
 
2548
  m->top = p;
2549
  m->topsize = psize;
2550
  p->head = psize | PINUSE_BIT;
2551
  /* set size of fake trailing chunk holding overhead space only once */
2552
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2553
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2554
}
2555
 
2556
/* Initialize bins for a new mstate that is otherwise zeroed out */
2557
static void init_bins(mstate m) {
2558
  /* Establish circular links for smallbins */
2559
  bindex_t i;
2560
  for (i = 0; i < NSMALLBINS; ++i) {
2561
    sbinptr bin = smallbin_at(m,i);
2562
    bin->fd = bin->bk = bin;
2563
  }
2564
}
2565
 
2566
#if PROCEED_ON_ERROR
2567
 
2568
/* default corruption action */
2569
static void reset_on_error(mstate m) {
2570
  int i;
2571
  ++malloc_corruption_error_count;
2572
  /* Reinitialize fields to forget about all memory */
2573
  m->smallbins = m->treebins = 0;
2574
  m->dvsize = m->topsize = 0;
2575
  m->seg.base = 0;
2576
  m->seg.size = 0;
2577
  m->seg.next = 0;
2578
  m->top = m->dv = 0;
2579
  for (i = 0; i < NTREEBINS; ++i)
2580
    *treebin_at(m, i) = 0;
2581
  init_bins(m);
2582
}
2583
#endif /* PROCEED_ON_ERROR */
2584
 
2585
/* Allocate chunk and prepend remainder with chunk in successor base. */
2586
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2587
                           size_t nb) {
2588
  mchunkptr p = align_as_chunk(newbase);
2589
  mchunkptr oldfirst = align_as_chunk(oldbase);
2590
  size_t psize = (char*)oldfirst - (char*)p;
2591
  mchunkptr q = chunk_plus_offset(p, nb);
2592
  size_t qsize = psize - nb;
2593
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2594
 
2595
  assert((char*)oldfirst > (char*)q);
2596
  assert(pinuse(oldfirst));
2597
  assert(qsize >= MIN_CHUNK_SIZE);
2598
 
2599
  /* consolidate remainder with first chunk of old base */
2600
  if (oldfirst == m->top) {
2601
    size_t tsize = m->topsize += qsize;
2602
    m->top = q;
2603
    q->head = tsize | PINUSE_BIT;
2604
    check_top_chunk(m, q);
2605
  }
2606
  else if (oldfirst == m->dv) {
2607
    size_t dsize = m->dvsize += qsize;
2608
    m->dv = q;
2609
    set_size_and_pinuse_of_free_chunk(q, dsize);
2610
  }
2611
  else {
2612
    if (!cinuse(oldfirst)) {
2613
      size_t nsize = chunksize(oldfirst);
2614
      unlink_chunk(m, oldfirst, nsize);
2615
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2616
      qsize += nsize;
2617
    }
2618
    set_free_with_pinuse(q, qsize, oldfirst);
2619
    insert_chunk(m, q, qsize);
2620
    check_free_chunk(m, q);
2621
  }
2622
 
2623
  check_malloced_chunk(m, chunk2mem(p), nb);
2624
  return chunk2mem(p);
2625
}
2626
 
2627
 
2628
/* Add a segment to hold a new noncontiguous region */
2629
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2630
  /* Determine locations and sizes of segment, fenceposts, old top */
2631
  char* old_top = (char*)m->top;
2632
  msegmentptr oldsp = segment_holding(m, old_top);
2633
  char* old_end = oldsp->base + oldsp->size;
2634
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2635
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2636
  size_t offset = align_offset(chunk2mem(rawsp));
2637
  char* asp = rawsp + offset;
2638
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2639
  mchunkptr sp = (mchunkptr)csp;
2640
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2641
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2642
  mchunkptr p = tnext;
2643
  int nfences = 0;
2644
 
2645
  /* reset top to new space */
2646
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2647
 
2648
  /* Set up segment record */
2649
  assert(is_aligned(ss));
2650
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2651
  *ss = m->seg; /* Push current record */
2652
  m->seg.base = tbase;
2653
  m->seg.size = tsize;
2654
  m->seg.sflags = mmapped;
2655
  m->seg.next = ss;
2656
 
2657
  /* Insert trailing fenceposts */
2658
  for (;;) {
2659
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2660
    p->head = FENCEPOST_HEAD;
2661
    ++nfences;
2662
    if ((char*)(&(nextp->head)) < old_end)
2663
      p = nextp;
2664
    else
2665
      break;
2666
  }
2667
  assert(nfences >= 2);
2668
 
2669
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2670
  if (csp != old_top) {
2671
    mchunkptr q = (mchunkptr)old_top;
2672
    size_t psize = csp - old_top;
2673
    mchunkptr tn = chunk_plus_offset(q, psize);
2674
    set_free_with_pinuse(q, psize, tn);
2675
    insert_chunk(m, q, psize);
2676
  }
2677
 
2678
  check_top_chunk(m, m->top);
2679
}
2680
 
2681
/* -------------------------- System allocation -------------------------- */
2682
 
2683
/* Get memory from system using MORECORE or MMAP */
2684
static void* sys_alloc(mstate m, size_t nb) {
2685
  char* tbase = CMFAIL;
2686
  size_t tsize = 0;
2687
  flag_t mmap_flag = 0;
2688
 
2689
  init_mparams();
2690
 
2691
  /* Directly map large chunks */
2692
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2693
    void* mem = mmap_alloc(m, nb);
2694
    if (mem != 0)
2695
      return mem;
2696
  }
2697
 
2698
  /*
2699
    Try getting memory in any of three ways (in most-preferred to
2700
    least-preferred order):
2701
    1. A call to MORECORE that can normally contiguously extend memory.
2702
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
2703
       or main space is mmapped or a previous contiguous call failed)
2704
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2705
       Note that under the default settings, if MORECORE is unable to
2706
       fulfill a request, and HAVE_MMAP is true, then mmap is
2707
       used as a noncontiguous system allocator. This is a useful backup
2708
       strategy for systems with holes in address spaces -- in this case
2709
       sbrk cannot contiguously expand the heap, but mmap may be able to
2710
       find space.
2711
    3. A call to MORECORE that cannot usually contiguously extend memory.
2712
       (disabled if not HAVE_MORECORE)
2713
  */
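  /*
    Illustrative configuration sketch (an example only, not settings used
    or required by this file): which of the three paths above is even
    compiled in is decided by the option macros declared earlier, e.g. a
    typical sbrk+mmap Unix build might use

        #define HAVE_MORECORE        1
        #define MORECORE             sbrk
        #define MORECORE_CONTIGUOUS  1
        #define HAVE_MMAP            1

    giving all three paths, while a port without an sbrk-like primitive
    would set HAVE_MORECORE to 0 and rely entirely on path 2.
  */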
2714
 
2715
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2716
    char* br = CMFAIL;
2717
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2718
    size_t asize = 0;
2719
    ACQUIRE_MORECORE_LOCK();
2720
 
2721
    if (ss == 0) {  /* First time through or recovery */
2722
      char* base = (char*)CALL_MORECORE(0);
2723
      if (base != CMFAIL) {
2724
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2725
        /* Adjust to end on a page boundary */
2726
        if (!is_page_aligned(base))
2727
          asize += (page_align((size_t)base) - (size_t)base);
2728
        /* Can't call MORECORE if size is negative when treated as signed */
2729
        if (asize < HALF_MAX_SIZE_T &&
2730
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2731
          tbase = base;
2732
          tsize = asize;
2733
        }
2734
      }
2735
    }
2736
    else {
2737
      /* Subtract out existing available top space from MORECORE request. */
2738
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2739
      /* Use mem here only if it did continuously extend old space */
2740
      if (asize < HALF_MAX_SIZE_T &&
2741
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2742
        tbase = br;
2743
        tsize = asize;
2744
      }
2745
    }
2746
 
2747
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2748
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2749
        if (asize < HALF_MAX_SIZE_T &&
2750
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2751
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2752
          if (esize < HALF_MAX_SIZE_T) {
2753
            char* end = (char*)CALL_MORECORE(esize);
2754
            if (end != CMFAIL)
2755
              asize += esize;
2756
            else {            /* Can't use; try to release */
2757
              CALL_MORECORE(-asize);
2758
              br = CMFAIL;
2759
            }
2760
          }
2761
        }
2762
      }
2763
      if (br != CMFAIL) {    /* Use the space we did get */
2764
        tbase = br;
2765
        tsize = asize;
2766
      }
2767
      else
2768
        disable_contiguous(m); /* Don't try contiguous path in the future */
2769
    }
2770
 
2771
    RELEASE_MORECORE_LOCK();
2772
  }
2773
 
2774
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2775
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2776
    size_t rsize = granularity_align(req);
2777
    if (rsize > nb) { /* Fail if wraps around zero */
2778
      char* mp = (char*)(CALL_MMAP(rsize));
2779
      if (mp != CMFAIL) {
2780
        tbase = mp;
2781
        tsize = rsize;
2782
        mmap_flag = IS_MMAPPED_BIT;
2783
      }
2784
    }
2785
  }
2786
 
2787
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2788
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2789
    if (asize < HALF_MAX_SIZE_T) {
2790
      char* br = CMFAIL;
2791
      char* end = CMFAIL;
2792
      ACQUIRE_MORECORE_LOCK();
2793
      br = (char*)(CALL_MORECORE(asize));
2794
      end = (char*)(CALL_MORECORE(0));
2795
      RELEASE_MORECORE_LOCK();
2796
      if (br != CMFAIL && end != CMFAIL && br < end) {
2797
        size_t ssize = end - br;
2798
        if (ssize > nb + TOP_FOOT_SIZE) {
2799
          tbase = br;
2800
          tsize = ssize;
2801
        }
2802
      }
2803
    }
2804
  }
2805
 
2806
  if (tbase != CMFAIL) {
2807
 
2808
    if ((m->footprint += tsize) > m->max_footprint)
2809
      m->max_footprint = m->footprint;
2810
 
2811
    if (!is_initialized(m)) { /* first-time initialization */
2812
      m->seg.base = m->least_addr = tbase;
2813
      m->seg.size = tsize;
2814
      m->seg.sflags = mmap_flag;
2815
      m->magic = mparams.magic;
2816
      init_bins(m);
2817
      if (is_global(m))
2818
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2819
      else {
2820
        /* Offset top by embedded malloc_state */
2821
        mchunkptr mn = next_chunk(mem2chunk(m));
2822
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2823
      }
2824
    }
2825
 
2826
    else {
2827
      /* Try to merge with an existing segment */
2828
      msegmentptr sp = &m->seg;
2829
      while (sp != 0 && tbase != sp->base + sp->size)
2830
        sp = sp->next;
2831
      if (sp != 0 &&
2832
          !is_extern_segment(sp) &&
2833
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2834
          segment_holds(sp, m->top)) { /* append */
2835
        sp->size += tsize;
2836
        init_top(m, m->top, m->topsize + tsize);
2837
      }
2838
      else {
2839
        if (tbase < m->least_addr)
2840
          m->least_addr = tbase;
2841
        sp = &m->seg;
2842
        while (sp != 0 && sp->base != tbase + tsize)
2843
          sp = sp->next;
2844
        if (sp != 0 &&
2845
            !is_extern_segment(sp) &&
2846
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2847
          char* oldbase = sp->base;
2848
          sp->base = tbase;
2849
          sp->size += tsize;
2850
          return prepend_alloc(m, tbase, oldbase, nb);
2851
        }
2852
        else
2853
          add_segment(m, tbase, tsize, mmap_flag);
2854
      }
2855
    }
2856
 
2857
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2858
      size_t rsize = m->topsize -= nb;
2859
      mchunkptr p = m->top;
2860
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2861
      r->head = rsize | PINUSE_BIT;
2862
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2863
      check_top_chunk(m, m->top);
2864
      check_malloced_chunk(m, chunk2mem(p), nb);
2865
      return chunk2mem(p);
2866
    }
2867
  }
2868
 
2869
  MALLOC_FAILURE_ACTION;
2870
  return 0;
2871
}
2872
 
2873
/* -----------------------  system deallocation -------------------------- */
2874
 
2875
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2876
static size_t release_unused_segments(mstate m) {
2877
  size_t released = 0;
2878
  msegmentptr pred = &m->seg;
2879
  msegmentptr sp = pred->next;
2880
  while (sp != 0) {
2881
    char* base = sp->base;
2882
    size_t size = sp->size;
2883
    msegmentptr next = sp->next;
2884
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2885
      mchunkptr p = align_as_chunk(base);
2886
      size_t psize = chunksize(p);
2887
      /* Can unmap if first chunk holds entire segment and not pinned */
2888
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2889
        tchunkptr tp = (tchunkptr)p;
2890
        assert(segment_holds(sp, (char*)sp));
2891
        if (p == m->dv) {
2892
          m->dv = 0;
2893
          m->dvsize = 0;
2894
        }
2895
        else {
2896
          unlink_large_chunk(m, tp);
2897
        }
2898
        if (CALL_MUNMAP(base, size) == 0) {
2899
          released += size;
2900
          m->footprint -= size;
2901
          /* unlink obsoleted record */
2902
          sp = pred;
2903
          sp->next = next;
2904
        }
2905
        else { /* back out if cannot unmap */
2906
          insert_large_chunk(m, tp, psize);
2907
        }
2908
      }
2909
    }
2910
    pred = sp;
2911
    sp = next;
2912
  }
2913
  return released;
2914
}
2915
 
2916
static int sys_trim(mstate m, size_t pad) {
2917
  size_t released = 0;
2918
  if (pad < MAX_REQUEST && is_initialized(m)) {
2919
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2920
 
2921
    if (m->topsize > pad) {
2922
      /* Shrink top space in granularity-size units, keeping at least one */
2923
      size_t unit = mparams.granularity;
2924
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2925
                      SIZE_T_ONE) * unit;
2926
      msegmentptr sp = segment_holding(m, (char*)m->top);
2927
 
2928
      if (!is_extern_segment(sp)) {
2929
        if (is_mmapped_segment(sp)) {
2930
          if (HAVE_MMAP &&
2931
              sp->size >= extra &&
2932
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2933
            /* Prefer mremap, fall back to munmap */
1719 decky 2934
            if ((CALL_MREMAP(sp->base, sp->size, sp->size - extra, 0) != MFAIL) ||
2935
                (CALL_MUNMAP(sp->base + sp->size - extra, extra) == 0)) {
968 palkovsky 2936
              released = extra;
2937
            }
2938
          }
2939
        }
2940
        else if (HAVE_MORECORE) {
2941
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2942
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2943
          ACQUIRE_MORECORE_LOCK();
2944
          {
2945
            /* Make sure end of memory is where we last set it. */
2946
            char* old_br = (char*)(CALL_MORECORE(0));
2947
            if (old_br == sp->base + sp->size) {
2948
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2949
              char* new_br = (char*)(CALL_MORECORE(0));
2950
              if (rel_br != CMFAIL && new_br < old_br)
2951
                released = old_br - new_br;
2952
            }
2953
          }
2954
          RELEASE_MORECORE_LOCK();
2955
        }
2956
      }
2957
 
2958
      if (released != 0) {
2959
        sp->size -= released;
2960
        m->footprint -= released;
2961
        init_top(m, m->top, m->topsize - released);
2962
        check_top_chunk(m, m->top);
2963
      }
2964
    }
2965
 
2966
    /* Unmap any unused mmapped segments */
2967
    if (HAVE_MMAP)
2968
      released += release_unused_segments(m);
2969
 
2970
    /* On failure, disable autotrim to avoid repeated failed future calls */
2971
    if (released == 0)
2972
      m->trim_check = MAX_SIZE_T;
2973
  }
2974
 
2975
  return (released != 0)? 1 : 0;
2976
}
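/*
  Illustrative arithmetic for the trim computation above (the figures
  are an example, not values from any particular build): with
  mparams.granularity = 64K, m->topsize = 256K and pad = 0,

      extra = ((256K - 0 + (64K - 1)) / 64K - 1) * 64K = (4 - 1) * 64K = 192K

  so at most 192K is offered back to the system and one full granularity
  unit stays in the top chunk, matching the "keeping at least one"
  comment inside sys_trim.
*/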
2977
 
2978
/* ---------------------------- malloc support --------------------------- */
2979
 
2980
/* allocate a large request from the best fitting chunk in a treebin */
2981
static void* tmalloc_large(mstate m, size_t nb) {
2982
  tchunkptr v = 0;
2983
  size_t rsize = -nb; /* Unsigned negation */
2984
  tchunkptr t;
2985
  bindex_t idx;
2986
  compute_tree_index(nb, idx);
2987
 
2988
  if ((t = *treebin_at(m, idx)) != 0) {
2989
    /* Traverse tree for this bin looking for node with size == nb */
2990
    size_t sizebits = nb << leftshift_for_tree_index(idx);
2991
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
2992
    for (;;) {
2993
      tchunkptr rt;
2994
      size_t trem = chunksize(t) - nb;
2995
      if (trem < rsize) {
2996
        v = t;
2997
        if ((rsize = trem) == 0)
2998
          break;
2999
      }
3000
      rt = t->child[1];
3001
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3002
      if (rt != 0 && rt != t)
3003
        rst = rt;
3004
      if (t == 0) {
3005
        t = rst; /* set t to least subtree holding sizes > nb */
3006
        break;
3007
      }
3008
      sizebits <<= 1;
3009
    }
3010
  }
3011
 
3012
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3013
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3014
    if (leftbits != 0) {
3015
      bindex_t i;
3016
      binmap_t leastbit = least_bit(leftbits);
3017
      compute_bit2idx(leastbit, i);
3018
      t = *treebin_at(m, i);
3019
    }
3020
  }
3021
 
3022
  while (t != 0) { /* find smallest of tree or subtree */
3023
    size_t trem = chunksize(t) - nb;
3024
    if (trem < rsize) {
3025
      rsize = trem;
3026
      v = t;
3027
    }
3028
    t = leftmost_child(t);
3029
  }
3030
 
3031
  /*  If dv is a better fit, return 0 so malloc will use it */
3032
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3033
    if (RTCHECK(ok_address(m, v))) { /* split */
3034
      mchunkptr r = chunk_plus_offset(v, nb);
3035
      assert(chunksize(v) == rsize + nb);
3036
      if (RTCHECK(ok_next(v, r))) {
3037
        unlink_large_chunk(m, v);
3038
        if (rsize < MIN_CHUNK_SIZE)
3039
          set_inuse_and_pinuse(m, v, (rsize + nb));
3040
        else {
3041
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3042
          set_size_and_pinuse_of_free_chunk(r, rsize);
3043
          insert_chunk(m, r, rsize);
3044
        }
3045
        return chunk2mem(v);
3046
      }
3047
    }
3048
    CORRUPTION_ERROR_ACTION(m);
3049
  }
3050
  return 0;
3051
}
3052
 
3053
/* allocate a small request from the best fitting chunk in a treebin */
3054
static void* tmalloc_small(mstate m, size_t nb) {
3055
  tchunkptr t, v;
3056
  size_t rsize;
3057
  bindex_t i;
3058
  binmap_t leastbit = least_bit(m->treemap);
3059
  compute_bit2idx(leastbit, i);
3060
 
3061
  v = t = *treebin_at(m, i);
3062
  rsize = chunksize(t) - nb;
3063
 
3064
  while ((t = leftmost_child(t)) != 0) {
3065
    size_t trem = chunksize(t) - nb;
3066
    if (trem < rsize) {
3067
      rsize = trem;
3068
      v = t;
3069
    }
3070
  }
3071
 
3072
  if (RTCHECK(ok_address(m, v))) {
3073
    mchunkptr r = chunk_plus_offset(v, nb);
3074
    assert(chunksize(v) == rsize + nb);
3075
    if (RTCHECK(ok_next(v, r))) {
3076
      unlink_large_chunk(m, v);
3077
      if (rsize < MIN_CHUNK_SIZE)
3078
        set_inuse_and_pinuse(m, v, (rsize + nb));
3079
      else {
3080
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3081
        set_size_and_pinuse_of_free_chunk(r, rsize);
3082
        replace_dv(m, r, rsize);
3083
      }
3084
      return chunk2mem(v);
3085
    }
3086
  }
3087
 
3088
  CORRUPTION_ERROR_ACTION(m);
3089
  return 0;
3090
}
3091
 
3092
/* --------------------------- realloc support --------------------------- */
3093
 
3094
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3095
  if (bytes >= MAX_REQUEST) {
3096
    MALLOC_FAILURE_ACTION;
3097
    return 0;
3098
  }
3099
  if (!PREACTION(m)) {
3100
    mchunkptr oldp = mem2chunk(oldmem);
3101
    size_t oldsize = chunksize(oldp);
3102
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3103
    mchunkptr newp = 0;
3104
    void* extra = 0;
3105
 
3106
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3107
 
3108
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3109
                ok_next(oldp, next) && ok_pinuse(next))) {
3110
      size_t nb = request2size(bytes);
3111
      if (is_mmapped(oldp))
3112
        newp = mmap_resize(m, oldp, nb);
3113
      else if (oldsize >= nb) { /* already big enough */
3114
        size_t rsize = oldsize - nb;
3115
        newp = oldp;
3116
        if (rsize >= MIN_CHUNK_SIZE) {
3117
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3118
          set_inuse(m, newp, nb);
3119
          set_inuse(m, remainder, rsize);
3120
          extra = chunk2mem(remainder);
3121
        }
3122
      }
3123
      else if (next == m->top && oldsize + m->topsize > nb) {
3124
        /* Expand into top */
3125
        size_t newsize = oldsize + m->topsize;
3126
        size_t newtopsize = newsize - nb;
3127
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3128
        set_inuse(m, oldp, nb);
3129
        newtop->head = newtopsize |PINUSE_BIT;
3130
        m->top = newtop;
3131
        m->topsize = newtopsize;
3132
        newp = oldp;
3133
      }
3134
    }
3135
    else {
3136
      USAGE_ERROR_ACTION(m, oldmem);
3137
      POSTACTION(m);
3138
      return 0;
3139
    }
3140
 
3141
    POSTACTION(m);
3142
 
3143
    if (newp != 0) {
3144
      if (extra != 0) {
3145
        internal_free(m, extra);
3146
      }
3147
      check_inuse_chunk(m, newp);
3148
      return chunk2mem(newp);
3149
    }
3150
    else {
3151
      void* newmem = internal_malloc(m, bytes);
3152
      if (newmem != 0) {
3153
        size_t oc = oldsize - overhead_for(oldp);
3154
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3155
        internal_free(m, oldmem);
3156
      }
3157
      return newmem;
3158
    }
3159
  }
3160
  return 0;
3161
}
3162
 
3163
/* --------------------------- memalign support -------------------------- */
3164
 
3165
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3166
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3167
    return internal_malloc(m, bytes);
3168
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3169
    alignment = MIN_CHUNK_SIZE;
3170
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3171
    size_t a = MALLOC_ALIGNMENT << 1;
3172
    while (a < alignment) a <<= 1;
3173
    alignment = a;
3174
  }
3175
 
3176
  if (bytes >= MAX_REQUEST - alignment) {
3177
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3178
      MALLOC_FAILURE_ACTION;
3179
    }
3180
  }
3181
  else {
3182
    size_t nb = request2size(bytes);
3183
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3184
    char* mem = (char*)internal_malloc(m, req);
3185
    if (mem != 0) {
3186
      void* leader = 0;
3187
      void* trailer = 0;
3188
      mchunkptr p = mem2chunk(mem);
3189
 
3190
      if (PREACTION(m)) return 0;
3191
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3192
        /*
3193
          Find an aligned spot inside chunk.  Since we need to give
3194
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3195
          the first calculation places us at a spot with less than
3196
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3197
          We've allocated enough total room so that this is always
3198
          possible.
3199
        */
3200
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3201
                                                       alignment -
3202
                                                       SIZE_T_ONE)) &
3203
                                             -alignment));
3204
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3205
          br : br+alignment;
3206
        mchunkptr newp = (mchunkptr)pos;
3207
        size_t leadsize = pos - (char*)(p);
3208
        size_t newsize = chunksize(p) - leadsize;
3209
 
3210
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3211
          newp->prev_foot = p->prev_foot + leadsize;
3212
          newp->head = (newsize|CINUSE_BIT);
3213
        }
3214
        else { /* Otherwise, give back leader, use the rest */
3215
          set_inuse(m, newp, newsize);
3216
          set_inuse(m, p, leadsize);
3217
          leader = chunk2mem(p);
3218
        }
3219
        p = newp;
3220
      }
3221
 
3222
      /* Give back spare room at the end */
3223
      if (!is_mmapped(p)) {
3224
        size_t size = chunksize(p);
3225
        if (size > nb + MIN_CHUNK_SIZE) {
3226
          size_t remainder_size = size - nb;
3227
          mchunkptr remainder = chunk_plus_offset(p, nb);
3228
          set_inuse(m, p, nb);
3229
          set_inuse(m, remainder, remainder_size);
3230
          trailer = chunk2mem(remainder);
3231
        }
3232
      }
3233
 
3234
      assert (chunksize(p) >= nb);
3235
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3236
      check_inuse_chunk(m, p);
3237
      POSTACTION(m);
3238
      if (leader != 0) {
3239
        internal_free(m, leader);
3240
      }
3241
      if (trailer != 0) {
3242
        internal_free(m, trailer);
3243
      }
3244
      return chunk2mem(p);
3245
    }
3246
  }
3247
  return 0;
3248
}
3249
 
3250
/* ------------------------ comalloc/coalloc support --------------------- */
3251
 
3252
static void** ialloc(mstate m,
3253
                     size_t n_elements,
3254
                     size_t* sizes,
3255
                     int opts,
3256
                     void* chunks[]) {
3257
  /*
3258
    This provides common support for independent_X routines, handling
3259
    all of the combinations that can result.
3260
 
3261
    The opts arg has:
3262
    bit 0 set if all elements are same size (using sizes[0])
3263
    bit 1 set if elements should be zeroed
3264
  */
3265
 
3266
  size_t    element_size;   /* chunksize of each element, if all same */
3267
  size_t    contents_size;  /* total size of elements */
3268
  size_t    array_size;     /* request size of pointer array */
3269
  void*     mem;            /* malloced aggregate space */
3270
  mchunkptr p;              /* corresponding chunk */
3271
  size_t    remainder_size; /* remaining bytes while splitting */
3272
  void**    marray;         /* either "chunks" or malloced ptr array */
3273
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3274
  flag_t    was_enabled;    /* to disable mmap */
3275
  size_t    size;
3276
  size_t    i;
3277
 
3278
  /* compute array length, if needed */
3279
  if (chunks != 0) {
3280
    if (n_elements == 0)
3281
      return chunks; /* nothing to do */
3282
    marray = chunks;
3283
    array_size = 0;
3284
  }
3285
  else {
3286
    /* if empty req, must still return chunk representing empty array */
3287
    if (n_elements == 0)
3288
      return (void**)internal_malloc(m, 0);
3289
    marray = 0;
3290
    array_size = request2size(n_elements * (sizeof(void*)));
3291
  }
3292
 
3293
  /* compute total element size */
3294
  if (opts & 0x1) { /* all-same-size */
3295
    element_size = request2size(*sizes);
3296
    contents_size = n_elements * element_size;
3297
  }
3298
  else { /* add up all the sizes */
3299
    element_size = 0;
3300
    contents_size = 0;
3301
    for (i = 0; i != n_elements; ++i)
3302
      contents_size += request2size(sizes[i]);
3303
  }
3304
 
3305
  size = contents_size + array_size;
3306
 
3307
  /*
3308
     Allocate the aggregate chunk.  First disable direct-mmapping so
3309
     malloc won't use it, since we would not be able to later
3310
     free/realloc space internal to a segregated mmap region.
3311
  */
3312
  was_enabled = use_mmap(m);
3313
  disable_mmap(m);
3314
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3315
  if (was_enabled)
3316
    enable_mmap(m);
3317
  if (mem == 0)
3318
    return 0;
3319
 
3320
  if (PREACTION(m)) return 0;
3321
  p = mem2chunk(mem);
3322
  remainder_size = chunksize(p);
3323
 
3324
  assert(!is_mmapped(p));
3325
 
3326
  if (opts & 0x2) {       /* optionally clear the elements */
3327
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3328
  }
3329
 
3330
  /* If not provided, allocate the pointer array as final part of chunk */
3331
  if (marray == 0) {
3332
    size_t  array_chunk_size;
3333
    array_chunk = chunk_plus_offset(p, contents_size);
3334
    array_chunk_size = remainder_size - contents_size;
3335
    marray = (void**) (chunk2mem(array_chunk));
3336
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3337
    remainder_size = contents_size;
3338
  }
3339
 
3340
  /* split out elements */
3341
  for (i = 0; ; ++i) {
3342
    marray[i] = chunk2mem(p);
3343
    if (i != n_elements-1) {
3344
      if (element_size != 0)
3345
        size = element_size;
3346
      else
3347
        size = request2size(sizes[i]);
3348
      remainder_size -= size;
3349
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3350
      p = chunk_plus_offset(p, size);
3351
    }
3352
    else { /* the final element absorbs any overallocation slop */
3353
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3354
      break;
3355
    }
3356
  }
3357
 
3358
#if DEBUG
3359
  if (marray != chunks) {
3360
    /* final element must have exactly exhausted chunk */
3361
    if (element_size != 0) {
3362
      assert(remainder_size == element_size);
3363
    }
3364
    else {
3365
      assert(remainder_size == request2size(sizes[i]));
3366
    }
3367
    check_inuse_chunk(m, mem2chunk(marray));
3368
  }
3369
  for (i = 0; i != n_elements; ++i)
3370
    check_inuse_chunk(m, mem2chunk(marray[i]));
3371
 
3372
#endif /* DEBUG */
3373
 
3374
  POSTACTION(m);
3375
  return marray;
3376
}
3377
 
3378
 
3379
/* -------------------------- public routines ---------------------------- */
3380
 
3381
#if !ONLY_MSPACES
3382
 
3383
void* dlmalloc(size_t bytes) {
3384
  /*
3385
     Basic algorithm:
3386
     If a small request (< 256 bytes minus per-chunk overhead):
3387
       1. If one exists, use a remainderless chunk in associated smallbin.
3388
          (Remainderless means that there are too few excess bytes to
3389
          represent as a chunk.)
3390
       2. If it is big enough, use the dv chunk, which is normally the
3391
          chunk adjacent to the one used for the most recent small request.
3392
       3. If one exists, split the smallest available chunk in a bin,
3393
          saving remainder in dv.
3394
       4. If it is big enough, use the top chunk.
3395
       5. If available, get memory from system and use it
3396
     Otherwise, for a large request:
3397
       1. Find the smallest available binned chunk that fits, and use it
3398
          if it is better fitting than dv chunk, splitting if necessary.
3399
       2. If better fitting than any binned chunk, use the dv chunk.
3400
       3. If it is big enough, use the top chunk.
3401
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3402
       5. If available, get memory from system and use it
3403
 
3404
     The ugly goto's here ensure that postaction occurs along all paths.
3405
  */
3406
 
3407
  if (!PREACTION(gm)) {
3408
    void* mem;
3409
    size_t nb;
3410
    if (bytes <= MAX_SMALL_REQUEST) {
3411
      bindex_t idx;
3412
      binmap_t smallbits;
3413
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3414
      idx = small_index(nb);
3415
      smallbits = gm->smallmap >> idx;
3416
 
3417
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3418
        mchunkptr b, p;
3419
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3420
        b = smallbin_at(gm, idx);
3421
        p = b->fd;
3422
        assert(chunksize(p) == small_index2size(idx));
3423
        unlink_first_small_chunk(gm, b, p, idx);
3424
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3425
        mem = chunk2mem(p);
3426
        check_malloced_chunk(gm, mem, nb);
3427
        goto postaction;
3428
      }
3429
 
3430
      else if (nb > gm->dvsize) {
3431
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3432
          mchunkptr b, p, r;
3433
          size_t rsize;
3434
          bindex_t i;
3435
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3436
          binmap_t leastbit = least_bit(leftbits);
3437
          compute_bit2idx(leastbit, i);
3438
          b = smallbin_at(gm, i);
3439
          p = b->fd;
3440
          assert(chunksize(p) == small_index2size(i));
3441
          unlink_first_small_chunk(gm, b, p, i);
3442
          rsize = small_index2size(i) - nb;
3443
          /* Fit here cannot be remainderless if 4byte sizes */
3444
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3445
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3446
          else {
3447
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3448
            r = chunk_plus_offset(p, nb);
3449
            set_size_and_pinuse_of_free_chunk(r, rsize);
3450
            replace_dv(gm, r, rsize);
3451
          }
3452
          mem = chunk2mem(p);
3453
          check_malloced_chunk(gm, mem, nb);
3454
          goto postaction;
3455
        }
3456
 
3457
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3458
          check_malloced_chunk(gm, mem, nb);
3459
          goto postaction;
3460
        }
3461
      }
3462
    }
3463
    else if (bytes >= MAX_REQUEST)
3464
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3465
    else {
3466
      nb = pad_request(bytes);
3467
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3468
        check_malloced_chunk(gm, mem, nb);
3469
        goto postaction;
3470
      }
3471
    }
3472
 
3473
    if (nb <= gm->dvsize) {
3474
      size_t rsize = gm->dvsize - nb;
3475
      mchunkptr p = gm->dv;
3476
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3477
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3478
        gm->dvsize = rsize;
3479
        set_size_and_pinuse_of_free_chunk(r, rsize);
3480
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3481
      }
3482
      else { /* exhaust dv */
3483
        size_t dvs = gm->dvsize;
3484
        gm->dvsize = 0;
3485
        gm->dv = 0;
3486
        set_inuse_and_pinuse(gm, p, dvs);
3487
      }
3488
      mem = chunk2mem(p);
3489
      check_malloced_chunk(gm, mem, nb);
3490
      goto postaction;
3491
    }
3492
 
3493
    else if (nb < gm->topsize) { /* Split top */
3494
      size_t rsize = gm->topsize -= nb;
3495
      mchunkptr p = gm->top;
3496
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3497
      r->head = rsize | PINUSE_BIT;
3498
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3499
      mem = chunk2mem(p);
3500
      check_top_chunk(gm, gm->top);
3501
      check_malloced_chunk(gm, mem, nb);
3502
      goto postaction;
3503
    }
3504
 
3505
    mem = sys_alloc(gm, nb);
3506
 
3507
  postaction:
3508
    POSTACTION(gm);
3509
    return mem;
3510
  }
3511
 
3512
  return 0;
3513
}
3514
 
3515
void dlfree(void* mem) {
3516
  /*
3517
     Consolidate freed chunks with preceding or succeeding bordering
3518
     free chunks, if they exist, and then place in a bin.  Intermixed
3519
     with special cases for top, dv, mmapped chunks, and usage errors.
3520
  */
3521
 
3522
  if (mem != 0) {
3523
    mchunkptr p  = mem2chunk(mem);
3524
#if FOOTERS
3525
    mstate fm = get_mstate_for(p);
3526
    if (!ok_magic(fm)) {
3527
      USAGE_ERROR_ACTION(fm, p);
3528
      return;
3529
    }
3530
#else /* FOOTERS */
3531
#define fm gm
3532
#endif /* FOOTERS */
3533
    if (!PREACTION(fm)) {
3534
      check_inuse_chunk(fm, p);
3535
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3536
        size_t psize = chunksize(p);
3537
        mchunkptr next = chunk_plus_offset(p, psize);
3538
        if (!pinuse(p)) {
3539
          size_t prevsize = p->prev_foot;
3540
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3541
            prevsize &= ~IS_MMAPPED_BIT;
3542
            psize += prevsize + MMAP_FOOT_PAD;
3543
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3544
              fm->footprint -= psize;
3545
            goto postaction;
3546
          }
3547
          else {
3548
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3549
            psize += prevsize;
3550
            p = prev;
3551
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3552
              if (p != fm->dv) {
3553
                unlink_chunk(fm, p, prevsize);
3554
              }
3555
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3556
                fm->dvsize = psize;
3557
                set_free_with_pinuse(p, psize, next);
3558
                goto postaction;
3559
              }
3560
            }
3561
            else
3562
              goto erroraction;
3563
          }
3564
        }
3565
 
3566
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3567
          if (!cinuse(next)) {  /* consolidate forward */
3568
            if (next == fm->top) {
3569
              size_t tsize = fm->topsize += psize;
3570
              fm->top = p;
3571
              p->head = tsize | PINUSE_BIT;
3572
              if (p == fm->dv) {
3573
                fm->dv = 0;
3574
                fm->dvsize = 0;
3575
              }
3576
              if (should_trim(fm, tsize))
3577
                sys_trim(fm, 0);
3578
              goto postaction;
3579
            }
3580
            else if (next == fm->dv) {
3581
              size_t dsize = fm->dvsize += psize;
3582
              fm->dv = p;
3583
              set_size_and_pinuse_of_free_chunk(p, dsize);
3584
              goto postaction;
3585
            }
3586
            else {
3587
              size_t nsize = chunksize(next);
3588
              psize += nsize;
3589
              unlink_chunk(fm, next, nsize);
3590
              set_size_and_pinuse_of_free_chunk(p, psize);
3591
              if (p == fm->dv) {
3592
                fm->dvsize = psize;
3593
                goto postaction;
3594
              }
3595
            }
3596
          }
3597
          else
3598
            set_free_with_pinuse(p, psize, next);
3599
          insert_chunk(fm, p, psize);
3600
          check_free_chunk(fm, p);
3601
          goto postaction;
3602
        }
3603
      }
3604
    erroraction:
3605
      USAGE_ERROR_ACTION(fm, p);
3606
    postaction:
3607
      POSTACTION(fm);
3608
    }
3609
  }
3610
#if !FOOTERS
3611
#undef fm
3612
#endif /* FOOTERS */
3613
}
3614
 
3615
void* dlcalloc(size_t n_elements, size_t elem_size) {
3616
  void* mem;
3617
  size_t req = 0;
3618
  if (n_elements != 0) {
3619
    req = n_elements * elem_size;
3620
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3621
        (req / n_elements != elem_size))
3622
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3623
  }
3624
  mem = dlmalloc(req);
3625
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3626
    memset(mem, 0, req);
3627
  return mem;
3628
}
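/*
  Illustrative note on the overflow guard in dlcalloc above (the
  concrete figures assume a 32-bit size_t and are an example only).
  When both n_elements and elem_size fit in 16 bits their product
  cannot exceed 32 bits, so the cheap
  (n_elements | elem_size) & ~(size_t)0xffff test lets such requests
  skip the division.  For n_elements = 0x10000 and elem_size = 0x10001
  the product wraps to 0x10000, but req / n_elements == 1 != elem_size,
  so req is forced to MAX_SIZE_T and the request fails downstream
  instead of silently returning a too-small block.
*/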
3629
 
3630
void* dlrealloc(void* oldmem, size_t bytes) {
3631
  if (oldmem == 0)
3632
    return dlmalloc(bytes);
3633
#ifdef REALLOC_ZERO_BYTES_FREES
3634
  if (bytes == 0) {
3635
    dlfree(oldmem);
3636
    return 0;
3637
  }
3638
#endif /* REALLOC_ZERO_BYTES_FREES */
3639
  else {
3640
#if ! FOOTERS
3641
    mstate m = gm;
3642
#else /* FOOTERS */
3643
    mstate m = get_mstate_for(mem2chunk(oldmem));
3644
    if (!ok_magic(m)) {
3645
      USAGE_ERROR_ACTION(m, oldmem);
3646
      return 0;
3647
    }
3648
#endif /* FOOTERS */
3649
    return internal_realloc(m, oldmem, bytes);
3650
  }
3651
}
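/*
  Minimal caller-side usage sketch for dlrealloc (not part of this
  allocator; buf, old_count and handle_oom are hypothetical names).
  Keeping the old pointer until the call is known to have succeeded
  avoids leaking the original block when growth fails:

      size_t n = 2 * old_count * sizeof(int);
      int* tmp = (int*)dlrealloc(buf, n);
      if (tmp != 0)
        buf = tmp;            // resized in place or moved
      else
        handle_oom();         // buf is still valid and must still be freed
*/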
3652
 
3653
void* dlmemalign(size_t alignment, size_t bytes) {
3654
  return internal_memalign(gm, alignment, bytes);
3655
}
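/*
  Minimal caller-side usage sketch for dlmemalign (not part of this
  allocator).  internal_memalign raises the alignment to at least
  MIN_CHUNK_SIZE and to a power of two if necessary, so a power-of-two
  request such as 64 comes back suitably aligned:

      void* p = dlmemalign(64, 1000);   // ((size_t)p % 64) == 0 on success
      if (p != 0) {
        // ... use the cache-line aligned block ...
        dlfree(p);
      }
*/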
3656
 
3657
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3658
                                 void* chunks[]) {
3659
  size_t sz = elem_size; /* serves as 1-element array */
3660
  return ialloc(gm, n_elements, &sz, 3, chunks);
3661
}
3662
 
3663
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3664
                                   void* chunks[]) {
3665
  return ialloc(gm, n_elements, sizes, 0, chunks);
3666
}
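/*
  Caller-side usage sketch for the two wrappers above (not part of this
  allocator; struct node and struct hdr are made-up example types).
  Both funnel into ialloc: dlindependent_calloc passes opts == 3
  (equal sizes, zeroed), dlindependent_comalloc passes opts == 0
  (per-element sizes, not cleared):

      // ten equally sized, zeroed elements carved from one chunk
      void* nodes[10];
      if (dlindependent_calloc(10, sizeof(struct node), nodes) != 0) {
        // nodes[0] .. nodes[9] may each be passed to dlfree later
      }

      // header plus payload from a single underlying allocation
      size_t sizes[2] = { sizeof(struct hdr), 4096 };
      void* parts[2];
      if (dlindependent_comalloc(2, sizes, parts) != 0) {
        // parts[0] is the header, parts[1] the payload
      }
*/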
3667
 
3668
void* dlvalloc(size_t bytes) {
3669
  size_t pagesz;
3670
  init_mparams();
3671
  pagesz = mparams.page_size;
3672
  return dlmemalign(pagesz, bytes);
3673
}
3674
 
3675
void* dlpvalloc(size_t bytes) {
3676
  size_t pagesz;
3677
  init_mparams();
3678
  pagesz = mparams.page_size;
3679
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3680
}
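/*
  Illustrative arithmetic for the page rounding in dlpvalloc above (an
  example only; the real page size comes from mparams.page_size): with
  a 4096-byte page and bytes = 5000,

      (5000 + 4096 - 1) & ~(4096 - 1) = 9095 & ~0xfff = 8192

  so the request is rounded up to two whole pages before being handed
  to dlmemalign, whereas dlvalloc passes the requested size through
  unchanged.
*/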
3681
 
3682
int dlmalloc_trim(size_t pad) {
3683
  int result = 0;
3684
  if (!PREACTION(gm)) {
3685
    result = sys_trim(gm, pad);
3686
    POSTACTION(gm);
3687
  }
3688
  return result;
3689
}
3690
 
3691
size_t dlmalloc_footprint(void) {
3692
  return gm->footprint;
3693
}
3694
 
3695
size_t dlmalloc_max_footprint(void) {
3696
  return gm->max_footprint;
3697
}
3698
 
3699
#if !NO_MALLINFO
3700
struct mallinfo dlmallinfo(void) {
3701
  return internal_mallinfo(gm);
3702
}
3703
#endif /* NO_MALLINFO */
3704
 
3705
void dlmalloc_stats() {
3706
  internal_malloc_stats(gm);
3707
}
3708
 
3709
size_t dlmalloc_usable_size(void* mem) {
3710
  if (mem != 0) {
3711
    mchunkptr p = mem2chunk(mem);
3712
    if (cinuse(p))
3713
      return chunksize(p) - overhead_for(p);
3714
  }
3715
  return 0;
3716
}
3717
 
3718
int dlmallopt(int param_number, int value) {
3719
  return change_mparam(param_number, value);
3720
}
3721
 
3722
#endif /* !ONLY_MSPACES */
3723
 
3724
/* ----------------------------- user mspaces ---------------------------- */
3725
 
3726
#if MSPACES
3727
 
3728
static mstate init_user_mstate(char* tbase, size_t tsize) {
3729
  size_t msize = pad_request(sizeof(struct malloc_state));
3730
  mchunkptr mn;
3731
  mchunkptr msp = align_as_chunk(tbase);
3732
  mstate m = (mstate)(chunk2mem(msp));
3733
  memset(m, 0, msize);
3734
  INITIAL_LOCK(&m->mutex);
3735
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
3736
  m->seg.base = m->least_addr = tbase;
3737
  m->seg.size = m->footprint = m->max_footprint = tsize;
3738
  m->magic = mparams.magic;
3739
  m->mflags = mparams.default_mflags;
3740
  disable_contiguous(m);
3741
  init_bins(m);
3742
  mn = next_chunk(mem2chunk(m));
3743
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
3744
  check_top_chunk(m, m->top);
3745
  return m;
3746
}
3747
 
3748
mspace create_mspace(size_t capacity, int locked) {
3749
  mstate m = 0;
3750
  size_t msize = pad_request(sizeof(struct malloc_state));
3751
  init_mparams(); /* Ensure pagesize etc initialized */
3752
 
3753
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3754
    size_t rs = ((capacity == 0)? mparams.granularity :
3755
                 (capacity + TOP_FOOT_SIZE + msize));
3756
    size_t tsize = granularity_align(rs);
3757
    char* tbase = (char*)(CALL_MMAP(tsize));
3758
    if (tbase != CMFAIL) {
3759
      m = init_user_mstate(tbase, tsize);
3760
      m->seg.sflags = IS_MMAPPED_BIT;
3761
      set_lock(m, locked);
3762
    }
3763
  }
3764
  return (mspace)m;
3765
}
3766
 
3767
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
3768
  mstate m = 0;
3769
  size_t msize = pad_request(sizeof(struct malloc_state));
3770
  init_mparams(); /* Ensure pagesize etc initialized */
3771
 
3772
  if (capacity > msize + TOP_FOOT_SIZE &&
3773
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3774
    m = init_user_mstate((char*)base, capacity);
3775
    m->seg.sflags = EXTERN_BIT;
3776
    set_lock(m, locked);
3777
  }
3778
  return (mspace)m;
3779
}
3780
 
3781
size_t destroy_mspace(mspace msp) {
3782
  size_t freed = 0;
3783
  mstate ms = (mstate)msp;
3784
  if (ok_magic(ms)) {
3785
    msegmentptr sp = &ms->seg;
3786
    while (sp != 0) {
3787
      char* base = sp->base;
3788
      size_t size = sp->size;
3789
      flag_t flag = sp->sflags;
3790
      sp = sp->next;
3791
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
3792
          CALL_MUNMAP(base, size) == 0)
3793
        freed += size;
3794
    }
3795
  }
3796
  else {
3797
    USAGE_ERROR_ACTION(ms,ms);
3798
  }
3799
  return freed;
3800
}
3801
 
3802
/*
3803
  mspace versions of routines are near-clones of the global
3804
  versions. This is not so nice but better than the alternatives.
3805
*/
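/*
  Minimal caller-side usage sketch for the mspace API below (not part
  of this allocator).  A capacity of 0 asks create_mspace for a
  granularity-sized initial region and locked == 0 disables locking:

      mspace ms = create_mspace(0, 0);
      if (ms != 0) {
        void* p = mspace_malloc(ms, 128);
        if (p != 0)
          mspace_free(ms, p);
        destroy_mspace(ms);     // releases all of the space at once
      }
*/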
3806
 
3807
 
3808
void* mspace_malloc(mspace msp, size_t bytes) {
3809
  mstate ms = (mstate)msp;
3810
  if (!ok_magic(ms)) {
3811
    USAGE_ERROR_ACTION(ms,ms);
3812
    return 0;
3813
  }
3814
  if (!PREACTION(ms)) {
3815
    void* mem;
3816
    size_t nb;
3817
    if (bytes <= MAX_SMALL_REQUEST) {
3818
      bindex_t idx;
3819
      binmap_t smallbits;
3820
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3821
      idx = small_index(nb);
3822
      smallbits = ms->smallmap >> idx;
3823
 
3824
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3825
        mchunkptr b, p;
3826
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3827
        b = smallbin_at(ms, idx);
3828
        p = b->fd;
3829
        assert(chunksize(p) == small_index2size(idx));
3830
        unlink_first_small_chunk(ms, b, p, idx);
3831
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
3832
        mem = chunk2mem(p);
3833
        check_malloced_chunk(ms, mem, nb);
3834
        goto postaction;
3835
      }
3836
 
3837
      else if (nb > ms->dvsize) {
3838
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3839
          mchunkptr b, p, r;
3840
          size_t rsize;
3841
          bindex_t i;
3842
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3843
          binmap_t leastbit = least_bit(leftbits);
3844
          compute_bit2idx(leastbit, i);
3845
          b = smallbin_at(ms, i);
3846
          p = b->fd;
3847
          assert(chunksize(p) == small_index2size(i));
3848
          unlink_first_small_chunk(ms, b, p, i);
3849
          rsize = small_index2size(i) - nb;
3850
          /* Fit here cannot be remainderless if 4byte sizes */
3851
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3852
            set_inuse_and_pinuse(ms, p, small_index2size(i));
3853
          else {
3854
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3855
            r = chunk_plus_offset(p, nb);
3856
            set_size_and_pinuse_of_free_chunk(r, rsize);
3857
            replace_dv(ms, r, rsize);
3858
          }
3859
          mem = chunk2mem(p);
3860
          check_malloced_chunk(ms, mem, nb);
3861
          goto postaction;
3862
        }
3863
 
3864
        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
3865
          check_malloced_chunk(ms, mem, nb);
3866
          goto postaction;
3867
        }
3868
      }
3869
    }
3870
    else if (bytes >= MAX_REQUEST)
3871
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3872
    else {
3873
      nb = pad_request(bytes);
3874
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
3875
        check_malloced_chunk(ms, mem, nb);
3876
        goto postaction;
3877
      }
3878
    }
3879
 
3880
    if (nb <= ms->dvsize) {
3881
      size_t rsize = ms->dvsize - nb;
3882
      mchunkptr p = ms->dv;
3883
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3884
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
3885
        ms->dvsize = rsize;
3886
        set_size_and_pinuse_of_free_chunk(r, rsize);
3887
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3888
      }
3889
      else { /* exhaust dv */
3890
        size_t dvs = ms->dvsize;
3891
        ms->dvsize = 0;
3892
        ms->dv = 0;
3893
        set_inuse_and_pinuse(ms, p, dvs);
3894
      }
3895
      mem = chunk2mem(p);
3896
      check_malloced_chunk(ms, mem, nb);
3897
      goto postaction;
3898
    }
3899
 
3900
    else if (nb < ms->topsize) { /* Split top */
3901
      size_t rsize = ms->topsize -= nb;
3902
      mchunkptr p = ms->top;
3903
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
3904
      r->head = rsize | PINUSE_BIT;
3905
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3906
      mem = chunk2mem(p);
3907
      check_top_chunk(ms, ms->top);
3908
      check_malloced_chunk(ms, mem, nb);
3909
      goto postaction;
3910
    }
3911
 
3912
    mem = sys_alloc(ms, nb);
3913
 
3914
  postaction:
3915
    POSTACTION(ms);
3916
    return mem;
3917
  }
3918
 
3919
  return 0;
3920
}
3921
 
3922
void mspace_free(mspace msp, void* mem) {
3923
  if (mem != 0) {
3924
    mchunkptr p  = mem2chunk(mem);
3925
#if FOOTERS
3926
    mstate fm = get_mstate_for(p);
3927
#else /* FOOTERS */
3928
    mstate fm = (mstate)msp;
3929
#endif /* FOOTERS */
3930
    if (!ok_magic(fm)) {
3931
      USAGE_ERROR_ACTION(fm, p);
3932
      return;
3933
    }
3934
    if (!PREACTION(fm)) {
3935
      check_inuse_chunk(fm, p);
3936
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3937
        size_t psize = chunksize(p);
3938
        mchunkptr next = chunk_plus_offset(p, psize);
3939
        if (!pinuse(p)) {
3940
          size_t prevsize = p->prev_foot;
3941
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3942
            prevsize &= ~IS_MMAPPED_BIT;
3943
            psize += prevsize + MMAP_FOOT_PAD;
3944
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3945
              fm->footprint -= psize;
3946
            goto postaction;
3947
          }
3948
          else {
3949
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3950
            psize += prevsize;
3951
            p = prev;
3952
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3953
              if (p != fm->dv) {
3954
                unlink_chunk(fm, p, prevsize);
3955
              }
3956
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3957
                fm->dvsize = psize;
3958
                set_free_with_pinuse(p, psize, next);
3959
                goto postaction;
3960
              }
3961
            }
3962
            else
3963
              goto erroraction;
3964
          }
3965
        }
3966
 
3967
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3968
          if (!cinuse(next)) {  /* consolidate forward */
3969
            if (next == fm->top) {
3970
              size_t tsize = fm->topsize += psize;
3971
              fm->top = p;
3972
              p->head = tsize | PINUSE_BIT;
3973
              if (p == fm->dv) {
3974
                fm->dv = 0;
3975
                fm->dvsize = 0;
3976
              }
3977
              if (should_trim(fm, tsize))
3978
                sys_trim(fm, 0);
3979
              goto postaction;
3980
            }
3981
            else if (next == fm->dv) {
3982
              size_t dsize = fm->dvsize += psize;
3983
              fm->dv = p;
3984
              set_size_and_pinuse_of_free_chunk(p, dsize);
3985
              goto postaction;
3986
            }
3987
            else {
3988
              size_t nsize = chunksize(next);
3989
              psize += nsize;
3990
              unlink_chunk(fm, next, nsize);
3991
              set_size_and_pinuse_of_free_chunk(p, psize);
3992
              if (p == fm->dv) {
3993
                fm->dvsize = psize;
3994
                goto postaction;
3995
              }
3996
            }
3997
          }
3998
          else
3999
            set_free_with_pinuse(p, psize, next);
4000
          insert_chunk(fm, p, psize);
4001
          check_free_chunk(fm, p);
4002
          goto postaction;
4003
        }
4004
      }
4005
    erroraction:
4006
      USAGE_ERROR_ACTION(fm, p);
4007
    postaction:
4008
      POSTACTION(fm);
4009
    }
4010
  }
4011
}
4012
 
4013
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4014
  void* mem;
4015
  size_t req = 0;
4016
  mstate ms = (mstate)msp;
4017
  if (!ok_magic(ms)) {
4018
    USAGE_ERROR_ACTION(ms,ms);
4019
    return 0;
4020
  }
4021
  if (n_elements != 0) {
4022
    req = n_elements * elem_size;
4023
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4024
        (req / n_elements != elem_size))
4025
      req = MAX_SIZE_T; /* force downstream failure on overflow */
4026
  }
4027
  mem = internal_malloc(ms, req);
4028
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4029
    memset(mem, 0, req);
4030
  return mem;
4031
}
4032
 
4033
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4034
  if (oldmem == 0)
4035
    return mspace_malloc(msp, bytes);
4036
#ifdef REALLOC_ZERO_BYTES_FREES
4037
  if (bytes == 0) {
4038
    mspace_free(msp, oldmem);
4039
    return 0;
4040
  }
4041
#endif /* REALLOC_ZERO_BYTES_FREES */
4042
  else {
4043
#if FOOTERS
4044
    mchunkptr p  = mem2chunk(oldmem);
4045
    mstate ms = get_mstate_for(p);
4046
#else /* FOOTERS */
4047
    mstate ms = (mstate)msp;
4048
#endif /* FOOTERS */
4049
    if (!ok_magic(ms)) {
4050
      USAGE_ERROR_ACTION(ms,ms);
4051
      return 0;
4052
    }
4053
    return internal_realloc(ms, oldmem, bytes);
4054
  }
4055
}
4056
 
4057
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4058
  mstate ms = (mstate)msp;
4059
  if (!ok_magic(ms)) {
4060
    USAGE_ERROR_ACTION(ms,ms);
4061
    return 0;
4062
  }
4063
  return internal_memalign(ms, alignment, bytes);
4064
}
4065
 
4066
void** mspace_independent_calloc(mspace msp, size_t n_elements,
4067
                                 size_t elem_size, void* chunks[]) {
4068
  size_t sz = elem_size; /* serves as 1-element array */
4069
  mstate ms = (mstate)msp;
4070
  if (!ok_magic(ms)) {
4071
    USAGE_ERROR_ACTION(ms,ms);
4072
    return 0;
4073
  }
4074
  return ialloc(ms, n_elements, &sz, 3, chunks);
4075
}
4076
 
4077
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4078
                                   size_t sizes[], void* chunks[]) {
4079
  mstate ms = (mstate)msp;
4080
  if (!ok_magic(ms)) {
4081
    USAGE_ERROR_ACTION(ms,ms);
4082
    return 0;
4083
  }
4084
  return ialloc(ms, n_elements, sizes, 0, chunks);
4085
}
4086
 
4087
int mspace_trim(mspace msp, size_t pad) {
4088
  int result = 0;
4089
  mstate ms = (mstate)msp;
4090
  if (ok_magic(ms)) {
4091
    if (!PREACTION(ms)) {
4092
      result = sys_trim(ms, pad);
4093
      POSTACTION(ms);
4094
    }
4095
  }
4096
  else {
4097
    USAGE_ERROR_ACTION(ms,ms);
4098
  }
4099
  return result;
4100
}
4101
 
4102
void mspace_malloc_stats(mspace msp) {
4103
  mstate ms = (mstate)msp;
4104
  if (ok_magic(ms)) {
4105
    internal_malloc_stats(ms);
4106
  }
4107
  else {
4108
    USAGE_ERROR_ACTION(ms,ms);
4109
  }
4110
}
4111
 
4112
size_t mspace_footprint(mspace msp) {
4113
  size_t result = 0;
4114
  mstate ms = (mstate)msp;
4115
  if (ok_magic(ms)) {
4116
    result = ms->footprint;
4117
  }
4118
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
4119
  return result;
4120
}
4121
 
4122
 
4123
size_t mspace_max_footprint(mspace msp) {
4124
  size_t result = 0;
4125
  mstate ms = (mstate)msp;
4126
  if (ok_magic(ms)) {
4127
    result = ms->max_footprint;
4128
  }
4129
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
4130
  return result;
4131
}
4132
 
4133
 
4134
#if !NO_MALLINFO
4135
struct mallinfo mspace_mallinfo(mspace msp) {
4136
  mstate ms = (mstate)msp;
4137
  if (!ok_magic(ms)) {
4138
    USAGE_ERROR_ACTION(ms,ms);
4139
  }
4140
  return internal_mallinfo(ms);
4141
}
4142
#endif /* NO_MALLINFO */
4143
 
4144
int mspace_mallopt(int param_number, int value) {
4145
  return change_mparam(param_number, value);
4146
}
4147
 
4148
#endif /* MSPACES */
4149
 
4150
/* -------------------- Alternative MORECORE functions ------------------- */
4151
 
4152
/*
4153
  Guidelines for creating a custom version of MORECORE:
4154
 
4155
  * For best performance, MORECORE should allocate in multiples of pagesize.
4156
  * MORECORE may allocate more memory than requested. (Or even less,
4157
      but this will usually result in a malloc failure.)
4158
  * MORECORE must not allocate memory when given argument zero, but
4159
      instead return one past the end address of memory from previous
4160
      nonzero call.
4161
  * For best performance, consecutive calls to MORECORE with positive
4162
      arguments should return increasing addresses, indicating that
4163
      space has been contiguously extended.
4164
  * Even though consecutive calls to MORECORE need not return contiguous
4165
      addresses, it must be OK for malloc'ed chunks to span multiple
4166
      regions in those cases where they do happen to be contiguous.
4167
  * MORECORE need not handle negative arguments -- it may instead
4168
      just return MFAIL when given negative arguments.
4169
      Negative arguments are always multiples of pagesize. MORECORE
4170
      must not misinterpret negative args as large positive unsigned
4171
      args. You can suppress all such calls from even occurring by defining
4172
      MORECORE_CANNOT_TRIM.
4173
 
4174
  As an example alternative MORECORE, here is a custom allocator
4175
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4176
  necessarily physically contiguous non-paged memory (locked in,
4177
  present and won't get swapped out).  You can use it by uncommenting
4178
  this section, adding some #includes, and setting up the appropriate
4179
  defines above:
4180
 
4181
      #define MORECORE osMoreCore
4182
 
4183
  There is also a shutdown routine that should somehow be called for
4184
  cleanup upon program exit.
4185
 
4186
  #define MAX_POOL_ENTRIES 100
4187
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4188
  static int next_os_pool;
4189
  void *our_os_pools[MAX_POOL_ENTRIES];
4190
 
4191
  void *osMoreCore(int size)
4192
  {
4193
    void *ptr = 0;
4194
    static void *sbrk_top = 0;
4195
 
4196
    if (size > 0)
4197
    {
4198
      if (size < MINIMUM_MORECORE_SIZE)
4199
         size = MINIMUM_MORECORE_SIZE;
4200
      if (CurrentExecutionLevel() == kTaskLevel)
4201
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4202
      if (ptr == 0)
4203
      {
4204
        return (void *) MFAIL;
4205
      }
4206
      // save ptrs so they can be freed during cleanup
4207
      our_os_pools[next_os_pool] = ptr;
4208
      next_os_pool++;
4209
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4210
      sbrk_top = (char *) ptr + size;
4211
      return ptr;
4212
    }
4213
    else if (size < 0)
4214
    {
4215
      // we don't currently support shrink behavior
4216
      return (void *) MFAIL;
4217
    }
4218
    else
4219
    {
4220
      return sbrk_top;
4221
    }
4222
  }
4223
 
4224
  // cleanup any allocated memory pools
4225
  // called as last thing before shutting down driver
4226
 
4227
  void osCleanupMem(void)
4228
  {
4229
    void **ptr;
4230
 
4231
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4232
      if (*ptr)
4233
      {
4234
         PoolDeallocate(*ptr);
4235
         *ptr = 0;
4236
      }
4237
  }
4238
 
4239
*/
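/*
  A second, even smaller MORECORE sketch illustrating the guidelines
  above (an example only, not code used by this file; the arena size
  and the my_morecore name are arbitrary).  It hands out pieces of a
  fixed static arena and obeys the zero-argument rule by returning one
  past the end of the memory returned so far:

  #define MORECORE my_morecore
  #define MORECORE_CONTIGUOUS 1
  #define MORECORE_CANNOT_TRIM

  #define ARENA_SIZE (1024 * 1024)
  static char arena[ARENA_SIZE];
  static size_t arena_used;

  void* my_morecore(int size)
  {
    if (size == 0)
      return arena + arena_used;           // one past the last byte handed out
    if (size < 0)
      return (void*) MFAIL;                // trimming not supported
    if ((size_t) size > ARENA_SIZE - arena_used)
      return (void*) MFAIL;                // arena exhausted
    arena_used += (size_t) size;
    return arena + arena_used - (size_t) size;
  }
*/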
4240
 
4241
 
4242
/* -----------------------------------------------------------------------
4243
History:
4244
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4245
      * Add max_footprint functions
4246
      * Ensure all appropriate literals are size_t
4247
      * Fix conditional compilation problem for some #define settings
4248
      * Avoid concatenating segments with the one provided
4249
        in create_mspace_with_base
4250
      * Rename some variables to avoid compiler shadowing warnings
4251
      * Use explicit lock initialization.
4252
      * Better handling of sbrk interference.
4253
      * Simplify and fix segment insertion, trimming and mspace_destroy
4254
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4255
      * Thanks especially to Dennis Flanagan for help on these.
4256
 
4257
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
          (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version,  but most details differ.)

*/

/** @}
 */