#include <stdio.h>
#include <libc.h>
/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
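
  For instance, with this option you can mix the two allocators in one
  program (a minimal sketch; the dl-prefixed names are exactly what
  this option produces from the public routines):
    #define USE_DL_PREFIX
    void* p = dlmalloc(100);  // served by this allocator
    void* q = malloc(100);    // served by the system allocator
    dlfree(p);
    free(q);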

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses are bypassed rather than
  causing an abort. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to return
  memory because none is available.
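
  For example, the usual unix-style definition is simply (a sketch of
  the default described above; the HelenOS port below defines it as
  empty, since it lacks errno.h):
    #define MALLOC_FAILURE_ACTION  errno = ENOMEM;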

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
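
  As a quick illustration, a MORECORE emulation backed by a fixed
  static arena might look like this (a minimal sketch; the names and
  the arena size are illustrative only):
    static char arena[1024*1024];
    static size_t arena_used = 0;
    void* my_morecore(intptr_t increment) {
      char* prev = arena + arena_used;        // current "break"
      if (increment >= 0) {
        if (arena_used + (size_t)increment > sizeof(arena))
          return (void*)(-1);                 // MFAIL: cannot extend
        arena_used += (size_t)increment;
      } else {
        if ((size_t)(-increment) > arena_used)
          return (void*)(-1);
        arena_used -= (size_t)(-increment);   // release on negative args
      }
      return prev;                            // like sbrk: old break
    }
  with MORECORE then defined as my_morecore.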

MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous saves
  the time and possibly wasted space it would otherwise take to
  discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple
  calls to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).
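
  In other words (a sketch of the two behaviors):
    void* p = malloc(10);
    void* q = realloc(p, 0);
    // default: q is a fresh minimal-size chunk that must still be freed
    // with REALLOC_ZERO_BYTES_FREES: p is freed and q is 0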

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least
  one page and must be a power of two.  Setting to 0 causes
  initialization to either page size or win32 region size.  (Note: In
  previous versions of malloc, the equivalent of this option was
  called "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system-level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

*/

/** @addtogroup libcmalloc malloc
 * @brief Malloc originally written by Doug Lea and ported to HelenOS.
 * @ingroup libc
 * @{
 */
/** @file
 */

#include <sys/types.h>  /* For size_t */

/** Non-default HelenOS customizations */
#define LACKS_FCNTL_H
#define LACKS_SYS_MMAN_H
#define LACKS_SYS_PARAM_H
#undef HAVE_MMAP
#define HAVE_MMAP 0
#define LACKS_ERRNO_H
/* Set errno? */
#undef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#define ONLY_MSPACES 0
#define MSPACES 0

#ifdef MALLOC_ALIGNMENT_16
#define MALLOC_ALIGNMENT ((size_t)16U)
#else
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif

#define FOOTERS 0
#define ABORT  \
{ \
    DEBUG("%s abort in %s on line %d\n", __FILE__, __func__, __LINE__); \
    abort(); \
}
#define ABORT_ON_ASSERT_FAILURE 1
#define PROCEED_ON_ERROR 0
#define USE_LOCKS 1
#define INSECURE 0
#define HAVE_MMAP 0

#define MMAP_CLEARS 1

#define HAVE_MORECORE 1
#define MORECORE_CONTIGUOUS 1
#define MORECORE sbrk
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */

#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)
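
/*
  Example use of the supported mallopt options (a sketch; the values
  are illustrative, and M_MMAP_THRESHOLD has no effect in this port
  since HAVE_MMAP is 0):

    mallopt(M_TRIM_THRESHOLD, 1024*1024);  // trim when >= 1MB is unused
    mallopt(M_GRANULARITY, 64*1024);       // 64K system allocation unit
    mallopt(M_MMAP_THRESHOLD, 512*1024);   // mmap requests >= 512K
*/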

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

#include "malloc.h"

/*------------------------------ internal #includes ---------------------- */

#include <stdio.h>       /* for printing in malloc_stats */
#include <string.h>

#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include <time.h>        /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) {if(!(x)) {DEBUG(#x);ABORT;}}
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#define assert(x)
#endif /* DEBUG */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>    /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#if HAVE_MORECORE
#ifndef LACKS_UNISTD_H
#include <unistd.h>     /* for sbrk */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void*     sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */
#endif /* HAVE_MORECORE */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)
#define SIZE_T_ONE          ((size_t)1)
#define SIZE_T_TWO          ((size_t)2)
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
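
/*
  Worked example (assuming MALLOC_ALIGNMENT == 8, so CHUNK_ALIGN_MASK == 7):
    align_offset(0x1000) == 0   (already 8-byte aligned)
    align_offset(0x1004) == 4   ((8 - 4) & 7)
    align_offset(0x1007) == 1   ((8 - 7) & 7)
*/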

/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so the compiler optimizer can delete code rather than
   using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if !HAVE_MMAP
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
#define USE_MMAP_BIT         (SIZE_T_ZERO)
#define CALL_MMAP(s)         MFAIL
#define CALL_MUNMAP(a, s)    (-1)
#define DIRECT_MMAP(s)       MFAIL

#else /* HAVE_MMAP */
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
#define USE_MMAP_BIT         (SIZE_T_ONE)

#ifndef WIN32
#define CALL_MUNMAP(a, s)    munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP(s)       CALL_MMAP(s)
#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}
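
/*
  Note: VirtualFree with MEM_RELEASE can only release an entire
  VirtualAlloc reservation at once, so a munmap covering several
  coalesced reservations must walk them region by region using
  VirtualQuery, as done above.
*/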

#define CALL_MMAP(s)         win32mmap(s)
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
#define DIRECT_MMAP(s)       win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */

#if HAVE_MMAP && HAVE_MREMAP
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#else  /* HAVE_MMAP && HAVE_MREMAP */
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

#if HAVE_MORECORE
#define CALL_MORECORE(S)     MORECORE(S)
#else  /* HAVE_MORECORE */
#define CALL_MORECORE(S)     MFAIL
#endif /* HAVE_MORECORE */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

#if USE_LOCKS

/*
  When locks are defined, there are up to two global locks:

  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
    MORECORE, which should not be interleaved with calls by other
    threads.  This does not protect against direct calls to MORECORE
    by other threads not using this lock, so there is still code to
    cope as best we can with interference.

  * magic_init_mutex ensures that mparams.magic and other
    unique mparams values are initialized only once.
*/

/* On HelenOS, use futexes rather than posix locks */
#include <futex.h>
#define MLOCK_T atomic_t
#define INITIAL_LOCK(l)      futex_initialize(l, 1)
/* futex_down cannot fail, but can return different
 * retvals for OK
 */
#define ACQUIRE_LOCK(l)      ({futex_down(l);0;})
#define RELEASE_LOCK(l)      futex_up(l)
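
/*
  Usage sketch (assumes HelenOS futex semantics from <futex.h>, where
  futex_down/futex_up cannot fail for a valid futex):

    static MLOCK_T example_lock = FUTEX_INITIALIZER;

    ACQUIRE_LOCK(&example_lock);   // futex_down(); expression yields 0
    // ... critical section ...
    RELEASE_LOCK(&example_lock);   // futex_up()
*/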

#if HAVE_MORECORE
static MLOCK_T morecore_mutex = FUTEX_INITIALIZER;
#endif /* HAVE_MORECORE */

static MLOCK_T magic_init_mutex = FUTEX_INITIALIZER;


#define USE_LOCK_BIT               (2U)
#else  /* USE_LOCKS */
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)
#endif /* USE_LOCKS */

#if USE_LOCKS && HAVE_MORECORE
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
#else /* USE_LOCKS && HAVE_MORECORE */
#define ACQUIRE_MORECORE_LOCK()
#define RELEASE_MORECORE_LOCK()
#endif /* USE_LOCKS && HAVE_MORECORE */

#if USE_LOCKS
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
#else  /* USE_LOCKS */
#define ACQUIRE_MAGIC_INIT_LOCK()
#define RELEASE_MAGIC_INIT_LOCK()
#endif /* USE_LOCKS */



/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 1)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for the sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
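
/*
  Example (assuming 32-bit size_t, so TWO_SIZE_T_SIZES == 8): the user
  pointer lies 8 bytes past the chunk base, just after the prev_foot
  and head words, and mem2chunk(chunk2mem(p)) == p for any chunk p.
*/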
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
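
/*
  Worked example (assuming 32-bit size_t, MALLOC_ALIGNMENT == 8 and
  FOOTERS == 0, so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):
    request2size(1)  == 16   (below MIN_REQUEST, rounded up to minimum)
    request2size(13) == 24   ((13 + 4 + 7) & ~7)
    request2size(20) == 24   ((20 + 4 + 7) & ~7)
*/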


/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set, and otherwise holds the offset of the base of the
  mmapped region to the base of the chunk.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
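
/*
  A fencepost is a degenerate "chunk" written at the end of a segment;
  with this head value it reads as an in-use chunk of size SIZE_T_SIZE,
  so traversal and coalescing code never walks past segment ends.
*/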

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
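
/*
  Example head encoding (assuming 32-bit size_t): an in-use 24-byte
  chunk whose previous neighbor is also in use has
    head == 24 | PINUSE_BIT | CINUSE_BIT == 0x1B,
  so chunksize(p) == 24 and both cinuse(p) and pinuse(p) are nonzero.
*/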

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
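
/*
  Sketch of boundary-tag navigation: next_chunk steps forward by the
  current chunk's size taken from its head; prev_chunk steps backward
  by prev_foot, which is only valid when pinuse(p) is clear (see the
  chunk diagrams above).
*/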

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */

/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small"  chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/
1168
 
1168
 
1169
struct malloc_tree_chunk {
1169
struct malloc_tree_chunk {
1170
  /* The first four fields must be compatible with malloc_chunk */
1170
  /* The first four fields must be compatible with malloc_chunk */
1171
  size_t                    prev_foot;
1171
  size_t                    prev_foot;
1172
  size_t                    head;
1172
  size_t                    head;
1173
  struct malloc_tree_chunk* fd;
1173
  struct malloc_tree_chunk* fd;
1174
  struct malloc_tree_chunk* bk;
1174
  struct malloc_tree_chunk* bk;
1175
 
1175
 
1176
  struct malloc_tree_chunk* child[2];
1176
  struct malloc_tree_chunk* child[2];
1177
  struct malloc_tree_chunk* parent;
1177
  struct malloc_tree_chunk* parent;
1178
  bindex_t                  index;
1178
  bindex_t                  index;
1179
};
1179
};
1180
 
1180
 
1181
typedef struct malloc_tree_chunk  tchunk;
1181
typedef struct malloc_tree_chunk  tchunk;
1182
typedef struct malloc_tree_chunk* tchunkptr;
1182
typedef struct malloc_tree_chunk* tchunkptr;
1183
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1183
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1184
 
1184
 
1185
/* A little helper macro for trees */
1185
/* A little helper macro for trees */
1186
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1186
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
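
/*
  Illustrative sketch (not part of dlmalloc proper): the leftmost-path
  walk described above, tracking the smallest chunk seen along the
  way.  The helper name is hypothetical; dlmalloc inlines this logic
  in tmalloc_large/tmalloc_small rather than calling a function.
*/
static tchunkptr smallest_chunk_in_tree(tchunkptr t) {
  tchunkptr least = t;
  while (t != 0) {
    if (chunksize(t) < chunksize(least))
      least = t;                /* best candidate seen so far */
    t = leftmost_child(t);      /* right child only when left is null */
  }
  return least;
}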

/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from the system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;
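
/*
  A minimal sketch of the flag-based release policy described above.
  The function name is hypothetical -- dlmalloc's real trimming lives
  in sys_trim and the segment-release paths -- but the dispatch on the
  two flag bits is the policy stated in the comment.
  Returns 0 = leave alone, 1 = release via munmap, 2 = negative MORECORE.
*/
static int segment_release_policy_sketch(msegmentptr sp) {
  if (is_extern_segment(sp))
    return 0;   /* EXTERN_BIT: not ours; never deallocate or merge */
  if (is_mmapped_segment(sp))
    return 1;   /* IS_MMAPPED_BIT: trim/release using munmap */
  return 2;     /* neither: MORECORE'd; trim with negative arguments */
}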

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
    larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on a 64-bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
};

typedef struct malloc_state*    mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)
#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
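
/*
  Worked example (assuming a 4096-byte page size): page_align(1)
  yields 4096, while page_align(4096) yields 8192.  Note the macro
  adds a full page_size rather than page_size - SIZE_T_ONE, so a size
  that is already page-aligned is still rounded up by one more page.
*/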

/*  True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* -------------------------------  Hooks -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS

/* Ensure locks are initialized */
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())

#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */

/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */

/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))

/* assign tree index for size S to variable I */
#if defined(__GNUC__) && defined(i386)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    asm("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */

/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
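
/*
  Worked example (TREEBIN_SHIFT == 8, as defined above):
  minsize_for_tree_index(0) == 0x100 and minsize_for_tree_index(1)
  == 0x180, so treebin 0 covers sizes 0x100 <= x < 0x180 and treebin
  1 covers 0x180 <= x < 0x200 -- the "2 equally spaced treebins per
  power of two" described for TreeBins above.  compute_tree_index
  maps a size accordingly: the highest set bit selects the power of
  two, and one further bit selects the half-range.
*/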


/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* index corresponding to given bit */

#if defined(__GNUC__) && defined(i386)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  asm("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#else /* GNUC */
#if  USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else /* USE_BUILTIN_FFS */
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* USE_BUILTIN_FFS */
#endif /* GNUC */

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))


/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offset from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to
  guarantee that the mstate controlling malloc/free is intact.  This
  is a streamlined version of the approach described by William
  Robertson et al in "Run-time Detection of Heap-based Overflows"
  LISA'03 http://www.usenix.org/events/lisa03/tech/robertson.html
  The footer of an inuse chunk holds the xor of its mstate and a
  random seed, which is checked upon calls to free() and realloc().
  This is (probabilistically) unguessable from outside the program,
  but can be computed by any code successfully malloc'ing any chunk,
  so does not itself provide protection against code that has already
  broken security through some other means.  Unlike Robertson et al,
  we always dynamically check addresses of all offset chunks
  (previous, next, etc). This turns out to be cheaper than relying on
  hashes.
*/

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */


/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
 mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
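
/*
  A minimal sketch of the FOOTERS check described above: on free(),
  the owning mstate is recovered by xor'ing the footer with the magic
  seed, then validated before any other field is trusted.  The
  function name is hypothetical; dlmalloc performs this inline at the
  top of free() and realloc().
*/
#if FOOTERS
static void footer_check_sketch(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);  /* footer ^ mparams.magic */
  if (!ok_magic(fm)) {
    USAGE_ERROR_ACTION(fm, p);    /* foreign or corrupted chunk */
    return;
  }
  /* ... proceed to free p within fm ... */
}
#endif /* FOOTERS */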
1779
 
1779
 
1780
/* ---------------------------- setting mparams -------------------------- */
1780
/* ---------------------------- setting mparams -------------------------- */
1781
 
1781
 
1782
/* Initialize mparams */
1782
/* Initialize mparams */
1783
static int init_mparams(void) {
1783
static int init_mparams(void) {
1784
  if (mparams.page_size == 0) {
1784
  if (mparams.page_size == 0) {
1785
    size_t s;
1785
    size_t s;
1786
 
1786
 
1787
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1787
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1788
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1788
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1789
#if MORECORE_CONTIGUOUS
1789
#if MORECORE_CONTIGUOUS
1790
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1790
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1791
#else  /* MORECORE_CONTIGUOUS */
1791
#else  /* MORECORE_CONTIGUOUS */
1792
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1792
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1793
#endif /* MORECORE_CONTIGUOUS */
1793
#endif /* MORECORE_CONTIGUOUS */
1794
 
1794
 
1795
#if (FOOTERS && !INSECURE)
1795
#if (FOOTERS && !INSECURE)
1796
    {
1796
    {
1797
#if USE_DEV_RANDOM
1797
#if USE_DEV_RANDOM
1798
      int fd;
1798
      int fd;
1799
      unsigned char buf[sizeof(size_t)];
1799
      unsigned char buf[sizeof(size_t)];
1800
      /* Try to use /dev/urandom, else fall back on using time */
1800
      /* Try to use /dev/urandom, else fall back on using time */
1801
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1801
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1802
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1802
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1803
        s = *((size_t *) buf);
1803
        s = *((size_t *) buf);
1804
        close(fd);
1804
        close(fd);
1805
      }
1805
      }
1806
      else
1806
      else
1807
#endif /* USE_DEV_RANDOM */
1807
#endif /* USE_DEV_RANDOM */
1808
        s = (size_t)(time(0) ^ (size_t)0x55555555U);

      s |= (size_t)8U;    /* ensure nonzero */
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */

    }
#else /* (FOOTERS && !INSECURE) */
    s = (size_t)0x58585858U;
#endif /* (FOOTERS && !INSECURE) */
    ACQUIRE_MAGIC_INIT_LOCK();
    if (mparams.magic == 0) {
      mparams.magic = s;
      /* Set up lock for main malloc area */
      INITIAL_LOCK(&gm->mutex);
      gm->mflags = mparams.default_mflags;
    }
    RELEASE_MAGIC_INIT_LOCK();

#ifndef WIN32
    mparams.page_size = malloc_getpagesize;
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
                           DEFAULT_GRANULARITY : mparams.page_size);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      mparams.page_size = system_info.dwPageSize;
      mparams.granularity = system_info.dwAllocationGranularity;
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
      ABORT;
  }
  return 0;
}

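/*
  The configuration checks above lean on the classic bit trick that
  x & (x-1) clears the lowest set bit of x, so the expression is zero
  exactly when x has at most one bit set, i.e. is a power of two.
  A minimal standalone sketch of the same test (illustrative only):
*/
#if 0 /* illustration, not compiled */
static int is_power_of_two(size_t x) {
  /* e.g. 8 & 7 == 0 (power of two), but 12 & 11 == 8 (not) */
  return x != 0 && (x & (x - 1)) == 0;
}
#endif /* 0 */
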
/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (size_t)value;
  init_mparams();
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}

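/*
  change_mparam is reached through the public mallopt entry point
  (dlmallopt when the dl prefix is in use).  A sketch of typical tuning
  calls; note that an M_GRANULARITY value is accepted only if it is a
  power of two no smaller than the page size, otherwise 0 is returned:
*/
#if 0 /* illustration, not compiled */
void tune_allocator(void) {
  mallopt(M_GRANULARITY, 128 * 1024);  /* must be a power of two */
  mallopt(M_TRIM_THRESHOLD, -1);       /* (size_t)-1 disables trimming */
}
#endif /* 0 */
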
#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = chunksize(p);
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!next_pinuse(p));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert (!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert (next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}

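/*
  The final assertion in do_check_malloced_chunk bounds per-allocation
  overhead: a non-mmapped chunk serving a request of s bytes satisfies
  s <= sz < s + MIN_CHUNK_SIZE, since any larger remainder would have
  been split off as a separate free chunk.  As a worked example on a
  32-bit build without FOOTERS: a 20-byte request plus SIZE_T_SIZE (4)
  of overhead rounds up to a 24-byte chunk, and 24 < 20 + 16
  (MIN_CHUNK_SIZE), so the assertion holds.
*/
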
/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/*  Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/*  Check all the chunks in a smallbin.  */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}

/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

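/*
  The treebin half of bin_find mirrors how large chunks are filed: each
  treebin is a bitwise trie keyed on size, and the branch taken at each
  level comes from the next most significant size bit.  Pre-shifting by
  leftshift_for_tree_index(tidx) parks the first interesting bit at the
  top of the word so every step just reads the top bit and shifts.  An
  equivalent sketch using an explicit bit index (illustrative only):
*/
#if 0 /* illustration, not compiled */
static tchunkptr trie_lookup(tchunkptr t, size_t size, unsigned bit) {
  /* bit = position of the first size bit that distinguishes children */
  while (t != 0 && chunksize(t) != size) {
    t = t->child[(size >> bit) & 1];  /* 0: left subtree, 1: right */
    --bit;                            /* descend to the next bit */
  }
  return t;
}
#endif /* 0 */
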
/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}

/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    assert(m->topsize == chunksize(m->top));
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */

/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */

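/*
  Callers consume these totals through the public mallinfo entry point
  (dlmallinfo when the dl prefix is in use).  By construction
  uordblks + fordblks equals the footprint, which makes a cheap
  cross-check possible.  A usage sketch (illustrative only; the field
  type defaults to size_t in this version):
*/
#if 0 /* illustration, not compiled */
void report_heap(void) {
  struct mallinfo mi = mallinfo();
  fprintf(stderr, "in use bytes: %lu\n", (unsigned long)mi.uordblks);
  fprintf(stderr, "free bytes:   %lu\n", (unsigned long)mi.fordblks);
  fprintf(stderr, "keepcost:     %lu\n", (unsigned long)mi.keepcost);
}
#endif /* 0 */
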
static void internal_malloc_stats(mstate m) {
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}

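/*
  The three figures printed above always satisfy
  "in use" <= "system" <= "max system": used starts from the footprint,
  subtracts the top chunk and its overhead, then subtracts every free
  chunk found by the segment walk.  Output has this shape (values
  illustrative only):

    max system bytes =     528384
    system bytes     =     524288
    in use bytes     =     112640
*/
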
/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}

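/*
  insert_small_chunk performs a front insertion into the bin's circular
  doubly linked list: afterwards B <-> P <-> F, where F is the previous
  first chunk, or B itself when the bin was empty.  The same four
  pointer writes on a plain list type, as a sketch (illustrative only):
*/
#if 0 /* illustration, not compiled */
struct node { struct node *fd, *bk; };
static void push_front(struct node *B, struct node *P) {
  struct node *F = B->fd;  /* old first node; equals B for an empty bin */
  B->fd = P;
  F->bk = P;
  P->fd = F;
  P->bk = B;
}
#endif /* 0 */
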
/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}

/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}

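/*
  insert_large_chunk terminates in one of two ways: a size not yet
  present becomes a new leaf (parent set, fd/bk self-linked), while an
  exact size match is spliced into that node's circular same-size chain
  with parent == 0; only one node per size ever carries tree links, an
  invariant do_check_tree verifies.  The chain-splice case on plain
  structs, as a sketch (illustrative only):
*/
#if 0 /* illustration, not compiled */
struct tnode { struct tnode *fd, *bk, *parent; };
static void splice_after(struct tnode *T, struct tnode *X) {
  struct tnode *F = T->fd;
  T->fd = F->bk = X;   /* X now sits between T and F */
  X->fd = F;
  X->bk = T;
  X->parent = 0;       /* chained duplicates carry no tree links */
}
#endif /* 0 */
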
/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendants
     correspond properly to bit masks.  We use the rightmost descendant
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links) relink
     x's parent and children to x's replacement (or null if none).
*/

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F))) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}

/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */

/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).  There is also enough space
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
  the PINUSE bit so frees can be checked.
*/

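/*
  Concretely, freeing a direct-mmapped chunk rebuilds the MUNMAP
  arguments from that stored offset; mmap_alloc below stores
  prev_foot = offset | IS_MMAPPED_BIT, so masking off the flag bit
  gives the distance back to the start of the mapping.  A sketch of
  the reconstruction (illustrative only):
*/
#if 0 /* illustration, not compiled */
static void release_mmapped(mchunkptr p) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
  size_t psize  = chunksize(p);
  char*  base   = (char*)p - offset;          /* original mapping start */
  size_t length = psize + offset + MMAP_FOOT_PAD;
  CALL_MUNMAP(base, length);
}
#endif /* 0 */
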
/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
  size_t oldsize = chunksize(oldp);
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
                                         CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}

/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}

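/*
  After init_bins every smallbin header is a self-linked sentinel, so
  an empty bin is simply one whose header points at itself; the link
  and unlink macros above preserve this invariant.  The emptiness test
  this enables, as a sketch (the allocator itself consults the smallmap
  bitmap instead, which is equivalent but faster):
*/
#if 0 /* illustration, not compiled */
static int smallbin_is_empty(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  return b->fd == b;  /* sentinel linked only to itself */
}
#endif /* 0 */
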
#if PROCEED_ON_ERROR
2572
#if PROCEED_ON_ERROR
2573
 
2573
 
2574
/* default corruption action */
2574
/* default corruption action */
2575
static void reset_on_error(mstate m) {
2575
static void reset_on_error(mstate m) {
2576
  int i;
2576
  int i;
2577
  ++malloc_corruption_error_count;
2577
  ++malloc_corruption_error_count;
2578
  /* Reinitialize fields to forget about all memory */
2578
  /* Reinitialize fields to forget about all memory */
2579
  m->smallbins = m->treebins = 0;
2579
  m->smallbins = m->treebins = 0;
2580
  m->dvsize = m->topsize = 0;
2580
  m->dvsize = m->topsize = 0;
2581
  m->seg.base = 0;
2581
  m->seg.base = 0;
2582
  m->seg.size = 0;
2582
  m->seg.size = 0;
2583
  m->seg.next = 0;
2583
  m->seg.next = 0;
2584
  m->top = m->dv = 0;
2584
  m->top = m->dv = 0;
2585
  for (i = 0; i < NTREEBINS; ++i)
2585
  for (i = 0; i < NTREEBINS; ++i)
2586
    *treebin_at(m, i) = 0;
2586
    *treebin_at(m, i) = 0;
2587
  init_bins(m);
2587
  init_bins(m);
2588
}
2588
}
2589
#endif /* PROCEED_ON_ERROR */
2589
#endif /* PROCEED_ON_ERROR */
2590
 
2590
 
2591
/* Allocate chunk and prepend remainder with chunk in successor base. */
2591
/* Allocate chunk and prepend remainder with chunk in successor base. */
2592
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2592
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2593
                           size_t nb) {
2593
                           size_t nb) {
2594
  mchunkptr p = align_as_chunk(newbase);
2594
  mchunkptr p = align_as_chunk(newbase);
2595
  mchunkptr oldfirst = align_as_chunk(oldbase);
2595
  mchunkptr oldfirst = align_as_chunk(oldbase);
2596
  size_t psize = (char*)oldfirst - (char*)p;
2596
  size_t psize = (char*)oldfirst - (char*)p;
2597
  mchunkptr q = chunk_plus_offset(p, nb);
2597
  mchunkptr q = chunk_plus_offset(p, nb);
2598
  size_t qsize = psize - nb;
2598
  size_t qsize = psize - nb;
2599
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2599
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2600
 
2600
 
2601
  assert((char*)oldfirst > (char*)q);
2601
  assert((char*)oldfirst > (char*)q);
2602
  assert(pinuse(oldfirst));
2602
  assert(pinuse(oldfirst));
2603
  assert(qsize >= MIN_CHUNK_SIZE);
2603
  assert(qsize >= MIN_CHUNK_SIZE);
2604
 
2604
 
2605
  /* consolidate remainder with first chunk of old base */
2605
  /* consolidate remainder with first chunk of old base */
2606
  if (oldfirst == m->top) {
2606
  if (oldfirst == m->top) {
2607
    size_t tsize = m->topsize += qsize;
2607
    size_t tsize = m->topsize += qsize;
2608
    m->top = q;
2608
    m->top = q;
2609
    q->head = tsize | PINUSE_BIT;
2609
    q->head = tsize | PINUSE_BIT;
2610
    check_top_chunk(m, q);
2610
    check_top_chunk(m, q);
2611
  }
2611
  }
2612
  else if (oldfirst == m->dv) {
2612
  else if (oldfirst == m->dv) {
2613
    size_t dsize = m->dvsize += qsize;
2613
    size_t dsize = m->dvsize += qsize;
2614
    m->dv = q;
2614
    m->dv = q;
2615
    set_size_and_pinuse_of_free_chunk(q, dsize);
2615
    set_size_and_pinuse_of_free_chunk(q, dsize);
2616
  }
2616
  }
2617
  else {
2617
  else {
2618
    if (!cinuse(oldfirst)) {
2618
    if (!cinuse(oldfirst)) {
2619
      size_t nsize = chunksize(oldfirst);
2619
      size_t nsize = chunksize(oldfirst);
2620
      unlink_chunk(m, oldfirst, nsize);
2620
      unlink_chunk(m, oldfirst, nsize);
2621
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2621
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2622
      qsize += nsize;
2622
      qsize += nsize;
2623
    }
2623
    }
2624
    set_free_with_pinuse(q, qsize, oldfirst);
2624
    set_free_with_pinuse(q, qsize, oldfirst);
2625
    insert_chunk(m, q, qsize);
2625
    insert_chunk(m, q, qsize);
2626
    check_free_chunk(m, q);
2626
    check_free_chunk(m, q);
2627
  }
2627
  }
2628
 
2628
 
2629
  check_malloced_chunk(m, chunk2mem(p), nb);
2629
  check_malloced_chunk(m, chunk2mem(p), nb);
2630
  return chunk2mem(p);
2630
  return chunk2mem(p);
2631
}
2631
}
2632
 
2632
 
2633
 
2633
 
2634
/* Add a segment to hold a new noncontiguous region */
2634
/* Add a segment to hold a new noncontiguous region */
2635
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2635
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2636
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}

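/*
  Sketch of the old segment's tail after add_segment (illustrative, not
  normative): any prefix of the old top too small to hold the segment
  record is freed as an ordinary chunk, the malloc_segment record is
  embedded in an in-use chunk near old_end and pushed onto m->seg's
  list, and at least two FENCEPOST_HEAD words follow it so coalescing
  can never run past the end of the old segment.

      old_top                      csp                       old_end
        | freed prefix (if any)     | seg record | fenceposts |
*/
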
/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;

  init_mparams();

  /* Directly map large chunks */
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)
  */

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    size_t asize = 0;
    ACQUIRE_MORECORE_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        /* Can't call MORECORE if size is negative when treated as signed */
        if (asize < HALF_MAX_SIZE_T &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
      /* Use mem here only if it did contiguously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MORECORE_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
    size_t rsize = granularity_align(req);
    if (rsize > nb) { /* Fail if wraps around zero */
      char* mp = (char*)(CALL_MMAP(rsize));
      if (mp != CMFAIL) {
        tbase = mp;
        tsize = rsize;
        mmap_flag = IS_MMAPPED_BIT;
      }
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MORECORE_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MORECORE_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      m->seg.base = m->least_addr = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      init_bins(m);
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}

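/*
  Illustrative sizing for the contiguous path above, assuming the
  default 64KiB granularity and a TOP_FOOT_SIZE of a few dozen bytes:
  extending for nb == 102400 with m->topsize == 16384 requests
  granularity_align(102400 - 16384 + TOP_FOOT_SIZE + 1) == 131072
  bytes from MORECORE, i.e. two whole granules.
*/
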
/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    pred = sp;
    sp = next;
  }
  return released;
}

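/*
  A segment qualifies for unmapping only when its first chunk is free
  and reaches to within TOP_FOOT_SIZE of the segment's end, i.e. the
  whole segment is one free chunk plus trailing bookkeeping.  Such a
  chunk is large, so it lives either in m->dv or in a treebin; it is
  taken out of whichever one before CALL_MUNMAP, and reinserted if the
  unmap fails.
*/
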
static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, sp->size - extra, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + sp->size - extra, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MORECORE_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MORECORE_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}

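/*
  Worked example of the trim computation above (illustrative, with the
  default unit == 65536): if m->topsize - pad == 204800, then
  extra == ((204800 + 65535) / 65536 - 1) * 65536 == 3 * 65536 == 196608,
  so three whole granules are returned to the system and 8192 bytes
  (plus pad) stay in top.
*/
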
/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);

  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }

  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /* If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}

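/*
  The descent in tmalloc_large walks nb's significant bits from the
  top: at each level the current top bit of sizebits picks child[0]
  (bit 0) or child[1] (bit 1), mirroring how chunks were inserted,
  while rst remembers the deepest right subtree passed over.  If the
  walk bottoms out without an exact fit, rst is the least subtree
  whose sizes all exceed nb, and the leftmost-child scan then finds
  its smallest chunk.
*/
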
/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);

  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}

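/*
  tmalloc_small is only reached for a small request when no smallbin
  can satisfy it but m->treemap is nonempty.  Every treebin chunk is
  at least MIN_LARGE_SIZE bytes, which exceeds any small request, so
  the best-fit scan above cannot underflow, and the split remainder
  becomes the new dv via replace_dv.
*/
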
/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize | PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}

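/*
  In-place shrink example (illustrative, assuming MIN_CHUNK_SIZE == 32
  on a 64-bit build): shrinking a chunk with oldsize == 128 to
  nb == 48 leaves rsize == 80 >= MIN_CHUNK_SIZE, so the 80-byte tail
  is split off and released through internal_free.  Had rsize been
  below MIN_CHUNK_SIZE, the chunk would simply keep its slack.
*/
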
/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert(chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}

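/*
  Alignment normalization examples (illustrative, assuming
  MALLOC_ALIGNMENT == 16 and MIN_CHUNK_SIZE == 32): a request for
  alignment 24 is first raised to MIN_CHUNK_SIZE == 32, already a
  power of two; a request for alignment 48 is not a power of two, so
  the loop doubles a from 32 until it covers 48, yielding 64.  The
  over-request of nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD
  then guarantees an aligned spot whose leader is either empty or at
  least MIN_CHUNK_SIZE bytes.
*/
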
/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}

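/*
  Usage sketch for the independent_comalloc-style wrappers built on
  ialloc (illustrative only; the element sizes are hypothetical):

    size_t sizes[2] = { 64, 1000 * sizeof(int) };
    void*  parts[2];
    if (dlindependent_comalloc(2, sizes, parts) != 0) {
      ... parts[0] and parts[1] are carved from one aggregate chunk
          but may each be freed independently ...
    }
*/
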
/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }
3498
 
3498
 
3499
    else if (nb < gm->topsize) { /* Split top */
3499
    else if (nb < gm->topsize) { /* Split top */
3500
      size_t rsize = gm->topsize -= nb;
3500
      size_t rsize = gm->topsize -= nb;
3501
      mchunkptr p = gm->top;
3501
      mchunkptr p = gm->top;
3502
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3502
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3503
      r->head = rsize | PINUSE_BIT;
3503
      r->head = rsize | PINUSE_BIT;
3504
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3504
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3505
      mem = chunk2mem(p);
3505
      mem = chunk2mem(p);
3506
      check_top_chunk(gm, gm->top);
3506
      check_top_chunk(gm, gm->top);
3507
      check_malloced_chunk(gm, mem, nb);
3507
      check_malloced_chunk(gm, mem, nb);
3508
      goto postaction;
3508
      goto postaction;
3509
    }
3509
    }
3510
 
3510
 
3511
    mem = sys_alloc(gm, nb);
3511
    mem = sys_alloc(gm, nb);
3512
 
3512
 
3513
  postaction:
3513
  postaction:
3514
    POSTACTION(gm);
3514
    POSTACTION(gm);
3515
    return mem;
3515
    return mem;
3516
  }
3516
  }
3517
 
3517
 
3518
  return 0;
3518
  return 0;
3519
}
3519
}
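
/*
  A standalone sketch (not part of the allocator) of the smallbin bitmap
  trick used in dlmalloc above: after shifting smallmap right by idx,
  bit 0 of smallbits reports whether bin idx is nonempty and bit 1
  reports bin idx+1, so "idx += ~smallbits & 1" advances to the
  neighboring bin exactly when the exact-fit bin is empty. The map
  value and bin indices below are made up for illustration.

      #include <stdio.h>

      int main(void) {
        unsigned smallmap = 0x28u;      // hypothetical: bins 3 and 5 nonempty
        unsigned idx = 2;               // request maps to bin 2
        unsigned smallbits = smallmap >> idx;
        if ((smallbits & 0x3u) != 0) {  // bin idx or idx+1 can serve it
          idx += ~smallbits & 1;        // bin 2 empty, bin 3 usable -> idx = 3
          printf("allocate from bin %u\n", idx);
        }
        return 0;
      }
*/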

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}
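
/*
  A toy illustration (values assumed; the names only mirror the macros
  in this file) of the boundary-tag head word that dlfree manipulates
  above: chunk sizes are always multiples of the alignment, so the two
  low bits of `head` are free to carry the PINUSE ("previous chunk in
  use") and CINUSE ("this chunk in use") flags, which is why tests
  like (next->head & INUSE_BITS) == INUSE_BITS need no extra fields.

      #include <stdio.h>
      #include <stddef.h>

      #define TOY_PINUSE ((size_t)1)    // previous chunk in use
      #define TOY_CINUSE ((size_t)2)    // this chunk in use

      int main(void) {
        size_t head = (size_t)48 | TOY_PINUSE | TOY_CINUSE;
        size_t size = head & ~(TOY_PINUSE | TOY_CINUSE);
        printf("size=%zu pinuse=%d cinuse=%d\n", size,
               (head & TOY_PINUSE) != 0, (head & TOY_CINUSE) != 0);
        return 0;
      }
*/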

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
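
/*
  A minimal self-contained sketch of the overflow check dlcalloc uses
  above: the product n * size can only overflow when at least one
  operand exceeds 16 bits, so the cheap mask test skips the division
  in the common small-argument case (dlcalloc's enclosing n != 0 test
  becomes a short-circuit guard here). Function and values are
  illustrative only.

      #include <stdio.h>
      #include <stddef.h>

      static size_t checked_mul(size_t n, size_t size) {
        size_t req = n * size;
        if (n != 0 && ((n | size) & ~(size_t)0xffff) && req / n != size)
          return (size_t)-1;              // force downstream malloc failure
        return req;
      }

      int main(void) {
        printf("%zu\n", checked_mul(1000, 8));        // 8000
        printf("%zu\n", checked_mul((size_t)-1, 2));  // overflow flagged
        return 0;
      }
*/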

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
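
/*
  A quick check of the rounding expression dlpvalloc uses above,
  assuming a hypothetical 4096-byte page: (bytes + pagesz - 1) &
  ~(pagesz - 1) rounds any request up to the next multiple of the
  page size. This works because pagesz is a power of two.

      #include <assert.h>
      #include <stddef.h>

      static size_t round_up(size_t b, size_t pagesz) {
        return (b + pagesz - 1) & ~(pagesz - 1);
      }

      int main(void) {
        size_t pagesz = 4096;                  // assumed page size
        assert(round_up(1, pagesz)    == 4096);
        assert(round_up(4096, pagesz) == 4096);
        assert(round_up(4097, pagesz) == 8192);
        return 0;
      }
*/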

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats(void) {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = IS_MMAPPED_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}
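
/*
  A minimal usage sketch for the mspace API defined above (only
  meaningful when this file is compiled with MSPACES enabled; the
  sizes are arbitrary): a private heap is created, used like malloc,
  and then released wholesale, so individual frees become optional.

      void mspace_example(void) {
        mspace ms = create_mspace(0, 0);   // default capacity, no locking
        if (ms != 0) {
          void* a = mspace_malloc(ms, 128);
          void* b = mspace_calloc(ms, 16, sizeof(int));
          mspace_free(ms, a);              // freeing individually works...
          (void)b;
          destroy_mspace(ms);              // ...but this reclaims everything
        }
      }
*/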

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/


void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else { /* report error only when the magic check fails */
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else { /* report error only when the magic check fails */
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
4246
 
4246
 
4247
 
4247
 
4248
/* -----------------------------------------------------------------------
4248
/* -----------------------------------------------------------------------
4249
History:
4249
History:
4250
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4250
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4251
      * Add max_footprint functions
4251
      * Add max_footprint functions
4252
      * Ensure all appropriate literals are size_t
4252
      * Ensure all appropriate literals are size_t
4253
      * Fix conditional compilation problem for some #define settings
4253
      * Fix conditional compilation problem for some #define settings
4254
      * Avoid concatenating segments with the one provided
4254
      * Avoid concatenating segments with the one provided
4255
        in create_mspace_with_base
4255
        in create_mspace_with_base
4256
      * Rename some variables to avoid compiler shadowing warnings
4256
      * Rename some variables to avoid compiler shadowing warnings
4257
      * Use explicit lock initialization.
4257
      * Use explicit lock initialization.
4258
      * Better handling of sbrk interference.
4258
      * Better handling of sbrk interference.
4259
      * Simplify and fix segment insertion, trimming and mspace_destroy
4259
      * Simplify and fix segment insertion, trimming and mspace_destroy
4260
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4260
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4261
      * Thanks especially to Dennis Flanagan for help on these.
4261
      * Thanks especially to Dennis Flanagan for help on these.
4262
 
4262
 
4263
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4263
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4264
      * Fix memalign brace error.
4264
      * Fix memalign brace error.
4265
 
4265
 
4266
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4266
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4267
      * Fix improper #endif nesting in C++
4267
      * Fix improper #endif nesting in C++
4268
      * Add explicit casts needed for C++
4268
      * Add explicit casts needed for C++
4269
 
4269
 
4270
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * New WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Lu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from H.J. Lu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/

/** @}
 */