Subversion Repositories HelenOS

Rev

Rev 1963 | Only display areas with differences | Ignore whitespace | Details | Blame | Last modification | View Log | RSS feed

Rev 1963 Rev 1968
1
/*
1
/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
8
 
9
   Note: There may be an updated version of this malloc obtainable at
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
11
         Check before installing!
12
 
12
 
13
* Quickstart
13
* Quickstart
14
 
14
 
15
  This library is all in one file to simplify the most common usage:
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
19
  compile-time and dynamic tuning options.
20
 
20
 
21
  For convenience, an include file for code using this malloc is at:
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
33
  with library versions.
34
 
34
 
35
* Vital statistics:
35
* Vital statistics:
36
 
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
42
       (e.g. 2.7.2) supporting these.)
43
 
43
 
44
  Alignment:                                     8 bytes (default)
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128bytes), at the expense of using more space.
47
       if necessary (up to 128bytes), at the expense of using more space.
48
 
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
51
       Each malloced chunk has a hidden word of overhead holding size
52
       and status information, and additional cross-check word
52
       and status information, and additional cross-check word
53
       if FOOTERS is defined.
53
       if FOOTERS is defined.
54
 
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., number of extra bytes
60
       The maximum overhead wastage (i.e., number of extra bytes
61
       allocated than were requested in malloc) is less than or equal
61
       allocated than were requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
66
 
67
  Security: static-safe; optionally more or less
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
78
       cannot, detect all possible programming errors.
79
 
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
89
       FOOTERS may also be worth using with MSPACES.)
90
 
90
 
91
       By default detected errors cause the program to abort (calling
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
99
       run out of memory because dropped memory is never reclaimed.
100
 
100
 
101
       If you don't like either of these options, you can define
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
       else. And if if you are sure that your program using malloc has
103
       else. And if if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
105
       which might (or might not) provide a small performance improvement.
106
 
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
115
       a version of this malloc. (See http://www.malloc.de).
116
 
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
124
       like memset.
125
 
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
128
       others as well.
129
 
129
 
130
* Overview of algorithms
130
* Overview of algorithms
131
 
131
 
132
  This is not the fastest, most space-conserving, most portable, or
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
136
  general-purpose allocator for malloc-intensive programs.
137
 
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256bytes, it deviates from best-fit when there is not an
142
  than 256bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
149
  possibly fragmenting memory used only for large chunks.)
150
 
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
156
  system allocation routines, which is often just 1.
157
 
157
 
158
  The implementation is not very modular and seriously overuses
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
161
  but now, enough of them seem not to.
162
 
162
 
163
  Some compilers issue a lot of warnings about code that is
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
166
  ignored.
167
 
167
 
168
  For a longer but out of date high-level description, see
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
170
 
171
* MSPACES
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
185
 
186
  You can similarly create thread-local allocators by storing
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
191
      return mspace_malloc(tlms, bytes);
192
    }
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
200
  originating spaces.
201
 
201
 
202
 -------------------------  Compile-time options ---------------------------
202
 -------------------------  Compile-time options ---------------------------
203
 
203
 
204
Be careful in setting #define values for numerical constants of type
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly casted.
206
to size_t precision unless they are explicitly casted.
207
 
207
 
208
WIN32                    default: defined if _WIN32 defined
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
210
  Otherwise defaults are for unix.
211
 
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
217
  the case of 8-byte alignment.
218
 
218
 
219
MSPACES                  default: 0 (false)
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
221
  This is only supported if HAVE_MMAP is true.
222
 
222
 
223
ONLY_MSPACES             default: 0 (false)
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
224
  If true, only compile in mspace versions, not regular versions.
225
 
225
 
226
USE_LOCKS                default: 0 (false)
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
229
  overridden on a per-mspace basis for mspace versions.)
230
 
230
 
231
FOOTERS                  default: 0
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
234
  space and time overhead.
235
 
235
 
236
INSECURE                 default: 0
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
238
 
239
USE_DL_PREFIX            default: NOT defined
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
242
  of a program, using your regular system malloc elsewhere.
243
 
243
 
244
ABORT                    default: defined as abort()
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
253
  can better optimize code conditionally calling it.
254
 
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to bypassed
256
  Controls whether detected bad addresses cause them to bypassed
257
  rather than aborting. If set, detected bad arguments to free and
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
264
  generates slower code than the default abort policy.
265
 
265
 
266
DEBUG                    default: NOT defined
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
275
  in the course of computing the summaries.
276
 
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
281
  ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
282
  which will usually make debugging easier.
282
  which will usually make debugging easier.
283
 
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc fails to be able to
285
  The action to take before "return 0" when malloc fails to be able to
286
  return memory because there is none available.
286
  return memory because there is none available.
287
 
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
289
  True if this system supports sbrk or an emulation of it.
290
 
290
 
291
MORECORE                  default: sbrk
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
302
  version of MORECORE.
303
 
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
308
  set it true anyway, since malloc copes with non-contiguities.
309
  Setting it false when definitely non-contiguous saves time
309
  Setting it false when definitely non-contiguous saves time
310
  and possibly wasted space it would take to discover this though.
310
  and possibly wasted space it would take to discover this though.
311
 
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
316
  arguments.
317
 
317
 
318
HAVE_MMAP                 default: 1 (true)
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
  able to unmap memory that may have be allocated using multiple calls
325
  able to unmap memory that may have be allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
326
  to MMAP, so long as they are adjacent.
327
 
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true realloc() uses mremap() to re-allocate large blocks and
329
  If true realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
330
  extend or shrink allocation spaces.
331
 
331
 
332
MMAP_CLEARS               default: 1 on unix
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
334
  for standard unix mmap using /dev/zero.
335
 
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
342
  slightly faster than the C version.)
343
 
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where page size is determined using getSystemInfo during
348
  if WIN32, where page size is determined using getSystemInfo during
349
  initialization.
349
  initialization.
350
 
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
353
  stamping footers. Otherwise, the current time is used.
354
 
354
 
355
NO_MALLINFO                default: 0
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
357
  of dealing with mismatches between system declarations and
358
  those in this file.
358
  those in this file.
359
 
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if  HAVE_USR_INCLUDE_MALLOC_H is not set
363
  size_t. The value is used only if  HAVE_USR_INCLUDE_MALLOC_H is not set
364
 
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
369
  realloc(p, 0).
370
 
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
375
  You might need to manually insert some of the declarations they provide.
376
 
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions to be called so
385
  this value to prevent system allocation functions to be called so
386
  often, especially if they are slow.  The value must be at least one
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
390
  "TOP_PAD")
391
 
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
413
  will immediately be returned to the system.
414
 
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
437
  disable mmap by setting to MAX_SIZE_T.
438
 
438
 
439
*/
439
*/
440
 
440
 
441
#include <sys/types.h>  /* For size_t */
441
#include <sys/types.h>  /* For size_t */
442
 
442
 
443
/** Non-default helenos customizations */
443
/** Non-default helenos customizations */
444
#define LACKS_FCNTL_H
444
#define LACKS_FCNTL_H
445
#define LACKS_SYS_MMAN_H
445
#define LACKS_SYS_MMAN_H
446
#define LACKS_SYS_PARAM_H
446
#define LACKS_SYS_PARAM_H
447
#undef HAVE_MMAP
447
#undef HAVE_MMAP
448
#define HAVE_MMAP 0
448
#define HAVE_MMAP 0
449
#define LACKS_ERRNO_H
449
#define LACKS_ERRNO_H
450
/* Set errno? */
450
/* Set errno? */
451
#undef MALLOC_FAILURE_ACTION
451
#undef MALLOC_FAILURE_ACTION
452
#define MALLOC_FAILURE_ACTION
452
#define MALLOC_FAILURE_ACTION
453
 
453
 
454
/* The maximum possible size_t value has all bits set */
454
/* The maximum possible size_t value has all bits set */
455
#define MAX_SIZE_T           (~(size_t)0)
455
#define MAX_SIZE_T           (~(size_t)0)
456
 
456
 
457
#define ONLY_MSPACES 0
457
#define ONLY_MSPACES 0
458
#define MSPACES 0
458
#define MSPACES 0
459
#define MALLOC_ALIGNMENT ((size_t)8U)
459
#define MALLOC_ALIGNMENT ((size_t)8U)
460
#define FOOTERS 0
460
#define FOOTERS 0
461
#define ABORT  abort()
461
#define ABORT  abort()
462
#define ABORT_ON_ASSERT_FAILURE 1
462
#define ABORT_ON_ASSERT_FAILURE 1
463
#define PROCEED_ON_ERROR 0
463
#define PROCEED_ON_ERROR 0
464
#define USE_LOCKS 1
464
#define USE_LOCKS 1
465
#define INSECURE 0
465
#define INSECURE 0
466
#define HAVE_MMAP 0
466
#define HAVE_MMAP 0
467
 
467
 
468
#define MMAP_CLEARS 1
468
#define MMAP_CLEARS 1
469
 
469
 
470
#define HAVE_MORECORE 1
470
#define HAVE_MORECORE 1
471
#define MORECORE_CONTIGUOUS 1
471
#define MORECORE_CONTIGUOUS 1
472
#define MORECORE sbrk
472
#define MORECORE sbrk
473
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
473
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
474
 
474
 
475
#ifndef DEFAULT_TRIM_THRESHOLD
475
#ifndef DEFAULT_TRIM_THRESHOLD
476
#ifndef MORECORE_CANNOT_TRIM
476
#ifndef MORECORE_CANNOT_TRIM
477
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
477
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
478
#else   /* MORECORE_CANNOT_TRIM */
478
#else   /* MORECORE_CANNOT_TRIM */
479
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
479
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
480
#endif  /* MORECORE_CANNOT_TRIM */
480
#endif  /* MORECORE_CANNOT_TRIM */
481
#endif  /* DEFAULT_TRIM_THRESHOLD */
481
#endif  /* DEFAULT_TRIM_THRESHOLD */
482
#ifndef DEFAULT_MMAP_THRESHOLD
482
#ifndef DEFAULT_MMAP_THRESHOLD
483
#if HAVE_MMAP
483
#if HAVE_MMAP
484
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
484
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
485
#else   /* HAVE_MMAP */
485
#else   /* HAVE_MMAP */
486
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
486
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
487
#endif  /* HAVE_MMAP */
487
#endif  /* HAVE_MMAP */
488
#endif  /* DEFAULT_MMAP_THRESHOLD */
488
#endif  /* DEFAULT_MMAP_THRESHOLD */
489
#ifndef USE_BUILTIN_FFS
489
#ifndef USE_BUILTIN_FFS
490
#define USE_BUILTIN_FFS 0
490
#define USE_BUILTIN_FFS 0
491
#endif  /* USE_BUILTIN_FFS */
491
#endif  /* USE_BUILTIN_FFS */
492
#ifndef USE_DEV_RANDOM
492
#ifndef USE_DEV_RANDOM
493
#define USE_DEV_RANDOM 0
493
#define USE_DEV_RANDOM 0
494
#endif  /* USE_DEV_RANDOM */
494
#endif  /* USE_DEV_RANDOM */
495
#ifndef NO_MALLINFO
495
#ifndef NO_MALLINFO
496
#define NO_MALLINFO 0
496
#define NO_MALLINFO 0
497
#endif  /* NO_MALLINFO */
497
#endif  /* NO_MALLINFO */
498
#ifndef MALLINFO_FIELD_TYPE
498
#ifndef MALLINFO_FIELD_TYPE
499
#define MALLINFO_FIELD_TYPE size_t
499
#define MALLINFO_FIELD_TYPE size_t
500
#endif  /* MALLINFO_FIELD_TYPE */
500
#endif  /* MALLINFO_FIELD_TYPE */
501
 
501
 
502
/*
502
/*
503
  mallopt tuning options.  SVID/XPG defines four standard parameter
503
  mallopt tuning options.  SVID/XPG defines four standard parameter
504
  numbers for mallopt, normally defined in malloc.h.  None of these
504
  numbers for mallopt, normally defined in malloc.h.  None of these
505
  are used in this malloc, so setting them has no effect. But this
505
  are used in this malloc, so setting them has no effect. But this
506
  malloc does support the following options.
506
  malloc does support the following options.
507
*/
507
*/
508
 
508
 
509
#define M_TRIM_THRESHOLD     (-1)
509
#define M_TRIM_THRESHOLD     (-1)
510
#define M_GRANULARITY        (-2)
510
#define M_GRANULARITY        (-2)
511
#define M_MMAP_THRESHOLD     (-3)
511
#define M_MMAP_THRESHOLD     (-3)
512
 
512
 
513
/*
513
/*
514
  ========================================================================
514
  ========================================================================
515
  To make a fully customizable malloc.h header file, cut everything
515
  To make a fully customizable malloc.h header file, cut everything
516
  above this line, put into file malloc.h, edit to suit, and #include it
516
  above this line, put into file malloc.h, edit to suit, and #include it
517
  on the next line, as well as in programs that use this malloc.
517
  on the next line, as well as in programs that use this malloc.
518
  ========================================================================
518
  ========================================================================
519
*/
519
*/
520
 
520
 
521
#include "malloc.h"
521
#include "malloc.h"
522
 
522
 
523
/*------------------------------ internal #includes ---------------------- */
523
/*------------------------------ internal #includes ---------------------- */
524
 
524
 
525
#include <stdio.h>       /* for printing in malloc_stats */
525
#include <stdio.h>       /* for printing in malloc_stats */
526
#include <string.h>
526
#include <string.h>
527
 
527
 
528
#ifndef LACKS_ERRNO_H
528
#ifndef LACKS_ERRNO_H
529
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
529
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
530
#endif /* LACKS_ERRNO_H */
530
#endif /* LACKS_ERRNO_H */
531
#if FOOTERS
531
#if FOOTERS
532
#include <time.h>        /* for magic initialization */
532
#include <time.h>        /* for magic initialization */
533
#endif /* FOOTERS */
533
#endif /* FOOTERS */
534
#ifndef LACKS_STDLIB_H
534
#ifndef LACKS_STDLIB_H
535
#include <stdlib.h>      /* for abort() */
535
#include <stdlib.h>      /* for abort() */
536
#endif /* LACKS_STDLIB_H */
536
#endif /* LACKS_STDLIB_H */
537
#ifdef DEBUG
537
#ifdef DEBUG
538
#if ABORT_ON_ASSERT_FAILURE
538
#if ABORT_ON_ASSERT_FAILURE
539
#define assert(x) {if(!(x)) {printf(#x);ABORT;}}
539
#define assert(x) {if(!(x)) {printf(#x);ABORT;}}
540
#else /* ABORT_ON_ASSERT_FAILURE */
540
#else /* ABORT_ON_ASSERT_FAILURE */
541
#include <assert.h>
541
#include <assert.h>
542
#endif /* ABORT_ON_ASSERT_FAILURE */
542
#endif /* ABORT_ON_ASSERT_FAILURE */
543
#else  /* DEBUG */
543
#else  /* DEBUG */
544
#define assert(x)
544
#define assert(x)
545
#endif /* DEBUG */
545
#endif /* DEBUG */
546
#if USE_BUILTIN_FFS
546
#if USE_BUILTIN_FFS
547
#ifndef LACKS_STRINGS_H
547
#ifndef LACKS_STRINGS_H
548
#include <strings.h>     /* for ffs */
548
#include <strings.h>     /* for ffs */
549
#endif /* LACKS_STRINGS_H */
549
#endif /* LACKS_STRINGS_H */
550
#endif /* USE_BUILTIN_FFS */
550
#endif /* USE_BUILTIN_FFS */
551
#if HAVE_MMAP
551
#if HAVE_MMAP
552
#ifndef LACKS_SYS_MMAN_H
552
#ifndef LACKS_SYS_MMAN_H
553
#include <sys/mman.h>    /* for mmap */
553
#include <sys/mman.h>    /* for mmap */
554
#endif /* LACKS_SYS_MMAN_H */
554
#endif /* LACKS_SYS_MMAN_H */
555
#ifndef LACKS_FCNTL_H
555
#ifndef LACKS_FCNTL_H
556
#include <fcntl.h>
556
#include <fcntl.h>
557
#endif /* LACKS_FCNTL_H */
557
#endif /* LACKS_FCNTL_H */
558
#endif /* HAVE_MMAP */
558
#endif /* HAVE_MMAP */
559
#if HAVE_MORECORE
559
#if HAVE_MORECORE
560
#ifndef LACKS_UNISTD_H
560
#ifndef LACKS_UNISTD_H
561
#include <unistd.h>     /* for sbrk */
561
#include <unistd.h>     /* for sbrk */
562
#else /* LACKS_UNISTD_H */
562
#else /* LACKS_UNISTD_H */
563
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
563
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
564
extern void*     sbrk(ptrdiff_t);
564
extern void*     sbrk(ptrdiff_t);
565
#endif /* FreeBSD etc */
565
#endif /* FreeBSD etc */
566
#endif /* LACKS_UNISTD_H */
566
#endif /* LACKS_UNISTD_H */
567
#endif /* HAVE_MMAP */
567
#endif /* HAVE_MMAP */
568
 
568
 
569
#ifndef WIN32
569
#ifndef WIN32
570
#ifndef malloc_getpagesize
570
#ifndef malloc_getpagesize
571
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
571
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
572
#    ifndef _SC_PAGE_SIZE
572
#    ifndef _SC_PAGE_SIZE
573
#      define _SC_PAGE_SIZE _SC_PAGESIZE
573
#      define _SC_PAGE_SIZE _SC_PAGESIZE
574
#    endif
574
#    endif
575
#  endif
575
#  endif
576
#  ifdef _SC_PAGE_SIZE
576
#  ifdef _SC_PAGE_SIZE
577
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
577
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
578
#  else
578
#  else
579
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
579
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
580
       extern size_t getpagesize();
580
       extern size_t getpagesize();
581
#      define malloc_getpagesize getpagesize()
581
#      define malloc_getpagesize getpagesize()
582
#    else
582
#    else
583
#      ifdef WIN32 /* use supplied emulation of getpagesize */
583
#      ifdef WIN32 /* use supplied emulation of getpagesize */
584
#        define malloc_getpagesize getpagesize()
584
#        define malloc_getpagesize getpagesize()
585
#      else
585
#      else
586
#        ifndef LACKS_SYS_PARAM_H
586
#        ifndef LACKS_SYS_PARAM_H
587
#          include <sys/param.h>
587
#          include <sys/param.h>
588
#        endif
588
#        endif
589
#        ifdef EXEC_PAGESIZE
589
#        ifdef EXEC_PAGESIZE
590
#          define malloc_getpagesize EXEC_PAGESIZE
590
#          define malloc_getpagesize EXEC_PAGESIZE
591
#        else
591
#        else
592
#          ifdef NBPG
592
#          ifdef NBPG
593
#            ifndef CLSIZE
593
#            ifndef CLSIZE
594
#              define malloc_getpagesize NBPG
594
#              define malloc_getpagesize NBPG
595
#            else
595
#            else
596
#              define malloc_getpagesize (NBPG * CLSIZE)
596
#              define malloc_getpagesize (NBPG * CLSIZE)
597
#            endif
597
#            endif
598
#          else
598
#          else
599
#            ifdef NBPC
599
#            ifdef NBPC
600
#              define malloc_getpagesize NBPC
600
#              define malloc_getpagesize NBPC
601
#            else
601
#            else
602
#              ifdef PAGESIZE
602
#              ifdef PAGESIZE
603
#                define malloc_getpagesize PAGESIZE
603
#                define malloc_getpagesize PAGESIZE
604
#              else /* just guess */
604
#              else /* just guess */
605
#                define malloc_getpagesize ((size_t)4096U)
605
#                define malloc_getpagesize ((size_t)4096U)
606
#              endif
606
#              endif
607
#            endif
607
#            endif
608
#          endif
608
#          endif
609
#        endif
609
#        endif
610
#      endif
610
#      endif
611
#    endif
611
#    endif
612
#  endif
612
#  endif
613
#endif
613
#endif
614
#endif
614
#endif
615
 
615
 
616
/* ------------------- size_t and alignment properties -------------------- */
616
/* ------------------- size_t and alignment properties -------------------- */
617
 
617
 
618
/* The byte and bit size of a size_t */
618
/* The byte and bit size of a size_t */
619
#define SIZE_T_SIZE         (sizeof(size_t))
619
#define SIZE_T_SIZE         (sizeof(size_t))
620
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
620
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
621
 
621
 
622
/* Some constants coerced to size_t */
622
/* Some constants coerced to size_t */
623
/* Annoying but necessary to avoid errors on some plaftorms */
623
/* Annoying but necessary to avoid errors on some plaftorms */
624
#define SIZE_T_ZERO         ((size_t)0)
624
#define SIZE_T_ZERO         ((size_t)0)
625
#define SIZE_T_ONE          ((size_t)1)
625
#define SIZE_T_ONE          ((size_t)1)
626
#define SIZE_T_TWO          ((size_t)2)
626
#define SIZE_T_TWO          ((size_t)2)
627
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
627
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
628
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
628
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
629
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
629
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
630
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
630
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
631
 
631
 
632
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
632
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
633
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
633
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
634
 
634
 
635
/* True if address a has acceptable alignment */
635
/* True if address a has acceptable alignment */
636
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
636
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
637
 
637
 
638
/* the number of bytes to offset an address to align it */
638
/* the number of bytes to offset an address to align it */
639
#define align_offset(A)\
639
#define align_offset(A)\
640
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
640
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
641
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
641
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
642
 
642
 
643
/* -------------------------- MMAP preliminaries ------------------------- */
643
/* -------------------------- MMAP preliminaries ------------------------- */
644
 
644
 
645
/*
645
/*
646
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
646
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
647
   checks to fail so compiler optimizer can delete code rather than
647
   checks to fail so compiler optimizer can delete code rather than
648
   using so many "#if"s.
648
   using so many "#if"s.
649
*/
649
*/
650
 
650
 
651
 
651
 
652
/* MORECORE and MMAP must return MFAIL on failure */
652
/* MORECORE and MMAP must return MFAIL on failure */
653
#define MFAIL                ((void*)(MAX_SIZE_T))
653
#define MFAIL                ((void*)(MAX_SIZE_T))
654
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
654
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
655
 
655
 
656
#if !HAVE_MMAP
656
#if !HAVE_MMAP
657
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
657
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
658
#define USE_MMAP_BIT         (SIZE_T_ZERO)
658
#define USE_MMAP_BIT         (SIZE_T_ZERO)
659
#define CALL_MMAP(s)         MFAIL
659
#define CALL_MMAP(s)         MFAIL
660
#define CALL_MUNMAP(a, s)    (-1)
660
#define CALL_MUNMAP(a, s)    (-1)
661
#define DIRECT_MMAP(s)       MFAIL
661
#define DIRECT_MMAP(s)       MFAIL
662
 
662
 
663
#else /* HAVE_MMAP */
663
#else /* HAVE_MMAP */
664
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
664
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
665
#define USE_MMAP_BIT         (SIZE_T_ONE)
665
#define USE_MMAP_BIT         (SIZE_T_ONE)
666
 
666
 
667
#ifndef WIN32
667
#ifndef WIN32
668
#define CALL_MUNMAP(a, s)    munmap((a), (s))
668
#define CALL_MUNMAP(a, s)    munmap((a), (s))
669
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
669
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
670
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
670
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
671
#define MAP_ANONYMOUS        MAP_ANON
671
#define MAP_ANONYMOUS        MAP_ANON
672
#endif /* MAP_ANON */
672
#endif /* MAP_ANON */
673
#ifdef MAP_ANONYMOUS
673
#ifdef MAP_ANONYMOUS
674
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
674
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
675
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
675
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
676
#else /* MAP_ANONYMOUS */
676
#else /* MAP_ANONYMOUS */
677
/*
677
/*
678
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
678
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
679
   is unlikely to be needed, but is supplied just in case.
679
   is unlikely to be needed, but is supplied just in case.
680
*/
680
*/
681
#define MMAP_FLAGS           (MAP_PRIVATE)
681
#define MMAP_FLAGS           (MAP_PRIVATE)
682
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
682
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
683
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
683
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
684
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
684
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
685
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
685
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
686
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
686
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
687
#endif /* MAP_ANONYMOUS */
687
#endif /* MAP_ANONYMOUS */
688
 
688
 
689
#define DIRECT_MMAP(s)       CALL_MMAP(s)
689
#define DIRECT_MMAP(s)       CALL_MMAP(s)
690
#else /* WIN32 */
690
#else /* WIN32 */
691
 
691
 
692
/* Win32 MMAP via VirtualAlloc */
692
/* Win32 MMAP via VirtualAlloc */
693
static void* win32mmap(size_t size) {
693
static void* win32mmap(size_t size) {
694
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
694
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
695
  return (ptr != 0)? ptr: MFAIL;
695
  return (ptr != 0)? ptr: MFAIL;
696
}
696
}
697
 
697
 
698
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
698
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
699
static void* win32direct_mmap(size_t size) {
699
static void* win32direct_mmap(size_t size) {
700
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
700
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
701
                           PAGE_READWRITE);
701
                           PAGE_READWRITE);
702
  return (ptr != 0)? ptr: MFAIL;
702
  return (ptr != 0)? ptr: MFAIL;
703
}
703
}
704
 
704
 
705
/* This function supports releasing coalesed segments */
705
/* This function supports releasing coalesed segments */
706
static int win32munmap(void* ptr, size_t size) {
706
static int win32munmap(void* ptr, size_t size) {
707
  MEMORY_BASIC_INFORMATION minfo;
707
  MEMORY_BASIC_INFORMATION minfo;
708
  char* cptr = ptr;
708
  char* cptr = ptr;
709
  while (size) {
709
  while (size) {
710
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
710
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
711
      return -1;
711
      return -1;
712
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
712
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
713
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
713
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
714
      return -1;
714
      return -1;
715
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
715
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
716
      return -1;
716
      return -1;
717
    cptr += minfo.RegionSize;
717
    cptr += minfo.RegionSize;
718
    size -= minfo.RegionSize;
718
    size -= minfo.RegionSize;
719
  }
719
  }
720
  return 0;
720
  return 0;
721
}
721
}
722
 
722
 
723
#define CALL_MMAP(s)         win32mmap(s)
723
#define CALL_MMAP(s)         win32mmap(s)
724
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
724
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
725
#define DIRECT_MMAP(s)       win32direct_mmap(s)
725
#define DIRECT_MMAP(s)       win32direct_mmap(s)
726
#endif /* WIN32 */
726
#endif /* WIN32 */
727
#endif /* HAVE_MMAP */
727
#endif /* HAVE_MMAP */
728
 
728
 
729
#if HAVE_MMAP && HAVE_MREMAP
729
#if HAVE_MMAP && HAVE_MREMAP
730
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
730
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
731
#else  /* HAVE_MMAP && HAVE_MREMAP */
731
#else  /* HAVE_MMAP && HAVE_MREMAP */
732
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
732
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
733
#endif /* HAVE_MMAP && HAVE_MREMAP */
733
#endif /* HAVE_MMAP && HAVE_MREMAP */
734
 
734
 
735
#if HAVE_MORECORE
735
#if HAVE_MORECORE
736
#define CALL_MORECORE(S)     MORECORE(S)
736
#define CALL_MORECORE(S)     MORECORE(S)
737
#else  /* HAVE_MORECORE */
737
#else  /* HAVE_MORECORE */
738
#define CALL_MORECORE(S)     MFAIL
738
#define CALL_MORECORE(S)     MFAIL
739
#endif /* HAVE_MORECORE */
739
#endif /* HAVE_MORECORE */
740
 
740
 
741
/* mstate bit set if continguous morecore disabled or failed */
741
/* mstate bit set if continguous morecore disabled or failed */
742
#define USE_NONCONTIGUOUS_BIT (4U)
742
#define USE_NONCONTIGUOUS_BIT (4U)
743
 
743
 
744
/* segment bit set in create_mspace_with_base */
744
/* segment bit set in create_mspace_with_base */
745
#define EXTERN_BIT            (8U)
745
#define EXTERN_BIT            (8U)
746
 
746
 
747
 
747
 
748
/* --------------------------- Lock preliminaries ------------------------ */
748
/* --------------------------- Lock preliminaries ------------------------ */
749
 
749
 
750
#if USE_LOCKS
750
#if USE_LOCKS
751
 
751
 
752
/*
752
/*
753
  When locks are defined, there are up to two global locks:
753
  When locks are defined, there are up to two global locks:
754
 
754
 
755
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
755
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
756
    MORECORE.  In many cases sys_alloc requires two calls, that should
756
    MORECORE.  In many cases sys_alloc requires two calls, that should
757
    not be interleaved with calls by other threads.  This does not
757
    not be interleaved with calls by other threads.  This does not
758
    protect against direct calls to MORECORE by other threads not
758
    protect against direct calls to MORECORE by other threads not
759
    using this lock, so there is still code to cope the best we can on
759
    using this lock, so there is still code to cope the best we can on
760
    interference.
760
    interference.
761
 
761
 
762
  * magic_init_mutex ensures that mparams.magic and other
762
  * magic_init_mutex ensures that mparams.magic and other
763
    unique mparams values are initialized only once.
763
    unique mparams values are initialized only once.
764
*/
764
*/
765
 
765
 
766
/* By default use posix locks */
766
/* By default use posix locks */
767
#include <futex.h>
767
#include <futex.h>
768
#define MLOCK_T atomic_t
768
#define MLOCK_T atomic_t
769
#define INITIAL_LOCK(l)      futex_initialize(l, 1)
769
#define INITIAL_LOCK(l)      futex_initialize(l, 1)
770
/* futex_down cannot fail, but can return different
770
/* futex_down cannot fail, but can return different
771
 * retvals for OK
771
 * retvals for OK
772
 */
772
 */
773
#define ACQUIRE_LOCK(l)      ({futex_down(l);0;})
773
#define ACQUIRE_LOCK(l)      ({futex_down(l);0;})
774
#define RELEASE_LOCK(l)      futex_up(l)
774
#define RELEASE_LOCK(l)      futex_up(l)
775
 
775
 
776
#if HAVE_MORECORE
776
#if HAVE_MORECORE
777
static MLOCK_T morecore_mutex = FUTEX_INITIALIZER;
777
static MLOCK_T morecore_mutex = FUTEX_INITIALIZER;
778
#endif /* HAVE_MORECORE */
778
#endif /* HAVE_MORECORE */
779
 
779
 
780
static MLOCK_T magic_init_mutex = FUTEX_INITIALIZER;
780
static MLOCK_T magic_init_mutex = FUTEX_INITIALIZER;
781
 
781
 
782
 
782
 
783
#define USE_LOCK_BIT               (2U)
783
#define USE_LOCK_BIT               (2U)
784
#else  /* USE_LOCKS */
784
#else  /* USE_LOCKS */
785
#define USE_LOCK_BIT               (0U)
785
#define USE_LOCK_BIT               (0U)
786
#define INITIAL_LOCK(l)
786
#define INITIAL_LOCK(l)
787
#endif /* USE_LOCKS */
787
#endif /* USE_LOCKS */
788
 
788
 
789
#if USE_LOCKS && HAVE_MORECORE
789
#if USE_LOCKS && HAVE_MORECORE
790
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
790
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
791
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
791
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
792
#else /* USE_LOCKS && HAVE_MORECORE */
792
#else /* USE_LOCKS && HAVE_MORECORE */
793
#define ACQUIRE_MORECORE_LOCK()
793
#define ACQUIRE_MORECORE_LOCK()
794
#define RELEASE_MORECORE_LOCK()
794
#define RELEASE_MORECORE_LOCK()
795
#endif /* USE_LOCKS && HAVE_MORECORE */
795
#endif /* USE_LOCKS && HAVE_MORECORE */
796
 
796
 
797
#if USE_LOCKS
797
#if USE_LOCKS
798
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
798
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
799
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
799
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
800
#else  /* USE_LOCKS */
800
#else  /* USE_LOCKS */
801
#define ACQUIRE_MAGIC_INIT_LOCK()
801
#define ACQUIRE_MAGIC_INIT_LOCK()
802
#define RELEASE_MAGIC_INIT_LOCK()
802
#define RELEASE_MAGIC_INIT_LOCK()
803
#endif /* USE_LOCKS */
803
#endif /* USE_LOCKS */
804
 
804
 
805
 
805
 
806
/* -----------------------  Chunk representations ------------------------ */
806
/* -----------------------  Chunk representations ------------------------ */
807
 
807
 
808
/*
808
/*
809
  (The following includes lightly edited explanations by Colin Plumb.)
809
  (The following includes lightly edited explanations by Colin Plumb.)
810
 
810
 
811
  The malloc_chunk declaration below is misleading (but accurate and
811
  The malloc_chunk declaration below is misleading (but accurate and
812
  necessary).  It declares a "view" into memory allowing access to
812
  necessary).  It declares a "view" into memory allowing access to
813
  necessary fields at known offsets from a given base.
813
  necessary fields at known offsets from a given base.
814
 
814
 
815
  Chunks of memory are maintained using a `boundary tag' method as
815
  Chunks of memory are maintained using a `boundary tag' method as
816
  originally described by Knuth.  (See the paper by Paul Wilson
816
  originally described by Knuth.  (See the paper by Paul Wilson
817
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
817
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
818
  techniques.)  Sizes of free chunks are stored both in the front of
818
  techniques.)  Sizes of free chunks are stored both in the front of
819
  each chunk and at the end.  This makes consolidating fragmented
819
  each chunk and at the end.  This makes consolidating fragmented
820
  chunks into bigger chunks fast.  The head fields also hold bits
820
  chunks into bigger chunks fast.  The head fields also hold bits
821
  representing whether chunks are free or in use.
821
  representing whether chunks are free or in use.
822
 
822
 
823
  Here are some pictures to make it clearer.  They are "exploded" to
823
  Here are some pictures to make it clearer.  They are "exploded" to
824
  show that the state of a chunk can be thought of as extending from
824
  show that the state of a chunk can be thought of as extending from
825
  the high 31 bits of the head field of its header through the
825
  the high 31 bits of the head field of its header through the
826
  prev_foot and PINUSE_BIT bit of the following chunk header.
826
  prev_foot and PINUSE_BIT bit of the following chunk header.
827
 
827
 
828
  A chunk that's in use looks like:
828
  A chunk that's in use looks like:
829
 
829
 
830
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
830
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
831
           | Size of previous chunk (if P = 1)                             |
831
           | Size of previous chunk (if P = 1)                             |
832
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
832
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
833
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
833
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
834
         | Size of this chunk                                         1| +-+
834
         | Size of this chunk                                         1| +-+
835
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
835
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
836
         |                                                               |
836
         |                                                               |
837
         +-                                                             -+
837
         +-                                                             -+
838
         |                                                               |
838
         |                                                               |
839
         +-                                                             -+
839
         +-                                                             -+
840
         |                                                               :
840
         |                                                               :
841
         +-      size - sizeof(size_t) available payload bytes          -+
841
         +-      size - sizeof(size_t) available payload bytes          -+
842
         :                                                               |
842
         :                                                               |
843
 chunk-> +-                                                             -+
843
 chunk-> +-                                                             -+
844
         |                                                               |
844
         |                                                               |
845
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
845
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
846
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
846
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
847
       | Size of next chunk (may or may not be in use)               | +-+
847
       | Size of next chunk (may or may not be in use)               | +-+
848
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
848
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
849
 
849
 
850
    And if it's free, it looks like this:
850
    And if it's free, it looks like this:
851
 
851
 
852
   chunk-> +-                                                             -+
852
   chunk-> +-                                                             -+
853
           | User payload (must be in use, or we would have merged!)       |
853
           | User payload (must be in use, or we would have merged!)       |
854
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
854
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
855
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
855
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
856
         | Size of this chunk                                         0| +-+
856
         | Size of this chunk                                         0| +-+
857
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
857
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
858
         | Next pointer                                                  |
858
         | Next pointer                                                  |
859
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
859
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
860
         | Prev pointer                                                  |
860
         | Prev pointer                                                  |
861
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
861
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
862
         |                                                               :
862
         |                                                               :
863
         +-      size - sizeof(struct chunk) unused bytes               -+
863
         +-      size - sizeof(struct chunk) unused bytes               -+
864
         :                                                               |
864
         :                                                               |
865
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
865
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
866
         | Size of this chunk                                            |
866
         | Size of this chunk                                            |
867
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
867
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
868
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
868
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
869
       | Size of next chunk (must be in use, or we would have merged)| +-+
869
       | Size of next chunk (must be in use, or we would have merged)| +-+
870
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
870
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
871
       |                                                               :
871
       |                                                               :
872
       +- User payload                                                -+
872
       +- User payload                                                -+
873
       :                                                               |
873
       :                                                               |
874
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
874
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
875
                                                                     |0|
875
                                                                     |0|
876
                                                                     +-+
876
                                                                     +-+
877
  Note that since we always merge adjacent free chunks, the chunks
877
  Note that since we always merge adjacent free chunks, the chunks
878
  adjacent to a free chunk must be in use.
878
  adjacent to a free chunk must be in use.
879
 
879
 
880
  Given a pointer to a chunk (which can be derived trivially from the
880
  Given a pointer to a chunk (which can be derived trivially from the
881
  payload pointer) we can, in O(1) time, find out whether the adjacent
881
  payload pointer) we can, in O(1) time, find out whether the adjacent
882
  chunks are free, and if so, unlink them from the lists that they
882
  chunks are free, and if so, unlink them from the lists that they
883
  are on and merge them with the current chunk.
883
  are on and merge them with the current chunk.
884
 
884
 
885
  Chunks always begin on even word boundaries, so the mem portion
885
  Chunks always begin on even word boundaries, so the mem portion
886
  (which is returned to the user) is also on an even word boundary, and
886
  (which is returned to the user) is also on an even word boundary, and
887
  thus at least double-word aligned.
887
  thus at least double-word aligned.
888
 
888
 
889
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
889
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
890
  chunk size (which is always a multiple of two words), is an in-use
890
  chunk size (which is always a multiple of two words), is an in-use
891
  bit for the *previous* chunk.  If that bit is *clear*, then the
891
  bit for the *previous* chunk.  If that bit is *clear*, then the
892
  word before the current chunk size contains the previous chunk
892
  word before the current chunk size contains the previous chunk
893
  size, and can be used to find the front of the previous chunk.
893
  size, and can be used to find the front of the previous chunk.
894
  The very first chunk allocated always has this bit set, preventing
894
  The very first chunk allocated always has this bit set, preventing
895
  access to non-existent (or non-owned) memory. If pinuse is set for
895
  access to non-existent (or non-owned) memory. If pinuse is set for
896
  any given chunk, then you CANNOT determine the size of the
896
  any given chunk, then you CANNOT determine the size of the
897
  previous chunk, and might even get a memory addressing fault when
897
  previous chunk, and might even get a memory addressing fault when
898
  trying to do so.
898
  trying to do so.
899
 
899
 
900
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
900
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
901
  the chunk size redundantly records whether the current chunk is
901
  the chunk size redundantly records whether the current chunk is
902
  inuse. This redundancy enables usage checks within free and realloc,
902
  inuse. This redundancy enables usage checks within free and realloc,
903
  and reduces indirection when freeing and consolidating chunks.
903
  and reduces indirection when freeing and consolidating chunks.
904
 
904
 
905
  Each freshly allocated chunk must have both cinuse and pinuse set.
905
  Each freshly allocated chunk must have both cinuse and pinuse set.
906
  That is, each allocated chunk borders either a previously allocated
906
  That is, each allocated chunk borders either a previously allocated
907
  and still in-use chunk, or the base of its memory arena. This is
907
  and still in-use chunk, or the base of its memory arena. This is
908
  ensured by making all allocations from the the `lowest' part of any
908
  ensured by making all allocations from the the `lowest' part of any
909
  found chunk.  Further, no free chunk physically borders another one,
909
  found chunk.  Further, no free chunk physically borders another one,
910
  so each free chunk is known to be preceded and followed by either
910
  so each free chunk is known to be preceded and followed by either
911
  inuse chunks or the ends of memory.
911
  inuse chunks or the ends of memory.
912
 
912
 
913
  Note that the `foot' of the current chunk is actually represented
913
  Note that the `foot' of the current chunk is actually represented
914
  as the prev_foot of the NEXT chunk. This makes it easier to
914
  as the prev_foot of the NEXT chunk. This makes it easier to
915
  deal with alignments etc but can be very confusing when trying
915
  deal with alignments etc but can be very confusing when trying
916
  to extend or adapt this code.
916
  to extend or adapt this code.
917
 
917
 
918
  The exceptions to all this are
918
  The exceptions to all this are
919
 
919
 
920
     1. The special chunk `top' is the top-most available chunk (i.e.,
920
     1. The special chunk `top' is the top-most available chunk (i.e.,
921
        the one bordering the end of available memory). It is treated
921
        the one bordering the end of available memory). It is treated
922
        specially.  Top is never included in any bin, is used only if
922
        specially.  Top is never included in any bin, is used only if
923
        no other chunk is available, and is released back to the
923
        no other chunk is available, and is released back to the
924
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
924
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
925
        the top chunk is treated as larger (and thus less well
925
        the top chunk is treated as larger (and thus less well
926
        fitting) than any other available chunk.  The top chunk
926
        fitting) than any other available chunk.  The top chunk
927
        doesn't update its trailing size field since there is no next
927
        doesn't update its trailing size field since there is no next
928
        contiguous chunk that would have to index off it. However,
928
        contiguous chunk that would have to index off it. However,
929
        space is still allocated for it (TOP_FOOT_SIZE) to enable
929
        space is still allocated for it (TOP_FOOT_SIZE) to enable
930
        separation or merging when space is extended.
930
        separation or merging when space is extended.
931
 
931
 
932
     3. Chunks allocated via mmap, which have the lowest-order bit
932
     3. Chunks allocated via mmap, which have the lowest-order bit
933
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
933
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
934
        PINUSE_BIT in their head fields.  Because they are allocated
934
        PINUSE_BIT in their head fields.  Because they are allocated
935
        one-by-one, each must carry its own prev_foot field, which is
935
        one-by-one, each must carry its own prev_foot field, which is
936
        also used to hold the offset this chunk has within its mmapped
936
        also used to hold the offset this chunk has within its mmapped
937
        region, which is needed to preserve alignment. Each mmapped
937
        region, which is needed to preserve alignment. Each mmapped
938
        chunk is trailed by the first two fields of a fake next-chunk
938
        chunk is trailed by the first two fields of a fake next-chunk
939
        for sake of usage checks.
939
        for sake of usage checks.
940
 
940
 
941
*/
941
*/
942
 
942
 
943
struct malloc_chunk {
943
struct malloc_chunk {
944
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
944
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
945
  size_t               head;       /* Size and inuse bits. */
945
  size_t               head;       /* Size and inuse bits. */
946
  struct malloc_chunk* fd;         /* double links -- used only if free. */
946
  struct malloc_chunk* fd;         /* double links -- used only if free. */
947
  struct malloc_chunk* bk;
947
  struct malloc_chunk* bk;
948
};
948
};
949
 
949
 
950
typedef struct malloc_chunk  mchunk;
950
typedef struct malloc_chunk  mchunk;
951
typedef struct malloc_chunk* mchunkptr;
951
typedef struct malloc_chunk* mchunkptr;
952
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
952
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
953
typedef unsigned int bindex_t;         /* Described below */
953
typedef unsigned int bindex_t;         /* Described below */
954
typedef unsigned int binmap_t;         /* Described below */
954
typedef unsigned int binmap_t;         /* Described below */
955
typedef unsigned int flag_t;           /* The type of various bit flag sets */
955
typedef unsigned int flag_t;           /* The type of various bit flag sets */
956
 
956
 
957
/* ------------------- Chunks sizes and alignments ----------------------- */
957
/* ------------------- Chunks sizes and alignments ----------------------- */
958
 
958
 
959
#define MCHUNK_SIZE         (sizeof(mchunk))
959
#define MCHUNK_SIZE         (sizeof(mchunk))
960
 
960
 
961
#if FOOTERS
961
#if FOOTERS
962
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
962
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
963
#else /* FOOTERS */
963
#else /* FOOTERS */
964
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
964
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
965
#endif /* FOOTERS */
965
#endif /* FOOTERS */
966
 
966
 
967
/* MMapped chunks need a second word of overhead ... */
967
/* MMapped chunks need a second word of overhead ... */
968
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
968
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
969
/* ... and additional padding for fake next-chunk at foot */
969
/* ... and additional padding for fake next-chunk at foot */
970
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
970
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
971
 
971
 
972
/* The smallest size we can malloc is an aligned minimal chunk */
972
/* The smallest size we can malloc is an aligned minimal chunk */
973
#define MIN_CHUNK_SIZE\
973
#define MIN_CHUNK_SIZE\
974
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
974
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
975
 
975
 
976
/* conversion from malloc headers to user pointers, and back */
976
/* conversion from malloc headers to user pointers, and back */
977
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
977
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
978
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
978
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
979
/* chunk associated with aligned address A */
979
/* chunk associated with aligned address A */
980
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
980
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
981
 
981
 
982
/* Bounds on request (not chunk) sizes. */
982
/* Bounds on request (not chunk) sizes. */
983
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
983
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
984
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
984
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
985
 
985
 
986
/* pad request bytes into a usable size */
986
/* pad request bytes into a usable size */
987
#define pad_request(req) \
987
#define pad_request(req) \
988
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
988
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
989
 
989
 
990
/* pad request, checking for minimum (but not maximum) */
990
/* pad request, checking for minimum (but not maximum) */
991
#define request2size(req) \
991
#define request2size(req) \
992
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
992
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
993
 
993
 
994
 
994
 
995
/* ------------------ Operations on head and foot fields ----------------- */
995
/* ------------------ Operations on head and foot fields ----------------- */
996
 
996
 
997
/*
997
/*
998
  The head field of a chunk is or'ed with PINUSE_BIT when previous
998
  The head field of a chunk is or'ed with PINUSE_BIT when previous
999
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
999
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1000
  use. If the chunk was obtained with mmap, the prev_foot field has
1000
  use. If the chunk was obtained with mmap, the prev_foot field has
1001
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1001
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1002
  mmapped region to the base of the chunk.
1002
  mmapped region to the base of the chunk.
1003
*/
1003
*/
1004
 
1004
 
1005
#define PINUSE_BIT          (SIZE_T_ONE)
1005
#define PINUSE_BIT          (SIZE_T_ONE)
1006
#define CINUSE_BIT          (SIZE_T_TWO)
1006
#define CINUSE_BIT          (SIZE_T_TWO)
1007
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1007
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1008
 
1008
 
1009
/* Head value for fenceposts */
1009
/* Head value for fenceposts */
1010
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1010
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1011
 
1011
 
1012
/* extraction of fields from head words */
1012
/* extraction of fields from head words */
1013
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1013
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1014
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1014
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1015
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1015
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1016
 
1016
 
1017
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1017
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1018
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1018
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1019
 
1019
 
1020
/* Treat space at ptr +/- offset as a chunk */
1020
/* Treat space at ptr +/- offset as a chunk */
1021
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1021
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1022
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1022
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1023
 
1023
 
1024
/* Ptr to next or previous physical malloc_chunk. */
1024
/* Ptr to next or previous physical malloc_chunk. */
1025
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1025
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1026
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1026
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1027
 
1027
 
1028
/* extract next chunk's pinuse bit */
1028
/* extract next chunk's pinuse bit */
1029
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1029
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1030
 
1030
 
1031
/* Get/set size at footer */
1031
/* Get/set size at footer */
1032
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1032
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1033
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1033
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1034
 
1034
 
1035
/* Set size, pinuse bit, and foot */
1035
/* Set size, pinuse bit, and foot */
1036
#define set_size_and_pinuse_of_free_chunk(p, s)\
1036
#define set_size_and_pinuse_of_free_chunk(p, s)\
1037
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1037
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1038
 
1038
 
1039
/* Set size, pinuse bit, foot, and clear next pinuse */
1039
/* Set size, pinuse bit, foot, and clear next pinuse */
1040
#define set_free_with_pinuse(p, s, n)\
1040
#define set_free_with_pinuse(p, s, n)\
1041
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1041
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1042
 
1042
 
1043
#define is_mmapped(p)\
1043
#define is_mmapped(p)\
1044
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1044
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1045
 
1045
 
1046
/* Get the internal overhead associated with chunk p */
1046
/* Get the internal overhead associated with chunk p */
1047
#define overhead_for(p)\
1047
#define overhead_for(p)\
1048
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1048
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1049
 
1049
 
1050
/* Return true if malloced space is not necessarily cleared */
1050
/* Return true if malloced space is not necessarily cleared */
1051
#if MMAP_CLEARS
1051
#if MMAP_CLEARS
1052
#define calloc_must_clear(p) (!is_mmapped(p))
1052
#define calloc_must_clear(p) (!is_mmapped(p))
1053
#else /* MMAP_CLEARS */
1053
#else /* MMAP_CLEARS */
1054
#define calloc_must_clear(p) (1)
1054
#define calloc_must_clear(p) (1)
1055
#endif /* MMAP_CLEARS */
1055
#endif /* MMAP_CLEARS */
1056
 
1056
 
1057
/* ---------------------- Overlaid data structures ----------------------- */
1057
/* ---------------------- Overlaid data structures ----------------------- */
1058
 
1058
 
1059
/*
1059
/*
1060
  When chunks are not in use, they are treated as nodes of either
1060
  When chunks are not in use, they are treated as nodes of either
1061
  lists or trees.
1061
  lists or trees.
1062
 
1062
 
1063
  "Small"  chunks are stored in circular doubly-linked lists, and look
1063
  "Small"  chunks are stored in circular doubly-linked lists, and look
1064
  like this:
1064
  like this:
1065
 
1065
 
1066
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1066
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1067
            |             Size of previous chunk                            |
1067
            |             Size of previous chunk                            |
1068
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1068
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1069
    `head:' |             Size of chunk, in bytes                         |P|
1069
    `head:' |             Size of chunk, in bytes                         |P|
1070
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1070
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1071
            |             Forward pointer to next chunk in list             |
1071
            |             Forward pointer to next chunk in list             |
1072
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1072
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1073
            |             Back pointer to previous chunk in list            |
1073
            |             Back pointer to previous chunk in list            |
1074
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1074
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1075
            |             Unused space (may be 0 bytes long)                .
1075
            |             Unused space (may be 0 bytes long)                .
1076
            .                                                               .
1076
            .                                                               .
1077
            .                                                               |
1077
            .                                                               |
1078
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1078
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1079
    `foot:' |             Size of chunk, in bytes                           |
1079
    `foot:' |             Size of chunk, in bytes                           |
1080
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1080
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1081
 
1081
 
1082
  Larger chunks are kept in a form of bitwise digital trees (aka
1082
  Larger chunks are kept in a form of bitwise digital trees (aka
1083
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1083
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1084
  free chunks greater than 256 bytes, their size doesn't impose any
1084
  free chunks greater than 256 bytes, their size doesn't impose any
1085
  constraints on user chunk sizes.  Each node looks like:
1085
  constraints on user chunk sizes.  Each node looks like:
1086
 
1086
 
1087
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1087
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1088
            |             Size of previous chunk                            |
1088
            |             Size of previous chunk                            |
1089
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1089
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1090
    `head:' |             Size of chunk, in bytes                         |P|
1090
    `head:' |             Size of chunk, in bytes                         |P|
1091
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1091
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1092
            |             Forward pointer to next chunk of same size        |
1092
            |             Forward pointer to next chunk of same size        |
1093
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1093
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1094
            |             Back pointer to previous chunk of same size       |
1094
            |             Back pointer to previous chunk of same size       |
1095
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1095
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1096
            |             Pointer to left child (child[0])                  |
1096
            |             Pointer to left child (child[0])                  |
1097
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1097
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1098
            |             Pointer to right child (child[1])                 |
1098
            |             Pointer to right child (child[1])                 |
1099
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1099
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1100
            |             Pointer to parent                                 |
1100
            |             Pointer to parent                                 |
1101
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1101
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1102
            |             bin index of this chunk                           |
1102
            |             bin index of this chunk                           |
1103
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1103
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1104
            |             Unused space                                      .
1104
            |             Unused space                                      .
1105
            .                                                               |
1105
            .                                                               |
1106
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1106
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1107
    `foot:' |             Size of chunk, in bytes                           |
1107
    `foot:' |             Size of chunk, in bytes                           |
1108
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1108
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1109
 
1109
 
1110
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1110
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1111
  of the same size are arranged in a circularly-linked list, with only
1111
  of the same size are arranged in a circularly-linked list, with only
1112
  the oldest chunk (the next to be used, in our FIFO ordering)
1112
  the oldest chunk (the next to be used, in our FIFO ordering)
1113
  actually in the tree.  (Tree members are distinguished by a non-null
1113
  actually in the tree.  (Tree members are distinguished by a non-null
1114
  parent pointer.)  If a chunk with the same size an an existing node
1114
  parent pointer.)  If a chunk with the same size an an existing node
1115
  is inserted, it is linked off the existing node using pointers that
1115
  is inserted, it is linked off the existing node using pointers that
1116
  work in the same way as fd/bk pointers of small chunks.
1116
  work in the same way as fd/bk pointers of small chunks.
1117
 
1117
 
1118
  Each tree contains a power of 2 sized range of chunk sizes (the
1118
  Each tree contains a power of 2 sized range of chunk sizes (the
1119
  smallest is 0x100 <= x < 0x180), which is is divided in half at each
1119
  smallest is 0x100 <= x < 0x180), which is is divided in half at each
1120
  tree level, with the chunks in the smaller half of the range (0x100
1120
  tree level, with the chunks in the smaller half of the range (0x100
1121
  <= x < 0x140 for the top nose) in the left subtree and the larger
1121
  <= x < 0x140 for the top nose) in the left subtree and the larger
1122
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1122
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1123
  done by inspecting individual bits.
1123
  done by inspecting individual bits.
1124
 
1124
 
1125
  Using these rules, each node's left subtree contains all smaller
1125
  Using these rules, each node's left subtree contains all smaller
1126
  sizes than its right subtree.  However, the node at the root of each
1126
  sizes than its right subtree.  However, the node at the root of each
1127
  subtree has no particular ordering relationship to either.  (The
1127
  subtree has no particular ordering relationship to either.  (The
1128
  dividing line between the subtree sizes is based on trie relation.)
1128
  dividing line between the subtree sizes is based on trie relation.)
1129
  If we remove the last chunk of a given size from the interior of the
1129
  If we remove the last chunk of a given size from the interior of the
1130
  tree, we need to replace it with a leaf node.  The tree ordering
1130
  tree, we need to replace it with a leaf node.  The tree ordering
1131
  rules permit a node to be replaced by any leaf below it.
1131
  rules permit a node to be replaced by any leaf below it.
1132
 
1132
 
1133
  The smallest chunk in a tree (a common operation in a best-fit
1133
  The smallest chunk in a tree (a common operation in a best-fit
1134
  allocator) can be found by walking a path to the leftmost leaf in
1134
  allocator) can be found by walking a path to the leftmost leaf in
1135
  the tree.  Unlike a usual binary tree, where we follow left child
1135
  the tree.  Unlike a usual binary tree, where we follow left child
1136
  pointers until we reach a null, here we follow the right child
1136
  pointers until we reach a null, here we follow the right child
1137
  pointer any time the left one is null, until we reach a leaf with
1137
  pointer any time the left one is null, until we reach a leaf with
1138
  both child pointers null. The smallest chunk in the tree will be
1138
  both child pointers null. The smallest chunk in the tree will be
1139
  somewhere along that path.
1139
  somewhere along that path.
1140
 
1140
 
1141
  The worst case number of steps to add, find, or remove a node is
1141
  The worst case number of steps to add, find, or remove a node is
1142
  bounded by the number of bits differentiating chunks within
1142
  bounded by the number of bits differentiating chunks within
1143
  bins. Under current bin calculations, this ranges from 6 up to 21
1143
  bins. Under current bin calculations, this ranges from 6 up to 21
1144
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1144
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1145
  is of course much better.
1145
  is of course much better.
1146
*/
1146
*/
1147
 
1147
 
1148
struct malloc_tree_chunk {
1148
struct malloc_tree_chunk {
1149
  /* The first four fields must be compatible with malloc_chunk */
1149
  /* The first four fields must be compatible with malloc_chunk */
1150
  size_t                    prev_foot;
1150
  size_t                    prev_foot;
1151
  size_t                    head;
1151
  size_t                    head;
1152
  struct malloc_tree_chunk* fd;
1152
  struct malloc_tree_chunk* fd;
1153
  struct malloc_tree_chunk* bk;
1153
  struct malloc_tree_chunk* bk;
1154
 
1154
 
1155
  struct malloc_tree_chunk* child[2];
1155
  struct malloc_tree_chunk* child[2];
1156
  struct malloc_tree_chunk* parent;
1156
  struct malloc_tree_chunk* parent;
1157
  bindex_t                  index;
1157
  bindex_t                  index;
1158
};
1158
};
1159
 
1159
 
1160
typedef struct malloc_tree_chunk  tchunk;
1160
typedef struct malloc_tree_chunk  tchunk;
1161
typedef struct malloc_tree_chunk* tchunkptr;
1161
typedef struct malloc_tree_chunk* tchunkptr;
1162
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1162
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1163
 
1163
 
1164
/* A little helper macro for trees */
1164
/* A little helper macro for trees */
1165
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1165
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1166
 
1166
 
1167
/* ----------------------------- Segments -------------------------------- */
1167
/* ----------------------------- Segments -------------------------------- */
1168
 
1168
 
1169
/*
1169
/*
1170
  Each malloc space may include non-contiguous segments, held in a
1170
  Each malloc space may include non-contiguous segments, held in a
1171
  list headed by an embedded malloc_segment record representing the
1171
  list headed by an embedded malloc_segment record representing the
1172
  top-most space. Segments also include flags holding properties of
1172
  top-most space. Segments also include flags holding properties of
1173
  the space. Large chunks that are directly allocated by mmap are not
1173
  the space. Large chunks that are directly allocated by mmap are not
1174
  included in this list. They are instead independently created and
1174
  included in this list. They are instead independently created and
1175
  destroyed without otherwise keeping track of them.
1175
  destroyed without otherwise keeping track of them.
1176
 
1176
 
1177
  Segment management mainly comes into play for spaces allocated by
1177
  Segment management mainly comes into play for spaces allocated by
1178
  MMAP.  Any call to MMAP might or might not return memory that is
1178
  MMAP.  Any call to MMAP might or might not return memory that is
1179
  adjacent to an existing segment.  MORECORE normally contiguously
1179
  adjacent to an existing segment.  MORECORE normally contiguously
1180
  extends the current space, so this space is almost always adjacent,
1180
  extends the current space, so this space is almost always adjacent,
1181
  which is simpler and faster to deal with. (This is why MORECORE is
1181
  which is simpler and faster to deal with. (This is why MORECORE is
1182
  used preferentially to MMAP when both are available -- see
1182
  used preferentially to MMAP when both are available -- see
1183
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1183
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1184
  hinting mechanisms (inconsistently) supported in various
1184
  hinting mechanisms (inconsistently) supported in various
1185
  implementations of unix mmap, or distinguish reserving from
1185
  implementations of unix mmap, or distinguish reserving from
1186
  committing memory. Instead, we just ask for space, and exploit
1186
  committing memory. Instead, we just ask for space, and exploit
1187
  contiguity when we get it.  It is probably possible to do
1187
  contiguity when we get it.  It is probably possible to do
1188
  better than this on some systems, but no general scheme seems
1188
  better than this on some systems, but no general scheme seems
1189
  to be significantly better.
1189
  to be significantly better.
1190
 
1190
 
1191
  Management entails a simpler variant of the consolidation scheme
1191
  Management entails a simpler variant of the consolidation scheme
1192
  used for chunks to reduce fragmentation -- new adjacent memory is
1192
  used for chunks to reduce fragmentation -- new adjacent memory is
1193
  normally prepended or appended to an existing segment. However,
1193
  normally prepended or appended to an existing segment. However,
1194
  there are limitations compared to chunk consolidation that mostly
1194
  there are limitations compared to chunk consolidation that mostly
1195
  reflect the fact that segment processing is relatively infrequent
1195
  reflect the fact that segment processing is relatively infrequent
1196
  (occurring only when getting memory from system) and that we
1196
  (occurring only when getting memory from system) and that we
1197
  don't expect to have huge numbers of segments:
1197
  don't expect to have huge numbers of segments:
1198
 
1198
 
1199
  * Segments are not indexed, so traversal requires linear scans.  (It
1199
  * Segments are not indexed, so traversal requires linear scans.  (It
1200
    would be possible to index these, but is not worth the extra
1200
    would be possible to index these, but is not worth the extra
1201
    overhead and complexity for most programs on most platforms.)
1201
    overhead and complexity for most programs on most platforms.)
1202
  * New segments are only appended to old ones when holding top-most
1202
  * New segments are only appended to old ones when holding top-most
1203
    memory; if they cannot be prepended to others, they are held in
1203
    memory; if they cannot be prepended to others, they are held in
1204
    different segments.
1204
    different segments.
1205
 
1205
 
1206
  Except for the top-most segment of an mstate, each segment record
1206
  Except for the top-most segment of an mstate, each segment record
1207
  is kept at the tail of its segment. Segments are added by pushing
1207
  is kept at the tail of its segment. Segments are added by pushing
1208
  segment records onto the list headed by &mstate.seg for the
1208
  segment records onto the list headed by &mstate.seg for the
1209
  containing mstate.
1209
  containing mstate.
1210
 
1210
 
1211
  Segment flags control allocation/merge/deallocation policies:
1211
  Segment flags control allocation/merge/deallocation policies:
1212
  * If EXTERN_BIT set, then we did not allocate this segment,
1212
  * If EXTERN_BIT set, then we did not allocate this segment,
1213
    and so should not try to deallocate or merge with others.
1213
    and so should not try to deallocate or merge with others.
1214
    (This currently holds only for the initial segment passed
1214
    (This currently holds only for the initial segment passed
1215
    into create_mspace_with_base.)
1215
    into create_mspace_with_base.)
1216
  * If IS_MMAPPED_BIT set, the segment may be merged with
1216
  * If IS_MMAPPED_BIT set, the segment may be merged with
1217
    other surrounding mmapped segments and trimmed/de-allocated
1217
    other surrounding mmapped segments and trimmed/de-allocated
1218
    using munmap.
1218
    using munmap.
1219
  * If neither bit is set, then the segment was obtained using
1219
  * If neither bit is set, then the segment was obtained using
1220
    MORECORE so can be merged with surrounding MORECORE'd segments
1220
    MORECORE so can be merged with surrounding MORECORE'd segments
1221
    and deallocated/trimmed using MORECORE with negative arguments.
1221
    and deallocated/trimmed using MORECORE with negative arguments.
1222
*/
1222
*/
1223
 
1223
 
1224
struct malloc_segment {
1224
struct malloc_segment {
1225
  char*        base;             /* base address */
1225
  char*        base;             /* base address */
1226
  size_t       size;             /* allocated size */
1226
  size_t       size;             /* allocated size */
1227
  struct malloc_segment* next;   /* ptr to next segment */
1227
  struct malloc_segment* next;   /* ptr to next segment */
1228
  flag_t       sflags;           /* mmap and extern flag */
1228
  flag_t       sflags;           /* mmap and extern flag */
1229
};
1229
};
1230
 
1230
 
1231
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1231
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1232
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1232
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1233
 
1233
 
1234
typedef struct malloc_segment  msegment;
1234
typedef struct malloc_segment  msegment;
1235
typedef struct malloc_segment* msegmentptr;
1235
typedef struct malloc_segment* msegmentptr;
1236
 
1236
 
1237
/* ---------------------------- malloc_state ----------------------------- */
1237
/* ---------------------------- malloc_state ----------------------------- */
1238
 
1238
 
1239
/*
1239
/*
1240
   A malloc_state holds all of the bookkeeping for a space.
1240
   A malloc_state holds all of the bookkeeping for a space.
1241
   The main fields are:
1241
   The main fields are:
1242
 
1242
 
1243
  Top
1243
  Top
1244
    The topmost chunk of the currently active segment. Its size is
1244
    The topmost chunk of the currently active segment. Its size is
1245
    cached in topsize.  The actual size of topmost space is
1245
    cached in topsize.  The actual size of topmost space is
1246
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1246
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1247
    fenceposts and segment records if necessary when getting more
1247
    fenceposts and segment records if necessary when getting more
1248
    space from the system.  The size at which to autotrim top is
1248
    space from the system.  The size at which to autotrim top is
1249
    cached from mparams in trim_check, except that it is disabled if
1249
    cached from mparams in trim_check, except that it is disabled if
1250
    an autotrim fails.
1250
    an autotrim fails.
1251
 
1251
 
1252
  Designated victim (dv)
1252
  Designated victim (dv)
1253
    This is the preferred chunk for servicing small requests that
1253
    This is the preferred chunk for servicing small requests that
1254
    don't have exact fits.  It is normally the chunk split off most
1254
    don't have exact fits.  It is normally the chunk split off most
1255
    recently to service another small request.  Its size is cached in
1255
    recently to service another small request.  Its size is cached in
1256
    dvsize. The link fields of this chunk are not maintained since it
1256
    dvsize. The link fields of this chunk are not maintained since it
1257
    is not kept in a bin.
1257
    is not kept in a bin.
1258
 
1258
 
1259
  SmallBins
1259
  SmallBins
1260
    An array of bin headers for free chunks.  These bins hold chunks
1260
    An array of bin headers for free chunks.  These bins hold chunks
1261
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1261
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1262
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1262
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1263
    use in double-linked lists, each bin header acts as a malloc_chunk
1263
    use in double-linked lists, each bin header acts as a malloc_chunk
1264
    pointing to the real first node, if it exists (else pointing to
1264
    pointing to the real first node, if it exists (else pointing to
1265
    itself).  This avoids special-casing for headers.  But to avoid
1265
    itself).  This avoids special-casing for headers.  But to avoid
1266
    waste, we allocate only the fd/bk pointers of bins, and then use
1266
    waste, we allocate only the fd/bk pointers of bins, and then use
1267
    repositioning tricks to treat these as the fields of a chunk.
1267
    repositioning tricks to treat these as the fields of a chunk.
1268
 
1268
 
1269
  TreeBins
1269
  TreeBins
1270
    Treebins are pointers to the roots of trees holding a range of
1270
    Treebins are pointers to the roots of trees holding a range of
1271
    sizes. There are 2 equally spaced treebins for each power of two
1271
    sizes. There are 2 equally spaced treebins for each power of two
1272
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1272
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1273
    larger.
1273
    larger.
1274
 
1274
 
1275
  Bin maps
1275
  Bin maps
1276
    There is one bit map for small bins ("smallmap") and one for
1276
    There is one bit map for small bins ("smallmap") and one for
1277
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1277
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1278
    clears the bit when empty.  Bit operations are then used to avoid
1278
    clears the bit when empty.  Bit operations are then used to avoid
1279
    bin-by-bin searching -- nearly all "search" is done without ever
1279
    bin-by-bin searching -- nearly all "search" is done without ever
1280
    looking at bins that won't be selected.  The bit maps
1280
    looking at bins that won't be selected.  The bit maps
1281
    conservatively use 32 bits per map word, even if on 64bit system.
1281
    conservatively use 32 bits per map word, even if on 64bit system.
1282
    For a good description of some of the bit-based techniques used
1282
    For a good description of some of the bit-based techniques used
1283
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1283
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1284
    supplement at http://hackersdelight.org/). Many of these are
1284
    supplement at http://hackersdelight.org/). Many of these are
1285
    intended to reduce the branchiness of paths through malloc etc, as
1285
    intended to reduce the branchiness of paths through malloc etc, as
1286
    well as to reduce the number of memory locations read or written.
1286
    well as to reduce the number of memory locations read or written.
1287
 
1287
 
1288
  Segments
1288
  Segments
1289
    A list of segments headed by an embedded malloc_segment record
1289
    A list of segments headed by an embedded malloc_segment record
1290
    representing the initial space.
1290
    representing the initial space.
1291
 
1291
 
1292
  Address check support
1292
  Address check support
1293
    The least_addr field is the least address ever obtained from
1293
    The least_addr field is the least address ever obtained from
1294
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1294
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1295
    than this are trapped (unless INSECURE is defined).
1295
    than this are trapped (unless INSECURE is defined).
1296
 
1296
 
1297
  Magic tag
1297
  Magic tag
1298
    A cross-check field that should always hold same value as mparams.magic.
1298
    A cross-check field that should always hold same value as mparams.magic.
1299
 
1299
 
1300
  Flags
1300
  Flags
1301
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1301
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1302
 
1302
 
1303
  Statistics
1303
  Statistics
1304
    Each space keeps track of current and maximum system memory
1304
    Each space keeps track of current and maximum system memory
1305
    obtained via MORECORE or MMAP.
1305
    obtained via MORECORE or MMAP.
1306
 
1306
 
1307
  Locking
1307
  Locking
1308
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1308
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1309
    around every public call using this mspace.
1309
    around every public call using this mspace.
1310
*/
1310
*/
1311
 
1311
 
1312
/* Bin types, widths and sizes */
1312
/* Bin types, widths and sizes */
1313
#define NSMALLBINS        (32U)
1313
#define NSMALLBINS        (32U)
1314
#define NTREEBINS         (32U)
1314
#define NTREEBINS         (32U)
1315
#define SMALLBIN_SHIFT    (3U)
1315
#define SMALLBIN_SHIFT    (3U)
1316
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1316
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1317
#define TREEBIN_SHIFT     (8U)
1317
#define TREEBIN_SHIFT     (8U)
1318
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1318
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1319
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1319
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1320
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1320
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1321
 
1321
 
1322
struct malloc_state {
1322
struct malloc_state {
1323
  binmap_t   smallmap;
1323
  binmap_t   smallmap;
1324
  binmap_t   treemap;
1324
  binmap_t   treemap;
1325
  size_t     dvsize;
1325
  size_t     dvsize;
1326
  size_t     topsize;
1326
  size_t     topsize;
1327
  char*      least_addr;
1327
  char*      least_addr;
1328
  mchunkptr  dv;
1328
  mchunkptr  dv;
1329
  mchunkptr  top;
1329
  mchunkptr  top;
1330
  size_t     trim_check;
1330
  size_t     trim_check;
1331
  size_t     magic;
1331
  size_t     magic;
1332
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1332
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1333
  tbinptr    treebins[NTREEBINS];
1333
  tbinptr    treebins[NTREEBINS];
1334
  size_t     footprint;
1334
  size_t     footprint;
1335
  size_t     max_footprint;
1335
  size_t     max_footprint;
1336
  flag_t     mflags;
1336
  flag_t     mflags;
1337
#if USE_LOCKS
1337
#if USE_LOCKS
1338
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1338
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1339
#endif /* USE_LOCKS */
1339
#endif /* USE_LOCKS */
1340
  msegment   seg;
1340
  msegment   seg;
1341
};
1341
};
1342
 
1342
 
1343
typedef struct malloc_state*    mstate;
1343
typedef struct malloc_state*    mstate;
1344
 
1344
 
1345
/* ------------- Global malloc_state and malloc_params ------------------- */
1345
/* ------------- Global malloc_state and malloc_params ------------------- */
1346
 
1346
 
1347
/*
1347
/*
1348
  malloc_params holds global properties, including those that can be
1348
  malloc_params holds global properties, including those that can be
1349
  dynamically set using mallopt. There is a single instance, mparams,
1349
  dynamically set using mallopt. There is a single instance, mparams,
1350
  initialized in init_mparams.
1350
  initialized in init_mparams.
1351
*/
1351
*/
1352
 
1352
 
1353
struct malloc_params {
1353
struct malloc_params {
1354
  size_t magic;
1354
  size_t magic;
1355
  size_t page_size;
1355
  size_t page_size;
1356
  size_t granularity;
1356
  size_t granularity;
1357
  size_t mmap_threshold;
1357
  size_t mmap_threshold;
1358
  size_t trim_threshold;
1358
  size_t trim_threshold;
1359
  flag_t default_mflags;
1359
  flag_t default_mflags;
1360
};
1360
};
1361
 
1361
 
1362
static struct malloc_params mparams;
1362
static struct malloc_params mparams;
1363
 
1363
 
1364
/* The global malloc_state used for all non-"mspace" calls */
1364
/* The global malloc_state used for all non-"mspace" calls */
1365
static struct malloc_state _gm_;
1365
static struct malloc_state _gm_;
1366
#define gm                 (&_gm_)
1366
#define gm                 (&_gm_)
1367
#define is_global(M)       ((M) == &_gm_)
1367
#define is_global(M)       ((M) == &_gm_)
1368
#define is_initialized(M)  ((M)->top != 0)
1368
#define is_initialized(M)  ((M)->top != 0)
1369
 
1369
 
1370
/* -------------------------- system alloc setup ------------------------- */
1370
/* -------------------------- system alloc setup ------------------------- */
1371
 
1371
 
1372
/* Operations on mflags */
1372
/* Operations on mflags */
1373
 
1373
 
1374
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1374
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1375
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1375
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1376
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1376
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1377
 
1377
 
1378
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1378
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1379
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1379
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1380
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1380
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1381
 
1381
 
1382
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1382
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1383
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1383
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1384
 
1384
 
1385
#define set_lock(M,L)\
1385
#define set_lock(M,L)\
1386
 ((M)->mflags = (L)?\
1386
 ((M)->mflags = (L)?\
1387
  ((M)->mflags | USE_LOCK_BIT) :\
1387
  ((M)->mflags | USE_LOCK_BIT) :\
1388
  ((M)->mflags & ~USE_LOCK_BIT))
1388
  ((M)->mflags & ~USE_LOCK_BIT))
1389
 
1389
 
1390
/* page-align a size */
1390
/* page-align a size */
1391
#define page_align(S)\
1391
#define page_align(S)\
1392
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1392
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1393
 
1393
 
1394
/* granularity-align a size */
1394
/* granularity-align a size */
1395
#define granularity_align(S)\
1395
#define granularity_align(S)\
1396
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1396
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1397
 
1397
 
1398
#define is_page_aligned(S)\
1398
#define is_page_aligned(S)\
1399
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1399
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1400
#define is_granularity_aligned(S)\
1400
#define is_granularity_aligned(S)\
1401
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
1401
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
1402
 
1402
 
1403
/*  True if segment S holds address A */
1403
/*  True if segment S holds address A */
1404
#define segment_holds(S, A)\
1404
#define segment_holds(S, A)\
1405
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1405
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1406
 
1406
 
1407
/* Return segment holding given address */
1407
/* Return segment holding given address */
1408
static msegmentptr segment_holding(mstate m, char* addr) {
1408
static msegmentptr segment_holding(mstate m, char* addr) {
1409
  msegmentptr sp = &m->seg;
1409
  msegmentptr sp = &m->seg;
1410
  for (;;) {
1410
  for (;;) {
1411
    if (addr >= sp->base && addr < sp->base + sp->size)
1411
    if (addr >= sp->base && addr < sp->base + sp->size)
1412
      return sp;
1412
      return sp;
1413
    if ((sp = sp->next) == 0)
1413
    if ((sp = sp->next) == 0)
1414
      return 0;
1414
      return 0;
1415
  }
1415
  }
1416
}
1416
}
1417
 
1417
 
1418
/* Return true if segment contains a segment link */
1418
/* Return true if segment contains a segment link */
1419
static int has_segment_link(mstate m, msegmentptr ss) {
1419
static int has_segment_link(mstate m, msegmentptr ss) {
1420
  msegmentptr sp = &m->seg;
1420
  msegmentptr sp = &m->seg;
1421
  for (;;) {
1421
  for (;;) {
1422
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1422
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1423
      return 1;
1423
      return 1;
1424
    if ((sp = sp->next) == 0)
1424
    if ((sp = sp->next) == 0)
1425
      return 0;
1425
      return 0;
1426
  }
1426
  }
1427
}
1427
}
1428
 
1428
 
1429
#ifndef MORECORE_CANNOT_TRIM
1429
#ifndef MORECORE_CANNOT_TRIM
1430
#define should_trim(M,s)  ((s) > (M)->trim_check)
1430
#define should_trim(M,s)  ((s) > (M)->trim_check)
1431
#else  /* MORECORE_CANNOT_TRIM */
1431
#else  /* MORECORE_CANNOT_TRIM */
1432
#define should_trim(M,s)  (0)
1432
#define should_trim(M,s)  (0)
1433
#endif /* MORECORE_CANNOT_TRIM */
1433
#endif /* MORECORE_CANNOT_TRIM */
1434
 
1434
 
1435
/*
1435
/*
1436
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1436
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1437
  that may be needed to place segment records and fenceposts when new
1437
  that may be needed to place segment records and fenceposts when new
1438
  noncontiguous segments are added.
1438
  noncontiguous segments are added.
1439
*/
1439
*/
1440
#define TOP_FOOT_SIZE\
1440
#define TOP_FOOT_SIZE\
1441
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1441
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1442
 
1442
 
1443
 
1443
 
1444
/* -------------------------------  Hooks -------------------------------- */
1444
/* -------------------------------  Hooks -------------------------------- */
1445
 
1445
 
1446
/*
1446
/*
1447
  PREACTION should be defined to return 0 on success, and nonzero on
1447
  PREACTION should be defined to return 0 on success, and nonzero on
1448
  failure. If you are not using locking, you can redefine these to do
1448
  failure. If you are not using locking, you can redefine these to do
1449
  anything you like.
1449
  anything you like.
1450
*/
1450
*/
1451
 
1451
 
1452
#if USE_LOCKS
1452
#if USE_LOCKS
1453
 
1453
 
1454
/* Ensure locks are initialized */
1454
/* Ensure locks are initialized */
1455
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1455
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1456
 
1456
 
1457
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1457
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1458
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1458
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1459
#else /* USE_LOCKS */
1459
#else /* USE_LOCKS */
1460
 
1460
 
1461
#ifndef PREACTION
1461
#ifndef PREACTION
1462
#define PREACTION(M) (0)
1462
#define PREACTION(M) (0)
1463
#endif  /* PREACTION */
1463
#endif  /* PREACTION */
1464
 
1464
 
1465
#ifndef POSTACTION
1465
#ifndef POSTACTION
1466
#define POSTACTION(M)
1466
#define POSTACTION(M)
1467
#endif  /* POSTACTION */
1467
#endif  /* POSTACTION */
1468
 
1468
 
1469
#endif /* USE_LOCKS */
1469
#endif /* USE_LOCKS */
1470
 
1470
 
1471
/*
1471
/*
1472
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1472
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1473
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1473
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1474
  reallocs. The argument p is an address that might have triggered the
1474
  reallocs. The argument p is an address that might have triggered the
1475
  fault. It is ignored by the two predefined actions, but might be
1475
  fault. It is ignored by the two predefined actions, but might be
1476
  useful in custom actions that try to help diagnose errors.
1476
  useful in custom actions that try to help diagnose errors.
1477
*/
1477
*/
1478
 
1478
 
1479
#if PROCEED_ON_ERROR
1479
#if PROCEED_ON_ERROR
1480
 
1480
 
1481
/* A count of the number of corruption errors causing resets */
1481
/* A count of the number of corruption errors causing resets */
1482
int malloc_corruption_error_count;
1482
int malloc_corruption_error_count;
1483
 
1483
 
1484
/* default corruption action */
1484
/* default corruption action */
1485
static void reset_on_error(mstate m);
1485
static void reset_on_error(mstate m);
1486
 
1486
 
1487
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1487
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1488
#define USAGE_ERROR_ACTION(m, p)
1488
#define USAGE_ERROR_ACTION(m, p)
1489
 
1489
 
1490
#else /* PROCEED_ON_ERROR */
1490
#else /* PROCEED_ON_ERROR */
1491
 
1491
 
1492
#ifndef CORRUPTION_ERROR_ACTION
1492
#ifndef CORRUPTION_ERROR_ACTION
1493
#define CORRUPTION_ERROR_ACTION(m) ABORT
1493
#define CORRUPTION_ERROR_ACTION(m) ABORT
1494
#endif /* CORRUPTION_ERROR_ACTION */
1494
#endif /* CORRUPTION_ERROR_ACTION */
1495
 
1495
 
1496
#ifndef USAGE_ERROR_ACTION
1496
#ifndef USAGE_ERROR_ACTION
1497
#define USAGE_ERROR_ACTION(m,p) ABORT
1497
#define USAGE_ERROR_ACTION(m,p) ABORT
1498
#endif /* USAGE_ERROR_ACTION */
1498
#endif /* USAGE_ERROR_ACTION */
1499
 
1499
 
1500
#endif /* PROCEED_ON_ERROR */
1500
#endif /* PROCEED_ON_ERROR */
1501
 
1501
 
1502
/* -------------------------- Debugging setup ---------------------------- */
1502
/* -------------------------- Debugging setup ---------------------------- */
1503
 
1503
 
1504
#if ! DEBUG
1504
#if ! DEBUG
1505
 
1505
 
1506
#define check_free_chunk(M,P)
1506
#define check_free_chunk(M,P)
1507
#define check_inuse_chunk(M,P)
1507
#define check_inuse_chunk(M,P)
1508
#define check_malloced_chunk(M,P,N)
1508
#define check_malloced_chunk(M,P,N)
1509
#define check_mmapped_chunk(M,P)
1509
#define check_mmapped_chunk(M,P)
1510
#define check_malloc_state(M)
1510
#define check_malloc_state(M)
1511
#define check_top_chunk(M,P)
1511
#define check_top_chunk(M,P)
1512
 
1512
 
1513
#else /* DEBUG */
1513
#else /* DEBUG */
1514
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1514
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1515
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1515
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1516
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1516
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1517
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1517
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1518
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1518
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1519
#define check_malloc_state(M)       do_check_malloc_state(M)
1519
#define check_malloc_state(M)       do_check_malloc_state(M)
1520
 
1520
 
1521
static void   do_check_any_chunk(mstate m, mchunkptr p);
1521
static void   do_check_any_chunk(mstate m, mchunkptr p);
1522
static void   do_check_top_chunk(mstate m, mchunkptr p);
1522
static void   do_check_top_chunk(mstate m, mchunkptr p);
1523
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1523
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1524
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1524
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1525
static void   do_check_free_chunk(mstate m, mchunkptr p);
1525
static void   do_check_free_chunk(mstate m, mchunkptr p);
1526
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1526
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1527
static void   do_check_tree(mstate m, tchunkptr t);
1527
static void   do_check_tree(mstate m, tchunkptr t);
1528
static void   do_check_treebin(mstate m, bindex_t i);
1528
static void   do_check_treebin(mstate m, bindex_t i);
1529
static void   do_check_smallbin(mstate m, bindex_t i);
1529
static void   do_check_smallbin(mstate m, bindex_t i);
1530
static void   do_check_malloc_state(mstate m);
1530
static void   do_check_malloc_state(mstate m);
1531
static int    bin_find(mstate m, mchunkptr x);
1531
static int    bin_find(mstate m, mchunkptr x);
1532
static size_t traverse_and_check(mstate m);
1532
static size_t traverse_and_check(mstate m);
1533
#endif /* DEBUG */
1533
#endif /* DEBUG */
1534
 
1534
 
1535
/* ---------------------------- Indexing Bins ---------------------------- */
1535
/* ---------------------------- Indexing Bins ---------------------------- */
1536
 
1536
 
1537
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1537
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1538
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1538
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1539
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1539
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1540
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1540
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1541
 
1541
 
1542
/* addressing by index. See above about smallbin repositioning */
1542
/* addressing by index. See above about smallbin repositioning */
1543
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1543
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1544
#define treebin_at(M,i)     (&((M)->treebins[i]))
1544
#define treebin_at(M,i)     (&((M)->treebins[i]))
1545
 
1545
 
1546
/* assign tree index for size S to variable I */
1546
/* assign tree index for size S to variable I */
1547
#if defined(__GNUC__) && defined(i386)
1547
#if defined(__GNUC__) && defined(i386)
1548
#define compute_tree_index(S, I)\
1548
#define compute_tree_index(S, I)\
1549
{\
1549
{\
1550
  size_t X = S >> TREEBIN_SHIFT;\
1550
  size_t X = S >> TREEBIN_SHIFT;\
1551
  if (X == 0)\
1551
  if (X == 0)\
1552
    I = 0;\
1552
    I = 0;\
1553
  else if (X > 0xFFFF)\
1553
  else if (X > 0xFFFF)\
1554
    I = NTREEBINS-1;\
1554
    I = NTREEBINS-1;\
1555
  else {\
1555
  else {\
1556
    unsigned int K;\
1556
    unsigned int K;\
1557
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1557
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1558
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1558
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1559
  }\
1559
  }\
1560
}
1560
}
1561
#else /* GNUC */
1561
#else /* GNUC */
1562
#define compute_tree_index(S, I)\
1562
#define compute_tree_index(S, I)\
1563
{\
1563
{\
1564
  size_t X = S >> TREEBIN_SHIFT;\
1564
  size_t X = S >> TREEBIN_SHIFT;\
1565
  if (X == 0)\
1565
  if (X == 0)\
1566
    I = 0;\
1566
    I = 0;\
1567
  else if (X > 0xFFFF)\
1567
  else if (X > 0xFFFF)\
1568
    I = NTREEBINS-1;\
1568
    I = NTREEBINS-1;\
1569
  else {\
1569
  else {\
1570
    unsigned int Y = (unsigned int)X;\
1570
    unsigned int Y = (unsigned int)X;\
1571
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1571
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1572
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1572
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1573
    N += K;\
1573
    N += K;\
1574
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1574
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1575
    K = 14 - N + ((Y <<= K) >> 15);\
1575
    K = 14 - N + ((Y <<= K) >> 15);\
1576
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1576
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1577
  }\
1577
  }\
1578
}
1578
}
1579
#endif /* GNUC */
1579
#endif /* GNUC */
1580
 
1580
 
1581
/* Bit representing maximum resolved size in a treebin at i */
1581
/* Bit representing maximum resolved size in a treebin at i */
1582
#define bit_for_tree_index(i) \
1582
#define bit_for_tree_index(i) \
1583
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1583
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1584
 
1584
 
1585
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1585
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1586
#define leftshift_for_tree_index(i) \
1586
#define leftshift_for_tree_index(i) \
1587
   ((i == NTREEBINS-1)? 0 : \
1587
   ((i == NTREEBINS-1)? 0 : \
1588
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1588
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1589
 
1589
 
1590
/* The size of the smallest chunk held in bin with index i */
1590
/* The size of the smallest chunk held in bin with index i */
1591
#define minsize_for_tree_index(i) \
1591
#define minsize_for_tree_index(i) \
1592
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1592
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1593
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1593
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1594
 
1594
 
1595
 
1595
 
1596
/* ------------------------ Operations on bin maps ----------------------- */
1596
/* ------------------------ Operations on bin maps ----------------------- */
1597
 
1597
 
1598
/* bit corresponding to given index */
1598
/* bit corresponding to given index */
1599
#define idx2bit(i)              ((binmap_t)(1) << (i))
1599
#define idx2bit(i)              ((binmap_t)(1) << (i))
1600
 
1600
 
1601
/* Mark/Clear bits with given index */
1601
/* Mark/Clear bits with given index */
1602
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1602
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1603
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1603
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1604
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1604
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1605
 
1605
 
1606
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1606
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1607
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1607
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1608
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1608
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1609
 
1609
 
1610
/* index corresponding to given bit */
1610
/* index corresponding to given bit */
1611
 
1611
 
1612
#if defined(__GNUC__) && defined(i386)
1612
#if defined(__GNUC__) && defined(i386)
1613
#define compute_bit2idx(X, I)\
1613
#define compute_bit2idx(X, I)\
1614
{\
1614
{\
1615
  unsigned int J;\
1615
  unsigned int J;\
1616
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1616
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1617
  I = (bindex_t)J;\
1617
  I = (bindex_t)J;\
1618
}
1618
}
1619
 
1619
 
1620
#else /* GNUC */
1620
#else /* GNUC */
1621
#if  USE_BUILTIN_FFS
1621
#if  USE_BUILTIN_FFS
1622
#define compute_bit2idx(X, I) I = ffs(X)-1
1622
#define compute_bit2idx(X, I) I = ffs(X)-1
1623
 
1623
 
1624
#else /* USE_BUILTIN_FFS */
1624
#else /* USE_BUILTIN_FFS */
1625
#define compute_bit2idx(X, I)\
1625
#define compute_bit2idx(X, I)\
1626
{\
1626
{\
1627
  unsigned int Y = X - 1;\
1627
  unsigned int Y = X - 1;\
1628
  unsigned int K = Y >> (16-4) & 16;\
1628
  unsigned int K = Y >> (16-4) & 16;\
1629
  unsigned int N = K;        Y >>= K;\
1629
  unsigned int N = K;        Y >>= K;\
1630
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1630
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1631
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1631
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1632
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1632
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1633
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1633
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1634
  I = (bindex_t)(N + Y);\
1634
  I = (bindex_t)(N + Y);\
1635
}
1635
}
1636
#endif /* USE_BUILTIN_FFS */
1636
#endif /* USE_BUILTIN_FFS */
1637
#endif /* GNUC */
1637
#endif /* GNUC */
1638
 
1638
 
1639
/* isolate the least set bit of a bitmap */
1639
/* isolate the least set bit of a bitmap */
1640
#define least_bit(x)         ((x) & -(x))
1640
#define least_bit(x)         ((x) & -(x))
1641
 
1641
 
1642
/* mask with all bits to left of least bit of x on */
1642
/* mask with all bits to left of least bit of x on */
1643
#define left_bits(x)         ((x<<1) | -(x<<1))
1643
#define left_bits(x)         ((x<<1) | -(x<<1))
1644
 
1644
 
1645
/* mask with all bits to left of or equal to least bit of x on */
1645
/* mask with all bits to left of or equal to least bit of x on */
1646
#define same_or_left_bits(x) ((x) | -(x))
1646
#define same_or_left_bits(x) ((x) | -(x))
1647
 
1647
 
1648
 
1648
 
1649
/* ----------------------- Runtime Check Support ------------------------- */
1649
/* ----------------------- Runtime Check Support ------------------------- */
1650
 
1650
 
1651
/*
1651
/*
1652
  For security, the main invariant is that malloc/free/etc never
1652
  For security, the main invariant is that malloc/free/etc never
1653
  writes to a static address other than malloc_state, unless static
1653
  writes to a static address other than malloc_state, unless static
1654
  malloc_state itself has been corrupted, which cannot occur via
1654
  malloc_state itself has been corrupted, which cannot occur via
1655
  malloc (because of these checks). In essence this means that we
1655
  malloc (because of these checks). In essence this means that we
1656
  believe all pointers, sizes, maps etc held in malloc_state, but
1656
  believe all pointers, sizes, maps etc held in malloc_state, but
1657
  check all of those linked or offsetted from other embedded data
1657
  check all of those linked or offsetted from other embedded data
1658
  structures.  These checks are interspersed with main code in a way
1658
  structures.  These checks are interspersed with main code in a way
1659
  that tends to minimize their run-time cost.
1659
  that tends to minimize their run-time cost.
1660
 
1660
 
1661
  When FOOTERS is defined, in addition to range checking, we also
1661
  When FOOTERS is defined, in addition to range checking, we also
1662
  verify footer fields of inuse chunks, which can be used guarantee
1662
  verify footer fields of inuse chunks, which can be used guarantee
1663
  that the mstate controlling malloc/free is intact.  This is a
1663
  that the mstate controlling malloc/free is intact.  This is a
1664
  streamlined version of the approach described by William Robertson
1664
  streamlined version of the approach described by William Robertson
1665
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1665
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1666
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1666
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1667
  of an inuse chunk holds the xor of its mstate and a random seed,
1667
  of an inuse chunk holds the xor of its mstate and a random seed,
1668
  that is checked upon calls to free() and realloc().  This is
1668
  that is checked upon calls to free() and realloc().  This is
1669
  (probablistically) unguessable from outside the program, but can be
1669
  (probablistically) unguessable from outside the program, but can be
1670
  computed by any code successfully malloc'ing any chunk, so does not
1670
  computed by any code successfully malloc'ing any chunk, so does not
1671
  itself provide protection against code that has already broken
1671
  itself provide protection against code that has already broken
1672
  security through some other means.  Unlike Robertson et al, we
1672
  security through some other means.  Unlike Robertson et al, we
1673
  always dynamically check addresses of all offset chunks (previous,
1673
  always dynamically check addresses of all offset chunks (previous,
1674
  next, etc). This turns out to be cheaper than relying on hashes.
1674
  next, etc). This turns out to be cheaper than relying on hashes.
1675
*/
1675
*/
1676
 
1676
 
1677
#if !INSECURE
1677
#if !INSECURE
1678
/* Check if address a is at least as high as any from MORECORE or MMAP */
1678
/* Check if address a is at least as high as any from MORECORE or MMAP */
1679
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1679
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1680
/* Check if address of next chunk n is higher than base chunk p */
1680
/* Check if address of next chunk n is higher than base chunk p */
1681
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1681
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1682
/* Check if p has its cinuse bit on */
1682
/* Check if p has its cinuse bit on */
1683
#define ok_cinuse(p)     cinuse(p)
1683
#define ok_cinuse(p)     cinuse(p)
1684
/* Check if p has its pinuse bit on */
1684
/* Check if p has its pinuse bit on */
1685
#define ok_pinuse(p)     pinuse(p)
1685
#define ok_pinuse(p)     pinuse(p)
1686
 
1686
 
1687
#else /* !INSECURE */
1687
#else /* !INSECURE */
1688
#define ok_address(M, a) (1)
1688
#define ok_address(M, a) (1)
1689
#define ok_next(b, n)    (1)
1689
#define ok_next(b, n)    (1)
1690
#define ok_cinuse(p)     (1)
1690
#define ok_cinuse(p)     (1)
1691
#define ok_pinuse(p)     (1)
1691
#define ok_pinuse(p)     (1)
1692
#endif /* !INSECURE */
1692
#endif /* !INSECURE */
1693
 
1693
 
1694
#if (FOOTERS && !INSECURE)
1694
#if (FOOTERS && !INSECURE)
1695
/* Check if (alleged) mstate m has expected magic field */
1695
/* Check if (alleged) mstate m has expected magic field */
1696
#define ok_magic(M)      ((M)->magic == mparams.magic)
1696
#define ok_magic(M)      ((M)->magic == mparams.magic)
1697
#else  /* (FOOTERS && !INSECURE) */
1697
#else  /* (FOOTERS && !INSECURE) */
1698
#define ok_magic(M)      (1)
1698
#define ok_magic(M)      (1)
1699
#endif /* (FOOTERS && !INSECURE) */
1699
#endif /* (FOOTERS && !INSECURE) */
1700
 
1700
 
1701
 
1701
 
1702
/* In gcc, use __builtin_expect to minimize impact of checks */
1702
/* In gcc, use __builtin_expect to minimize impact of checks */
1703
#if !INSECURE
1703
#if !INSECURE
1704
#if defined(__GNUC__) && __GNUC__ >= 3
1704
#if defined(__GNUC__) && __GNUC__ >= 3
1705
#define RTCHECK(e)  __builtin_expect(e, 1)
1705
#define RTCHECK(e)  __builtin_expect(e, 1)
1706
#else /* GNUC */
1706
#else /* GNUC */
1707
#define RTCHECK(e)  (e)
1707
#define RTCHECK(e)  (e)
1708
#endif /* GNUC */
1708
#endif /* GNUC */
1709
#else /* !INSECURE */
1709
#else /* !INSECURE */
1710
#define RTCHECK(e)  (1)
1710
#define RTCHECK(e)  (1)
1711
#endif /* !INSECURE */
1711
#endif /* !INSECURE */
1712
 
1712
 
1713
/* macros to set up inuse chunks with or without footers */
1713
/* macros to set up inuse chunks with or without footers */
1714
 
1714
 
1715
#if !FOOTERS
1715
#if !FOOTERS
1716
 
1716
 
1717
#define mark_inuse_foot(M,p,s)
1717
#define mark_inuse_foot(M,p,s)
1718
 
1718
 
1719
/* Set cinuse bit and pinuse bit of next chunk */
1719
/* Set cinuse bit and pinuse bit of next chunk */
1720
#define set_inuse(M,p,s)\
1720
#define set_inuse(M,p,s)\
1721
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1721
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1722
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1722
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1723
 
1723
 
1724
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
1724
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
1725
#define set_inuse_and_pinuse(M,p,s)\
1725
#define set_inuse_and_pinuse(M,p,s)\
1726
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1726
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1727
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1727
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1728
 
1728
 
1729
/* Set size, cinuse and pinuse bit of this chunk */
1729
/* Set size, cinuse and pinuse bit of this chunk */
1730
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1730
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1731
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
1731
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
1732
 
1732
 
1733
#else /* FOOTERS */
1733
#else /* FOOTERS */
1734
 
1734
 
1735
/* Set foot of inuse chunk to be xor of mstate and seed */
1735
/* Set foot of inuse chunk to be xor of mstate and seed */
1736
#define mark_inuse_foot(M,p,s)\
1736
#define mark_inuse_foot(M,p,s)\
1737
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
1737
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
1738
 
1738
 
1739
#define get_mstate_for(p)\
1739
#define get_mstate_for(p)\
1740
  ((mstate)(((mchunkptr)((char*)(p) +\
1740
  ((mstate)(((mchunkptr)((char*)(p) +\
1741
    (chunksize(p))))->prev_foot ^ mparams.magic))
1741
    (chunksize(p))))->prev_foot ^ mparams.magic))
1742
 
1742
 
1743
#define set_inuse(M,p,s)\
1743
#define set_inuse(M,p,s)\
1744
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1744
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1745
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
1745
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
1746
  mark_inuse_foot(M,p,s))
1746
  mark_inuse_foot(M,p,s))
1747
 
1747
 
1748
#define set_inuse_and_pinuse(M,p,s)\
1748
#define set_inuse_and_pinuse(M,p,s)\
1749
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1749
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1750
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
1750
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
1751
 mark_inuse_foot(M,p,s))
1751
 mark_inuse_foot(M,p,s))
1752
 
1752
 
1753
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1753
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1754
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1754
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1755
  mark_inuse_foot(M, p, s))
1755
  mark_inuse_foot(M, p, s))
1756
 
1756
 
1757
#endif /* !FOOTERS */
1757
#endif /* !FOOTERS */
1758
 
1758
 
1759
/* ---------------------------- setting mparams -------------------------- */
1759
/* ---------------------------- setting mparams -------------------------- */
1760
 
1760
 
1761
/* Initialize mparams */
1761
/* Initialize mparams */
1762
static int init_mparams(void) {
1762
static int init_mparams(void) {
1763
  if (mparams.page_size == 0) {
1763
  if (mparams.page_size == 0) {
1764
    size_t s;
1764
    size_t s;
1765
 
1765
 
1766
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1766
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1767
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1767
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1768
#if MORECORE_CONTIGUOUS
1768
#if MORECORE_CONTIGUOUS
1769
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1769
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1770
#else  /* MORECORE_CONTIGUOUS */
1770
#else  /* MORECORE_CONTIGUOUS */
1771
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1771
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1772
#endif /* MORECORE_CONTIGUOUS */
1772
#endif /* MORECORE_CONTIGUOUS */
1773
 
1773
 
1774
#if (FOOTERS && !INSECURE)
1774
#if (FOOTERS && !INSECURE)
1775
    {
1775
    {
1776
#if USE_DEV_RANDOM
1776
#if USE_DEV_RANDOM
1777
      int fd;
1777
      int fd;
1778
      unsigned char buf[sizeof(size_t)];
1778
      unsigned char buf[sizeof(size_t)];
1779
      /* Try to use /dev/urandom, else fall back on using time */
1779
      /* Try to use /dev/urandom, else fall back on using time */
1780
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1780
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1781
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1781
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1782
        s = *((size_t *) buf);
1782
        s = *((size_t *) buf);
1783
        close(fd);
1783
        close(fd);
1784
      }
1784
      }
1785
      else
1785
      else
1786
#endif /* USE_DEV_RANDOM */
1786
#endif /* USE_DEV_RANDOM */
1787
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1787
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1788
 
1788
 
1789
      s |= (size_t)8U;    /* ensure nonzero */
1789
      s |= (size_t)8U;    /* ensure nonzero */
1790
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1790
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1791
 
1791
 
1792
    }
1792
    }
1793
#else /* (FOOTERS && !INSECURE) */
1793
#else /* (FOOTERS && !INSECURE) */
1794
    s = (size_t)0x58585858U;
1794
    s = (size_t)0x58585858U;
1795
#endif /* (FOOTERS && !INSECURE) */
1795
#endif /* (FOOTERS && !INSECURE) */
1796
    ACQUIRE_MAGIC_INIT_LOCK();
1796
    ACQUIRE_MAGIC_INIT_LOCK();
1797
    if (mparams.magic == 0) {
1797
    if (mparams.magic == 0) {
1798
      mparams.magic = s;
1798
      mparams.magic = s;
1799
      /* Set up lock for main malloc area */
1799
      /* Set up lock for main malloc area */
1800
      INITIAL_LOCK(&gm->mutex);
1800
      INITIAL_LOCK(&gm->mutex);
1801
      gm->mflags = mparams.default_mflags;
1801
      gm->mflags = mparams.default_mflags;
1802
    }
1802
    }
1803
    RELEASE_MAGIC_INIT_LOCK();
1803
    RELEASE_MAGIC_INIT_LOCK();
1804
 
1804
 
1805
#ifndef WIN32
1805
#ifndef WIN32
1806
    mparams.page_size = malloc_getpagesize;
1806
    mparams.page_size = malloc_getpagesize;
1807
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1807
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1808
                           DEFAULT_GRANULARITY : mparams.page_size);
1808
                           DEFAULT_GRANULARITY : mparams.page_size);
1809
#else /* WIN32 */
1809
#else /* WIN32 */
1810
    {
1810
    {
1811
      SYSTEM_INFO system_info;
1811
      SYSTEM_INFO system_info;
1812
      GetSystemInfo(&system_info);
1812
      GetSystemInfo(&system_info);
1813
      mparams.page_size = system_info.dwPageSize;
1813
      mparams.page_size = system_info.dwPageSize;
1814
      mparams.granularity = system_info.dwAllocationGranularity;
1814
      mparams.granularity = system_info.dwAllocationGranularity;
1815
    }
1815
    }
1816
#endif /* WIN32 */
1816
#endif /* WIN32 */
1817
 
1817
 
1818
    /* Sanity-check configuration:
1818
    /* Sanity-check configuration:
1819
       size_t must be unsigned and as wide as pointer type.
1819
       size_t must be unsigned and as wide as pointer type.
1820
       ints must be at least 4 bytes.
1820
       ints must be at least 4 bytes.
1821
       alignment must be at least 8.
1821
       alignment must be at least 8.
1822
       Alignment, min chunk size, and page size must all be powers of 2.
1822
       Alignment, min chunk size, and page size must all be powers of 2.
1823
    */
1823
    */
1824
    if ((sizeof(size_t) != sizeof(char*)) ||
1824
    if ((sizeof(size_t) != sizeof(char*)) ||
1825
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1825
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1826
        (sizeof(int) < 4)  ||
1826
        (sizeof(int) < 4)  ||
1827
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1827
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1828
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1828
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1829
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1829
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1830
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1830
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1831
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1831
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1832
      ABORT;
1832
      ABORT;
1833
  }
1833
  }
1834
  return 0;
1834
  return 0;
1835
}
1835
}
1836
 
1836
 
1837
/* support for mallopt */
1837
/* support for mallopt */
1838
static int change_mparam(int param_number, int value) {
1838
static int change_mparam(int param_number, int value) {
1839
  size_t val = (size_t)value;
1839
  size_t val = (size_t)value;
1840
  init_mparams();
1840
  init_mparams();
1841
  switch(param_number) {
1841
  switch(param_number) {
1842
  case M_TRIM_THRESHOLD:
1842
  case M_TRIM_THRESHOLD:
1843
    mparams.trim_threshold = val;
1843
    mparams.trim_threshold = val;
1844
    return 1;
1844
    return 1;
1845
  case M_GRANULARITY:
1845
  case M_GRANULARITY:
1846
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
1846
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
1847
      mparams.granularity = val;
1847
      mparams.granularity = val;
1848
      return 1;
1848
      return 1;
1849
    }
1849
    }
1850
    else
1850
    else
1851
      return 0;
1851
      return 0;
1852
  case M_MMAP_THRESHOLD:
1852
  case M_MMAP_THRESHOLD:
1853
    mparams.mmap_threshold = val;
1853
    mparams.mmap_threshold = val;
1854
    return 1;
1854
    return 1;
1855
  default:
1855
  default:
1856
    return 0;
1856
    return 0;
1857
  }
1857
  }
1858
}
1858
}
1859
 
1859
 
1860
#if DEBUG
1860
#if DEBUG
1861
/* ------------------------- Debugging Support --------------------------- */
1861
/* ------------------------- Debugging Support --------------------------- */
1862
 
1862
 
1863
/* Check properties of any chunk, whether free, inuse, mmapped etc  */
1863
/* Check properties of any chunk, whether free, inuse, mmapped etc  */
1864
static void do_check_any_chunk(mstate m, mchunkptr p) {
1864
static void do_check_any_chunk(mstate m, mchunkptr p) {
1865
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1865
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1866
  assert(ok_address(m, p));
1866
  assert(ok_address(m, p));
1867
}
1867
}
1868
 
1868
 
1869
/* Check properties of top chunk */
1869
/* Check properties of top chunk */
1870
static void do_check_top_chunk(mstate m, mchunkptr p) {
1870
static void do_check_top_chunk(mstate m, mchunkptr p) {
1871
  msegmentptr sp = segment_holding(m, (char*)p);
1871
  msegmentptr sp = segment_holding(m, (char*)p);
1872
  size_t  sz = chunksize(p);
1872
  size_t  sz = chunksize(p);
1873
  assert(sp != 0);
1873
  assert(sp != 0);
1874
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1874
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1875
  assert(ok_address(m, p));
1875
  assert(ok_address(m, p));
1876
  assert(sz == m->topsize);
1876
  assert(sz == m->topsize);
1877
  assert(sz > 0);
1877
  assert(sz > 0);
1878
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
1878
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
1879
  assert(pinuse(p));
1879
  assert(pinuse(p));
1880
  assert(!next_pinuse(p));
1880
  assert(!next_pinuse(p));
1881
}
1881
}
1882
 
1882
 
1883
/* Check properties of (inuse) mmapped chunks */
1883
/* Check properties of (inuse) mmapped chunks */
1884
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
1884
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
1885
  size_t  sz = chunksize(p);
1885
  size_t  sz = chunksize(p);
1886
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
1886
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
1887
  assert(is_mmapped(p));
1887
  assert(is_mmapped(p));
1888
  assert(use_mmap(m));
1888
  assert(use_mmap(m));
1889
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1889
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1890
  assert(ok_address(m, p));
1890
  assert(ok_address(m, p));
1891
  assert(!is_small(sz));
1891
  assert(!is_small(sz));
1892
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
1892
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
1893
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
1893
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
1894
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
1894
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
1895
}
1895
}
1896
 
1896
 
1897
/* Check properties of inuse chunks */
1897
/* Check properties of inuse chunks */
1898
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
1898
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
1899
  do_check_any_chunk(m, p);
1899
  do_check_any_chunk(m, p);
1900
  assert(cinuse(p));
1900
  assert(cinuse(p));
1901
  assert(next_pinuse(p));
1901
  assert(next_pinuse(p));
1902
  /* If not pinuse and not mmapped, previous chunk has OK offset */
1902
  /* If not pinuse and not mmapped, previous chunk has OK offset */
1903
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
1903
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
1904
  if (is_mmapped(p))
1904
  if (is_mmapped(p))
1905
    do_check_mmapped_chunk(m, p);
1905
    do_check_mmapped_chunk(m, p);
1906
}
1906
}
1907
 
1907
 
1908
/* Check properties of free chunks */
1908
/* Check properties of free chunks */
1909
static void do_check_free_chunk(mstate m, mchunkptr p) {
1909
static void do_check_free_chunk(mstate m, mchunkptr p) {
1910
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1910
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1911
  mchunkptr next = chunk_plus_offset(p, sz);
1911
  mchunkptr next = chunk_plus_offset(p, sz);
1912
  do_check_any_chunk(m, p);
1912
  do_check_any_chunk(m, p);
1913
  assert(!cinuse(p));
1913
  assert(!cinuse(p));
1914
  assert(!next_pinuse(p));
1914
  assert(!next_pinuse(p));
1915
  assert (!is_mmapped(p));
1915
  assert (!is_mmapped(p));
1916
  if (p != m->dv && p != m->top) {
1916
  if (p != m->dv && p != m->top) {
1917
    if (sz >= MIN_CHUNK_SIZE) {
1917
    if (sz >= MIN_CHUNK_SIZE) {
1918
      assert((sz & CHUNK_ALIGN_MASK) == 0);
1918
      assert((sz & CHUNK_ALIGN_MASK) == 0);
1919
      assert(is_aligned(chunk2mem(p)));
1919
      assert(is_aligned(chunk2mem(p)));
1920
      assert(next->prev_foot == sz);
1920
      assert(next->prev_foot == sz);
1921
      assert(pinuse(p));
1921
      assert(pinuse(p));
1922
      assert (next == m->top || cinuse(next));
1922
      assert (next == m->top || cinuse(next));
1923
      assert(p->fd->bk == p);
1923
      assert(p->fd->bk == p);
1924
      assert(p->bk->fd == p);
1924
      assert(p->bk->fd == p);
1925
    }
1925
    }
1926
    else  /* markers are always of size SIZE_T_SIZE */
1926
    else  /* markers are always of size SIZE_T_SIZE */
1927
      assert(sz == SIZE_T_SIZE);
1927
      assert(sz == SIZE_T_SIZE);
1928
  }
1928
  }
1929
}
1929
}
1930
 
1930
 
1931
/* Check properties of malloced chunks at the point they are malloced */
1931
/* Check properties of malloced chunks at the point they are malloced */
1932
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
1932
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
1933
  if (mem != 0) {
1933
  if (mem != 0) {
1934
    mchunkptr p = mem2chunk(mem);
1934
    mchunkptr p = mem2chunk(mem);
1935
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1935
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1936
    do_check_inuse_chunk(m, p);
1936
    do_check_inuse_chunk(m, p);
1937
    assert((sz & CHUNK_ALIGN_MASK) == 0);
1937
    assert((sz & CHUNK_ALIGN_MASK) == 0);
1938
    assert(sz >= MIN_CHUNK_SIZE);
1938
    assert(sz >= MIN_CHUNK_SIZE);
1939
    assert(sz >= s);
1939
    assert(sz >= s);
1940
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
1940
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
1941
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
1941
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
1942
  }
1942
  }
1943
}
1943
}
1944
 
1944
 
1945
/* Check a tree and its subtrees.  */
1945
/* Check a tree and its subtrees.  */
1946
static void do_check_tree(mstate m, tchunkptr t) {
1946
static void do_check_tree(mstate m, tchunkptr t) {
1947
  tchunkptr head = 0;
1947
  tchunkptr head = 0;
1948
  tchunkptr u = t;
1948
  tchunkptr u = t;
1949
  bindex_t tindex = t->index;
1949
  bindex_t tindex = t->index;
1950
  size_t tsize = chunksize(t);
1950
  size_t tsize = chunksize(t);
1951
  bindex_t idx;
1951
  bindex_t idx;
1952
  compute_tree_index(tsize, idx);
1952
  compute_tree_index(tsize, idx);
1953
  assert(tindex == idx);
1953
  assert(tindex == idx);
1954
  assert(tsize >= MIN_LARGE_SIZE);
1954
  assert(tsize >= MIN_LARGE_SIZE);
1955
  assert(tsize >= minsize_for_tree_index(idx));
1955
  assert(tsize >= minsize_for_tree_index(idx));
1956
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
1956
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
1957
 
1957
 
1958
  do { /* traverse through chain of same-sized nodes */
1958
  do { /* traverse through chain of same-sized nodes */
1959
    do_check_any_chunk(m, ((mchunkptr)u));
1959
    do_check_any_chunk(m, ((mchunkptr)u));
1960
    assert(u->index == tindex);
1960
    assert(u->index == tindex);
1961
    assert(chunksize(u) == tsize);
1961
    assert(chunksize(u) == tsize);
1962
    assert(!cinuse(u));
1962
    assert(!cinuse(u));
1963
    assert(!next_pinuse(u));
1963
    assert(!next_pinuse(u));
1964
    assert(u->fd->bk == u);
1964
    assert(u->fd->bk == u);
1965
    assert(u->bk->fd == u);
1965
    assert(u->bk->fd == u);
1966
    if (u->parent == 0) {
1966
    if (u->parent == 0) {
1967
      assert(u->child[0] == 0);
1967
      assert(u->child[0] == 0);
1968
      assert(u->child[1] == 0);
1968
      assert(u->child[1] == 0);
1969
    }
1969
    }
1970
    else {
1970
    else {
1971
      assert(head == 0); /* only one node on chain has parent */
1971
      assert(head == 0); /* only one node on chain has parent */
1972
      head = u;
1972
      head = u;
1973
      assert(u->parent != u);
1973
      assert(u->parent != u);
1974
      assert (u->parent->child[0] == u ||
1974
      assert (u->parent->child[0] == u ||
1975
              u->parent->child[1] == u ||
1975
              u->parent->child[1] == u ||
1976
              *((tbinptr*)(u->parent)) == u);
1976
              *((tbinptr*)(u->parent)) == u);
1977
      if (u->child[0] != 0) {
1977
      if (u->child[0] != 0) {
1978
        assert(u->child[0]->parent == u);
1978
        assert(u->child[0]->parent == u);
1979
        assert(u->child[0] != u);
1979
        assert(u->child[0] != u);
1980
        do_check_tree(m, u->child[0]);
1980
        do_check_tree(m, u->child[0]);
1981
      }
1981
      }
1982
      if (u->child[1] != 0) {
1982
      if (u->child[1] != 0) {
1983
        assert(u->child[1]->parent == u);
1983
        assert(u->child[1]->parent == u);
1984
        assert(u->child[1] != u);
1984
        assert(u->child[1] != u);
1985
        do_check_tree(m, u->child[1]);
1985
        do_check_tree(m, u->child[1]);
1986
      }
1986
      }
1987
      if (u->child[0] != 0 && u->child[1] != 0) {
1987
      if (u->child[0] != 0 && u->child[1] != 0) {
1988
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
1988
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
1989
      }
1989
      }
1990
    }
1990
    }
1991
    u = u->fd;
1991
    u = u->fd;
1992
  } while (u != t);
1992
  } while (u != t);
1993
  assert(head != 0);
1993
  assert(head != 0);
1994
}
1994
}
1995
 
1995
 
1996
/*  Check all the chunks in a treebin.  */
1996
/*  Check all the chunks in a treebin.  */
1997
static void do_check_treebin(mstate m, bindex_t i) {
1997
static void do_check_treebin(mstate m, bindex_t i) {
1998
  tbinptr* tb = treebin_at(m, i);
1998
  tbinptr* tb = treebin_at(m, i);
1999
  tchunkptr t = *tb;
1999
  tchunkptr t = *tb;
2000
  int empty = (m->treemap & (1U << i)) == 0;
2000
  int empty = (m->treemap & (1U << i)) == 0;
2001
  if (t == 0)
2001
  if (t == 0)
2002
    assert(empty);
2002
    assert(empty);
2003
  if (!empty)
2003
  if (!empty)
2004
    do_check_tree(m, t);
2004
    do_check_tree(m, t);
2005
}
2005
}
2006
 
2006
 
2007
/*  Check all the chunks in a smallbin.  */
2007
/*  Check all the chunks in a smallbin.  */
2008
static void do_check_smallbin(mstate m, bindex_t i) {
2008
static void do_check_smallbin(mstate m, bindex_t i) {
2009
  sbinptr b = smallbin_at(m, i);
2009
  sbinptr b = smallbin_at(m, i);
2010
  mchunkptr p = b->bk;
2010
  mchunkptr p = b->bk;
2011
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2011
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2012
  if (p == b)
2012
  if (p == b)
2013
    assert(empty);
2013
    assert(empty);
2014
  if (!empty) {
2014
  if (!empty) {
2015
    for (; p != b; p = p->bk) {
2015
    for (; p != b; p = p->bk) {
2016
      size_t size = chunksize(p);
2016
      size_t size = chunksize(p);
2017
      mchunkptr q;
2017
      mchunkptr q;
2018
      /* each chunk claims to be free */
2018
      /* each chunk claims to be free */
2019
      do_check_free_chunk(m, p);
2019
      do_check_free_chunk(m, p);
2020
      /* chunk belongs in bin */
2020
      /* chunk belongs in bin */
2021
      assert(small_index(size) == i);
2021
      assert(small_index(size) == i);
2022
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2022
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2023
      /* chunk is followed by an inuse chunk */
2023
      /* chunk is followed by an inuse chunk */
2024
      q = next_chunk(p);
2024
      q = next_chunk(p);
2025
      if (q->head != FENCEPOST_HEAD)
2025
      if (q->head != FENCEPOST_HEAD)
2026
        do_check_inuse_chunk(m, q);
2026
        do_check_inuse_chunk(m, q);
2027
    }
2027
    }
2028
  }
2028
  }
2029
}
2029
}
2030
 
2030
 
2031
/* Find x in a bin. Used in other check functions. */
2031
/* Find x in a bin. Used in other check functions. */
2032
static int bin_find(mstate m, mchunkptr x) {
2032
static int bin_find(mstate m, mchunkptr x) {
2033
  size_t size = chunksize(x);
2033
  size_t size = chunksize(x);
2034
  if (is_small(size)) {
2034
  if (is_small(size)) {
2035
    bindex_t sidx = small_index(size);
2035
    bindex_t sidx = small_index(size);
2036
    sbinptr b = smallbin_at(m, sidx);
2036
    sbinptr b = smallbin_at(m, sidx);
2037
    if (smallmap_is_marked(m, sidx)) {
2037
    if (smallmap_is_marked(m, sidx)) {
2038
      mchunkptr p = b;
2038
      mchunkptr p = b;
2039
      do {
2039
      do {
2040
        if (p == x)
2040
        if (p == x)
2041
          return 1;
2041
          return 1;
2042
      } while ((p = p->fd) != b);
2042
      } while ((p = p->fd) != b);
2043
    }
2043
    }
2044
  }
2044
  }
2045
  else {
2045
  else {
2046
    bindex_t tidx;
2046
    bindex_t tidx;
2047
    compute_tree_index(size, tidx);
2047
    compute_tree_index(size, tidx);
2048
    if (treemap_is_marked(m, tidx)) {
2048
    if (treemap_is_marked(m, tidx)) {
2049
      tchunkptr t = *treebin_at(m, tidx);
2049
      tchunkptr t = *treebin_at(m, tidx);
2050
      size_t sizebits = size << leftshift_for_tree_index(tidx);
2050
      size_t sizebits = size << leftshift_for_tree_index(tidx);
2051
      while (t != 0 && chunksize(t) != size) {
2051
      while (t != 0 && chunksize(t) != size) {
2052
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2052
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2053
        sizebits <<= 1;
2053
        sizebits <<= 1;
2054
      }
2054
      }
2055
      if (t != 0) {
2055
      if (t != 0) {
2056
        tchunkptr u = t;
2056
        tchunkptr u = t;
2057
        do {
2057
        do {
2058
          if (u == (tchunkptr)x)
2058
          if (u == (tchunkptr)x)
2059
            return 1;
2059
            return 1;
2060
        } while ((u = u->fd) != t);
2060
        } while ((u = u->fd) != t);
2061
      }
2061
      }
2062
    }
2062
    }
2063
  }
2063
  }
2064
  return 0;
2064
  return 0;
2065
}
2065
}
2066
 
2066
 
2067
/* Traverse each chunk and check it; return total */
2067
/* Traverse each chunk and check it; return total */
2068
static size_t traverse_and_check(mstate m) {
2068
static size_t traverse_and_check(mstate m) {
2069
  size_t sum = 0;
2069
  size_t sum = 0;
2070
  if (is_initialized(m)) {
2070
  if (is_initialized(m)) {
2071
    msegmentptr s = &m->seg;
2071
    msegmentptr s = &m->seg;
2072
    sum += m->topsize + TOP_FOOT_SIZE;
2072
    sum += m->topsize + TOP_FOOT_SIZE;
2073
    while (s != 0) {
2073
    while (s != 0) {
2074
      mchunkptr q = align_as_chunk(s->base);
2074
      mchunkptr q = align_as_chunk(s->base);
2075
      mchunkptr lastq = 0;
2075
      mchunkptr lastq = 0;
2076
      assert(pinuse(q));
2076
      assert(pinuse(q));
2077
      while (segment_holds(s, q) &&
2077
      while (segment_holds(s, q) &&
2078
             q != m->top && q->head != FENCEPOST_HEAD) {
2078
             q != m->top && q->head != FENCEPOST_HEAD) {
2079
        sum += chunksize(q);
2079
        sum += chunksize(q);
2080
        if (cinuse(q)) {
2080
        if (cinuse(q)) {
2081
          assert(!bin_find(m, q));
2081
          assert(!bin_find(m, q));
2082
          do_check_inuse_chunk(m, q);
2082
          do_check_inuse_chunk(m, q);
2083
        }
2083
        }
2084
        else {
2084
        else {
2085
          assert(q == m->dv || bin_find(m, q));
2085
          assert(q == m->dv || bin_find(m, q));
2086
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2086
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2087
          do_check_free_chunk(m, q);
2087
          do_check_free_chunk(m, q);
2088
        }
2088
        }
2089
        lastq = q;
2089
        lastq = q;
2090
        q = next_chunk(q);
2090
        q = next_chunk(q);
2091
      }
2091
      }
2092
      s = s->next;
2092
      s = s->next;
2093
    }
2093
    }
2094
  }
2094
  }
2095
  return sum;
2095
  return sum;
2096
}
2096
}
2097
 
2097
 
2098
/* Check all properties of malloc_state. */
2098
/* Check all properties of malloc_state. */
2099
static void do_check_malloc_state(mstate m) {
2099
static void do_check_malloc_state(mstate m) {
2100
  bindex_t i;
2100
  bindex_t i;
2101
  size_t total;
2101
  size_t total;
2102
  /* check bins */
2102
  /* check bins */
2103
  for (i = 0; i < NSMALLBINS; ++i)
2103
  for (i = 0; i < NSMALLBINS; ++i)
2104
    do_check_smallbin(m, i);
2104
    do_check_smallbin(m, i);
2105
  for (i = 0; i < NTREEBINS; ++i)
2105
  for (i = 0; i < NTREEBINS; ++i)
2106
    do_check_treebin(m, i);
2106
    do_check_treebin(m, i);
2107
 
2107
 
2108
  if (m->dvsize != 0) { /* check dv chunk */
2108
  if (m->dvsize != 0) { /* check dv chunk */
2109
    do_check_any_chunk(m, m->dv);
2109
    do_check_any_chunk(m, m->dv);
2110
    assert(m->dvsize == chunksize(m->dv));
2110
    assert(m->dvsize == chunksize(m->dv));
2111
    assert(m->dvsize >= MIN_CHUNK_SIZE);
2111
    assert(m->dvsize >= MIN_CHUNK_SIZE);
2112
    assert(bin_find(m, m->dv) == 0);
2112
    assert(bin_find(m, m->dv) == 0);
2113
  }
2113
  }
2114
 
2114
 
2115
  if (m->top != 0) {   /* check top chunk */
2115
  if (m->top != 0) {   /* check top chunk */
2116
    do_check_top_chunk(m, m->top);
2116
    do_check_top_chunk(m, m->top);
2117
    assert(m->topsize == chunksize(m->top));
2117
    assert(m->topsize == chunksize(m->top));
2118
    assert(m->topsize > 0);
2118
    assert(m->topsize > 0);
2119
    assert(bin_find(m, m->top) == 0);
2119
    assert(bin_find(m, m->top) == 0);
2120
  }
2120
  }
2121
 
2121
 
2122
  total = traverse_and_check(m);
2122
  total = traverse_and_check(m);
2123
  assert(total <= m->footprint);
2123
  assert(total <= m->footprint);
2124
  assert(m->footprint <= m->max_footprint);
2124
  assert(m->footprint <= m->max_footprint);
2125
}
2125
}
2126
#endif /* DEBUG */
2126
#endif /* DEBUG */
2127
 
2127
 
2128
/* ----------------------------- statistics ------------------------------ */
2128
/* ----------------------------- statistics ------------------------------ */
2129
 
2129
 
2130
#if !NO_MALLINFO
2130
#if !NO_MALLINFO
2131
static struct mallinfo internal_mallinfo(mstate m) {
2131
static struct mallinfo internal_mallinfo(mstate m) {
2132
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2132
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2133
  if (!PREACTION(m)) {
2133
  if (!PREACTION(m)) {
2134
    check_malloc_state(m);
2134
    check_malloc_state(m);
2135
    if (is_initialized(m)) {
2135
    if (is_initialized(m)) {
2136
      size_t nfree = SIZE_T_ONE; /* top always free */
2136
      size_t nfree = SIZE_T_ONE; /* top always free */
2137
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
2137
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
2138
      size_t sum = mfree;
2138
      size_t sum = mfree;
2139
      msegmentptr s = &m->seg;
2139
      msegmentptr s = &m->seg;
2140
      while (s != 0) {
2140
      while (s != 0) {
2141
        mchunkptr q = align_as_chunk(s->base);
2141
        mchunkptr q = align_as_chunk(s->base);
2142
        while (segment_holds(s, q) &&
2142
        while (segment_holds(s, q) &&
2143
               q != m->top && q->head != FENCEPOST_HEAD) {
2143
               q != m->top && q->head != FENCEPOST_HEAD) {
2144
          size_t sz = chunksize(q);
2144
          size_t sz = chunksize(q);
2145
          sum += sz;
2145
          sum += sz;
2146
          if (!cinuse(q)) {
2146
          if (!cinuse(q)) {
2147
            mfree += sz;
2147
            mfree += sz;
2148
            ++nfree;
2148
            ++nfree;
2149
          }
2149
          }
2150
          q = next_chunk(q);
2150
          q = next_chunk(q);
2151
        }
2151
        }
2152
        s = s->next;
2152
        s = s->next;
2153
      }
2153
      }
2154
 
2154
 
2155
      nm.arena    = sum;
2155
      nm.arena    = sum;
2156
      nm.ordblks  = nfree;
2156
      nm.ordblks  = nfree;
2157
      nm.hblkhd   = m->footprint - sum;
2157
      nm.hblkhd   = m->footprint - sum;
2158
      nm.usmblks  = m->max_footprint;
2158
      nm.usmblks  = m->max_footprint;
2159
      nm.uordblks = m->footprint - mfree;
2159
      nm.uordblks = m->footprint - mfree;
2160
      nm.fordblks = mfree;
2160
      nm.fordblks = mfree;
2161
      nm.keepcost = m->topsize;
2161
      nm.keepcost = m->topsize;
2162
    }
2162
    }
2163
 
2163
 
2164
    POSTACTION(m);
2164
    POSTACTION(m);
2165
  }
2165
  }
2166
  return nm;
2166
  return nm;
2167
}
2167
}
2168
#endif /* !NO_MALLINFO */
2168
#endif /* !NO_MALLINFO */
2169
 
2169
 
2170
static void internal_malloc_stats(mstate m) {
2170
static void internal_malloc_stats(mstate m) {
2171
  if (!PREACTION(m)) {
2171
  if (!PREACTION(m)) {
2172
    size_t maxfp = 0;
2172
    size_t maxfp = 0;
2173
    size_t fp = 0;
2173
    size_t fp = 0;
2174
    size_t used = 0;
2174
    size_t used = 0;
2175
    check_malloc_state(m);
2175
    check_malloc_state(m);
2176
    if (is_initialized(m)) {
2176
    if (is_initialized(m)) {
2177
      msegmentptr s = &m->seg;
2177
      msegmentptr s = &m->seg;
2178
      maxfp = m->max_footprint;
2178
      maxfp = m->max_footprint;
2179
      fp = m->footprint;
2179
      fp = m->footprint;
2180
      used = fp - (m->topsize + TOP_FOOT_SIZE);
2180
      used = fp - (m->topsize + TOP_FOOT_SIZE);
2181
 
2181
 
2182
      while (s != 0) {
2182
      while (s != 0) {
2183
        mchunkptr q = align_as_chunk(s->base);
2183
        mchunkptr q = align_as_chunk(s->base);
2184
        while (segment_holds(s, q) &&
2184
        while (segment_holds(s, q) &&
2185
               q != m->top && q->head != FENCEPOST_HEAD) {
2185
               q != m->top && q->head != FENCEPOST_HEAD) {
2186
          if (!cinuse(q))
2186
          if (!cinuse(q))
2187
            used -= chunksize(q);
2187
            used -= chunksize(q);
2188
          q = next_chunk(q);
2188
          q = next_chunk(q);
2189
        }
2189
        }
2190
        s = s->next;
2190
        s = s->next;
2191
      }
2191
      }
2192
    }
2192
    }
2193
 
2193
 
2194
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2194
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2195
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2195
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2196
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2196
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2197
 
2197
 
2198
    POSTACTION(m);
2198
    POSTACTION(m);
2199
  }
2199
  }
2200
}
2200
}
2201
 
2201
 
2202
/* ----------------------- Operations on smallbins ----------------------- */
2202
/* ----------------------- Operations on smallbins ----------------------- */
2203
 
2203
 
2204
/*
2204
/*
2205
  Various forms of linking and unlinking are defined as macros.  Even
2205
  Various forms of linking and unlinking are defined as macros.  Even
2206
  the ones for trees, which are very long but have very short typical
2206
  the ones for trees, which are very long but have very short typical
2207
  paths.  This is ugly but reduces reliance on inlining support of
2207
  paths.  This is ugly but reduces reliance on inlining support of
2208
  compilers.
2208
  compilers.
2209
*/
2209
*/
2210
 
2210
 
2211
/* Link a free chunk into a smallbin  */
2211
/* Link a free chunk into a smallbin  */
2212
#define insert_small_chunk(M, P, S) {\
2212
#define insert_small_chunk(M, P, S) {\
2213
  bindex_t I  = small_index(S);\
2213
  bindex_t I  = small_index(S);\
2214
  mchunkptr B = smallbin_at(M, I);\
2214
  mchunkptr B = smallbin_at(M, I);\
2215
  mchunkptr F = B;\
2215
  mchunkptr F = B;\
2216
  assert(S >= MIN_CHUNK_SIZE);\
2216
  assert(S >= MIN_CHUNK_SIZE);\
2217
  if (!smallmap_is_marked(M, I))\
2217
  if (!smallmap_is_marked(M, I))\
2218
    mark_smallmap(M, I);\
2218
    mark_smallmap(M, I);\
2219
  else if (RTCHECK(ok_address(M, B->fd)))\
2219
  else if (RTCHECK(ok_address(M, B->fd)))\
2220
    F = B->fd;\
2220
    F = B->fd;\
2221
  else {\
2221
  else {\
2222
    CORRUPTION_ERROR_ACTION(M);\
2222
    CORRUPTION_ERROR_ACTION(M);\
2223
  }\
2223
  }\
2224
  B->fd = P;\
2224
  B->fd = P;\
2225
  F->bk = P;\
2225
  F->bk = P;\
2226
  P->fd = F;\
2226
  P->fd = F;\
2227
  P->bk = B;\
2227
  P->bk = B;\
2228
}
2228
}
2229
 
2229
 
2230
/* Unlink a chunk from a smallbin  */
2230
/* Unlink a chunk from a smallbin  */
2231
#define unlink_small_chunk(M, P, S) {\
2231
#define unlink_small_chunk(M, P, S) {\
2232
  mchunkptr F = P->fd;\
2232
  mchunkptr F = P->fd;\
2233
  mchunkptr B = P->bk;\
2233
  mchunkptr B = P->bk;\
2234
  bindex_t I = small_index(S);\
2234
  bindex_t I = small_index(S);\
2235
  assert(P != B);\
2235
  assert(P != B);\
2236
  assert(P != F);\
2236
  assert(P != F);\
2237
  assert(chunksize(P) == small_index2size(I));\
2237
  assert(chunksize(P) == small_index2size(I));\
2238
  if (F == B)\
2238
  if (F == B)\
2239
    clear_smallmap(M, I);\
2239
    clear_smallmap(M, I);\
2240
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2240
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2241
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2241
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2242
    F->bk = B;\
2242
    F->bk = B;\
2243
    B->fd = F;\
2243
    B->fd = F;\
2244
  }\
2244
  }\
2245
  else {\
2245
  else {\
2246
    CORRUPTION_ERROR_ACTION(M);\
2246
    CORRUPTION_ERROR_ACTION(M);\
2247
  }\
2247
  }\
2248
}
2248
}
2249
 
2249
 
2250
/* Unlink the first chunk from a smallbin */
2250
/* Unlink the first chunk from a smallbin */
2251
#define unlink_first_small_chunk(M, B, P, I) {\
2251
#define unlink_first_small_chunk(M, B, P, I) {\
2252
  mchunkptr F = P->fd;\
2252
  mchunkptr F = P->fd;\
2253
  assert(P != B);\
2253
  assert(P != B);\
2254
  assert(P != F);\
2254
  assert(P != F);\
2255
  assert(chunksize(P) == small_index2size(I));\
2255
  assert(chunksize(P) == small_index2size(I));\
2256
  if (B == F)\
2256
  if (B == F)\
2257
    clear_smallmap(M, I);\
2257
    clear_smallmap(M, I);\
2258
  else if (RTCHECK(ok_address(M, F))) {\
2258
  else if (RTCHECK(ok_address(M, F))) {\
2259
    B->fd = F;\
2259
    B->fd = F;\
2260
    F->bk = B;\
2260
    F->bk = B;\
2261
  }\
2261
  }\
2262
  else {\
2262
  else {\
2263
    CORRUPTION_ERROR_ACTION(M);\
2263
    CORRUPTION_ERROR_ACTION(M);\
2264
  }\
2264
  }\
2265
}
2265
}
2266
 
2266
 
2267
/* Replace dv node, binning the old one */
2267
/* Replace dv node, binning the old one */
2268
/* Used only when dvsize known to be small */
2268
/* Used only when dvsize known to be small */
2269
#define replace_dv(M, P, S) {\
2269
#define replace_dv(M, P, S) {\
2270
  size_t DVS = M->dvsize;\
2270
  size_t DVS = M->dvsize;\
2271
  if (DVS != 0) {\
2271
  if (DVS != 0) {\
2272
    mchunkptr DV = M->dv;\
2272
    mchunkptr DV = M->dv;\
2273
    assert(is_small(DVS));\
2273
    assert(is_small(DVS));\
2274
    insert_small_chunk(M, DV, DVS);\
2274
    insert_small_chunk(M, DV, DVS);\
2275
  }\
2275
  }\
2276
  M->dvsize = S;\
2276
  M->dvsize = S;\
2277
  M->dv = P;\
2277
  M->dv = P;\
2278
}
2278
}
2279
 
2279
 
2280
/* ------------------------- Operations on trees ------------------------- */
2280
/* ------------------------- Operations on trees ------------------------- */
2281
 
2281
 
2282
/* Insert chunk into tree */
2282
/* Insert chunk into tree */
2283
#define insert_large_chunk(M, X, S) {\
2283
#define insert_large_chunk(M, X, S) {\
2284
  tbinptr* H;\
2284
  tbinptr* H;\
2285
  bindex_t I;\
2285
  bindex_t I;\
2286
  compute_tree_index(S, I);\
2286
  compute_tree_index(S, I);\
2287
  H = treebin_at(M, I);\
2287
  H = treebin_at(M, I);\
2288
  X->index = I;\
2288
  X->index = I;\
2289
  X->child[0] = X->child[1] = 0;\
2289
  X->child[0] = X->child[1] = 0;\
2290
  if (!treemap_is_marked(M, I)) {\
2290
  if (!treemap_is_marked(M, I)) {\
2291
    mark_treemap(M, I);\
2291
    mark_treemap(M, I);\
2292
    *H = X;\
2292
    *H = X;\
2293
    X->parent = (tchunkptr)H;\
2293
    X->parent = (tchunkptr)H;\
2294
    X->fd = X->bk = X;\
2294
    X->fd = X->bk = X;\
2295
  }\
2295
  }\
2296
  else {\
2296
  else {\
2297
    tchunkptr T = *H;\
2297
    tchunkptr T = *H;\
2298
    size_t K = S << leftshift_for_tree_index(I);\
2298
    size_t K = S << leftshift_for_tree_index(I);\
2299
    for (;;) {\
2299
    for (;;) {\
2300
      if (chunksize(T) != S) {\
2300
      if (chunksize(T) != S) {\
2301
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2301
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2302
        K <<= 1;\
2302
        K <<= 1;\
2303
        if (*C != 0)\
2303
        if (*C != 0)\
2304
          T = *C;\
2304
          T = *C;\
2305
        else if (RTCHECK(ok_address(M, C))) {\
2305
        else if (RTCHECK(ok_address(M, C))) {\
2306
          *C = X;\
2306
          *C = X;\
2307
          X->parent = T;\
2307
          X->parent = T;\
2308
          X->fd = X->bk = X;\
2308
          X->fd = X->bk = X;\
2309
          break;\
2309
          break;\
2310
        }\
2310
        }\
2311
        else {\
2311
        else {\
2312
          CORRUPTION_ERROR_ACTION(M);\
2312
          CORRUPTION_ERROR_ACTION(M);\
2313
          break;\
2313
          break;\
2314
        }\
2314
        }\
2315
      }\
2315
      }\
2316
      else {\
2316
      else {\
2317
        tchunkptr F = T->fd;\
2317
        tchunkptr F = T->fd;\
2318
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2318
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2319
          T->fd = F->bk = X;\
2319
          T->fd = F->bk = X;\
2320
          X->fd = F;\
2320
          X->fd = F;\
2321
          X->bk = T;\
2321
          X->bk = T;\
2322
          X->parent = 0;\
2322
          X->parent = 0;\
2323
          break;\
2323
          break;\
2324
        }\
2324
        }\
2325
        else {\
2325
        else {\
2326
          CORRUPTION_ERROR_ACTION(M);\
2326
          CORRUPTION_ERROR_ACTION(M);\
2327
          break;\
2327
          break;\
2328
        }\
2328
        }\
2329
      }\
2329
      }\
2330
    }\
2330
    }\
2331
  }\
2331
  }\
2332
}
2332
}
2333
 
2333
 
2334
/*
2334
/*
2335
  Unlink steps:
2335
  Unlink steps:
2336
 
2336
 
2337
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2337
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2338
     and choose its bk node as its replacement.
2338
     and choose its bk node as its replacement.
2339
  2. If x was the last node of its size, but not a leaf node, it must
2339
  2. If x was the last node of its size, but not a leaf node, it must
2340
     be replaced with a leaf node (not merely one with an open left or
2340
     be replaced with a leaf node (not merely one with an open left or
2341
     right), to make sure that lefts and rights of descendents
2341
     right), to make sure that lefts and rights of descendents
2342
     correspond properly to bit masks.  We use the rightmost descendent
2342
     correspond properly to bit masks.  We use the rightmost descendent
2343
     of x.  We could use any other leaf, but this is easy to locate and
2343
     of x.  We could use any other leaf, but this is easy to locate and
2344
     tends to counteract removal of leftmosts elsewhere, and so keeps
2344
     tends to counteract removal of leftmosts elsewhere, and so keeps
2345
     paths shorter than minimally guaranteed.  This doesn't loop much
2345
     paths shorter than minimally guaranteed.  This doesn't loop much
2346
     because on average a node in a tree is near the bottom.
2346
     because on average a node in a tree is near the bottom.
2347
  3. If x is the base of a chain (i.e., has parent links) relink
2347
  3. If x is the base of a chain (i.e., has parent links) relink
2348
     x's parent and children to x's replacement (or null if none).
2348
     x's parent and children to x's replacement (or null if none).
2349
*/
2349
*/
2350
 
2350
 
2351
#define unlink_large_chunk(M, X) {\
2351
#define unlink_large_chunk(M, X) {\
2352
  tchunkptr XP = X->parent;\
2352
  tchunkptr XP = X->parent;\
2353
  tchunkptr R;\
2353
  tchunkptr R;\
2354
  if (X->bk != X) {\
2354
  if (X->bk != X) {\
2355
    tchunkptr F = X->fd;\
2355
    tchunkptr F = X->fd;\
2356
    R = X->bk;\
2356
    R = X->bk;\
2357
    if (RTCHECK(ok_address(M, F))) {\
2357
    if (RTCHECK(ok_address(M, F))) {\
2358
      F->bk = R;\
2358
      F->bk = R;\
2359
      R->fd = F;\
2359
      R->fd = F;\
2360
    }\
2360
    }\
2361
    else {\
2361
    else {\
2362
      CORRUPTION_ERROR_ACTION(M);\
2362
      CORRUPTION_ERROR_ACTION(M);\
2363
    }\
2363
    }\
2364
  }\
2364
  }\
2365
  else {\
2365
  else {\
2366
    tchunkptr* RP;\
2366
    tchunkptr* RP;\
2367
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2367
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2368
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2368
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2369
      tchunkptr* CP;\
2369
      tchunkptr* CP;\
2370
      while ((*(CP = &(R->child[1])) != 0) ||\
2370
      while ((*(CP = &(R->child[1])) != 0) ||\
2371
             (*(CP = &(R->child[0])) != 0)) {\
2371
             (*(CP = &(R->child[0])) != 0)) {\
2372
        R = *(RP = CP);\
2372
        R = *(RP = CP);\
2373
      }\
2373
      }\
2374
      if (RTCHECK(ok_address(M, RP)))\
2374
      if (RTCHECK(ok_address(M, RP)))\
2375
        *RP = 0;\
2375
        *RP = 0;\
2376
      else {\
2376
      else {\
2377
        CORRUPTION_ERROR_ACTION(M);\
2377
        CORRUPTION_ERROR_ACTION(M);\
2378
      }\
2378
      }\
2379
    }\
2379
    }\
2380
  }\
2380
  }\
2381
  if (XP != 0) {\
2381
  if (XP != 0) {\
2382
    tbinptr* H = treebin_at(M, X->index);\
2382
    tbinptr* H = treebin_at(M, X->index);\
2383
    if (X == *H) {\
2383
    if (X == *H) {\
2384
      if ((*H = R) == 0) \
2384
      if ((*H = R) == 0) \
2385
        clear_treemap(M, X->index);\
2385
        clear_treemap(M, X->index);\
2386
    }\
2386
    }\
2387
    else if (RTCHECK(ok_address(M, XP))) {\
2387
    else if (RTCHECK(ok_address(M, XP))) {\
2388
      if (XP->child[0] == X) \
2388
      if (XP->child[0] == X) \
2389
        XP->child[0] = R;\
2389
        XP->child[0] = R;\
2390
      else \
2390
      else \
2391
        XP->child[1] = R;\
2391
        XP->child[1] = R;\
2392
    }\
2392
    }\
2393
    else\
2393
    else\
2394
      CORRUPTION_ERROR_ACTION(M);\
2394
      CORRUPTION_ERROR_ACTION(M);\
2395
    if (R != 0) {\
2395
    if (R != 0) {\
2396
      if (RTCHECK(ok_address(M, R))) {\
2396
      if (RTCHECK(ok_address(M, R))) {\
2397
        tchunkptr C0, C1;\
2397
        tchunkptr C0, C1;\
2398
        R->parent = XP;\
2398
        R->parent = XP;\
2399
        if ((C0 = X->child[0]) != 0) {\
2399
        if ((C0 = X->child[0]) != 0) {\
2400
          if (RTCHECK(ok_address(M, C0))) {\
2400
          if (RTCHECK(ok_address(M, C0))) {\
2401
            R->child[0] = C0;\
2401
            R->child[0] = C0;\
2402
            C0->parent = R;\
2402
            C0->parent = R;\
2403
          }\
2403
          }\
2404
          else\
2404
          else\
2405
            CORRUPTION_ERROR_ACTION(M);\
2405
            CORRUPTION_ERROR_ACTION(M);\
2406
        }\
2406
        }\
2407
        if ((C1 = X->child[1]) != 0) {\
2407
        if ((C1 = X->child[1]) != 0) {\
2408
          if (RTCHECK(ok_address(M, C1))) {\
2408
          if (RTCHECK(ok_address(M, C1))) {\
2409
            R->child[1] = C1;\
2409
            R->child[1] = C1;\
2410
            C1->parent = R;\
2410
            C1->parent = R;\
2411
          }\
2411
          }\
2412
          else\
2412
          else\
2413
            CORRUPTION_ERROR_ACTION(M);\
2413
            CORRUPTION_ERROR_ACTION(M);\
2414
        }\
2414
        }\
2415
      }\
2415
      }\
2416
      else\
2416
      else\
2417
        CORRUPTION_ERROR_ACTION(M);\
2417
        CORRUPTION_ERROR_ACTION(M);\
2418
    }\
2418
    }\
2419
  }\
2419
  }\
2420
}
2420
}
2421
 
2421
 
2422
/* Relays to large vs small bin operations */
2422
/* Relays to large vs small bin operations */
2423
 
2423
 
2424
#define insert_chunk(M, P, S)\
2424
#define insert_chunk(M, P, S)\
2425
  if (is_small(S)) insert_small_chunk(M, P, S)\
2425
  if (is_small(S)) insert_small_chunk(M, P, S)\
2426
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2426
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2427
 
2427
 
2428
#define unlink_chunk(M, P, S)\
2428
#define unlink_chunk(M, P, S)\
2429
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2429
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2430
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2430
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2431
 
2431
 
2432
 
2432
 
2433
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2433
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2434
 
2434
 
2435
#if ONLY_MSPACES
2435
#if ONLY_MSPACES
2436
#define internal_malloc(m, b) mspace_malloc(m, b)
2436
#define internal_malloc(m, b) mspace_malloc(m, b)
2437
#define internal_free(m, mem) mspace_free(m,mem);
2437
#define internal_free(m, mem) mspace_free(m,mem);
2438
#else /* ONLY_MSPACES */
2438
#else /* ONLY_MSPACES */
2439
#if MSPACES
2439
#if MSPACES
2440
#define internal_malloc(m, b)\
2440
#define internal_malloc(m, b)\
2441
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2441
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2442
#define internal_free(m, mem)\
2442
#define internal_free(m, mem)\
2443
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2443
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2444
#else /* MSPACES */
2444
#else /* MSPACES */
2445
#define internal_malloc(m, b) dlmalloc(b)
2445
#define internal_malloc(m, b) dlmalloc(b)
2446
#define internal_free(m, mem) dlfree(mem)
2446
#define internal_free(m, mem) dlfree(mem)
2447
#endif /* MSPACES */
2447
#endif /* MSPACES */
2448
#endif /* ONLY_MSPACES */
2448
#endif /* ONLY_MSPACES */
2449
 
2449
 
2450
/* -----------------------  Direct-mmapping chunks ----------------------- */
2450
/* -----------------------  Direct-mmapping chunks ----------------------- */
2451
 
2451
 
2452
/*
2452
/*
2453
  Directly mmapped chunks are set up with an offset to the start of
2453
  Directly mmapped chunks are set up with an offset to the start of
2454
  the mmapped region stored in the prev_foot field of the chunk. This
2454
  the mmapped region stored in the prev_foot field of the chunk. This
2455
  allows reconstruction of the required argument to MUNMAP when freed,
2455
  allows reconstruction of the required argument to MUNMAP when freed,
2456
  and also allows adjustment of the returned chunk to meet alignment
2456
  and also allows adjustment of the returned chunk to meet alignment
2457
  requirements (especially in memalign).  There is also enough space
2457
  requirements (especially in memalign).  There is also enough space
2458
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2458
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2459
  the PINUSE bit so frees can be checked.
2459
  the PINUSE bit so frees can be checked.
2460
*/
2460
*/
2461
 
2461
 
2462
/* Malloc using mmap */
2462
/* Malloc using mmap */
2463
static void* mmap_alloc(mstate m, size_t nb) {
2463
static void* mmap_alloc(mstate m, size_t nb) {
2464
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2464
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2465
  if (mmsize > nb) {     /* Check for wrap around 0 */
2465
  if (mmsize > nb) {     /* Check for wrap around 0 */
2466
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2466
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2467
    if (mm != CMFAIL) {
2467
    if (mm != CMFAIL) {
2468
      size_t offset = align_offset(chunk2mem(mm));
2468
      size_t offset = align_offset(chunk2mem(mm));
2469
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2469
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2470
      mchunkptr p = (mchunkptr)(mm + offset);
2470
      mchunkptr p = (mchunkptr)(mm + offset);
2471
      p->prev_foot = offset | IS_MMAPPED_BIT;
2471
      p->prev_foot = offset | IS_MMAPPED_BIT;
2472
      (p)->head = (psize|CINUSE_BIT);
2472
      (p)->head = (psize|CINUSE_BIT);
2473
      mark_inuse_foot(m, p, psize);
2473
      mark_inuse_foot(m, p, psize);
2474
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2474
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2475
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2475
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2476
 
2476
 
2477
      if (mm < m->least_addr)
2477
      if (mm < m->least_addr)
2478
        m->least_addr = mm;
2478
        m->least_addr = mm;
2479
      if ((m->footprint += mmsize) > m->max_footprint)
2479
      if ((m->footprint += mmsize) > m->max_footprint)
2480
        m->max_footprint = m->footprint;
2480
        m->max_footprint = m->footprint;
2481
      assert(is_aligned(chunk2mem(p)));
2481
      assert(is_aligned(chunk2mem(p)));
2482
      check_mmapped_chunk(m, p);
2482
      check_mmapped_chunk(m, p);
2483
      return chunk2mem(p);
2483
      return chunk2mem(p);
2484
    }
2484
    }
2485
  }
2485
  }
2486
  return 0;
2486
  return 0;
2487
}
2487
}
2488
 
2488
 
2489
/* Realloc using mmap */
2489
/* Realloc using mmap */
2490
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2490
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2491
  size_t oldsize = chunksize(oldp);
2491
  size_t oldsize = chunksize(oldp);
2492
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2492
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2493
    return 0;
2493
    return 0;
2494
  /* Keep old chunk if big enough but not too big */
2494
  /* Keep old chunk if big enough but not too big */
2495
  if (oldsize >= nb + SIZE_T_SIZE &&
2495
  if (oldsize >= nb + SIZE_T_SIZE &&
2496
      (oldsize - nb) <= (mparams.granularity << 1))
2496
      (oldsize - nb) <= (mparams.granularity << 1))
2497
    return oldp;
2497
    return oldp;
2498
  else {
2498
  else {
2499
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2499
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2500
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2500
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2501
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2501
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2502
                                         CHUNK_ALIGN_MASK);
2502
                                         CHUNK_ALIGN_MASK);
2503
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2503
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2504
                                  oldmmsize, newmmsize, 1);
2504
                                  oldmmsize, newmmsize, 1);
2505
    if (cp != CMFAIL) {
2505
    if (cp != CMFAIL) {
2506
      mchunkptr newp = (mchunkptr)(cp + offset);
2506
      mchunkptr newp = (mchunkptr)(cp + offset);
2507
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2507
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2508
      newp->head = (psize|CINUSE_BIT);
2508
      newp->head = (psize|CINUSE_BIT);
2509
      mark_inuse_foot(m, newp, psize);
2509
      mark_inuse_foot(m, newp, psize);
2510
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2510
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2511
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2511
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2512
 
2512
 
2513
      if (cp < m->least_addr)
2513
      if (cp < m->least_addr)
2514
        m->least_addr = cp;
2514
        m->least_addr = cp;
2515
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2515
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2516
        m->max_footprint = m->footprint;
2516
        m->max_footprint = m->footprint;
2517
      check_mmapped_chunk(m, newp);
2517
      check_mmapped_chunk(m, newp);
2518
      return newp;
2518
      return newp;
2519
    }
2519
    }
2520
  }
2520
  }
2521
  return 0;
2521
  return 0;
2522
}
2522
}
2523
 
2523
 
2524
/* -------------------------- mspace management -------------------------- */
2524
/* -------------------------- mspace management -------------------------- */
2525
 
2525
 
2526
/* Initialize top chunk and its size */
2526
/* Initialize top chunk and its size */
2527
static void init_top(mstate m, mchunkptr p, size_t psize) {
2527
static void init_top(mstate m, mchunkptr p, size_t psize) {
2528
  /* Ensure alignment */
2528
  /* Ensure alignment */
2529
  size_t offset = align_offset(chunk2mem(p));
2529
  size_t offset = align_offset(chunk2mem(p));
2530
  p = (mchunkptr)((char*)p + offset);
2530
  p = (mchunkptr)((char*)p + offset);
2531
  psize -= offset;
2531
  psize -= offset;
2532
 
2532
 
2533
  m->top = p;
2533
  m->top = p;
2534
  m->topsize = psize;
2534
  m->topsize = psize;
2535
  p->head = psize | PINUSE_BIT;
2535
  p->head = psize | PINUSE_BIT;
2536
  /* set size of fake trailing chunk holding overhead space only once */
2536
  /* set size of fake trailing chunk holding overhead space only once */
2537
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2537
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2538
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2538
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2539
}
2539
}
2540
 
2540
 
2541
/* Initialize bins for a new mstate that is otherwise zeroed out */
2541
/* Initialize bins for a new mstate that is otherwise zeroed out */
2542
static void init_bins(mstate m) {
2542
static void init_bins(mstate m) {
2543
  /* Establish circular links for smallbins */
2543
  /* Establish circular links for smallbins */
2544
  bindex_t i;
2544
  bindex_t i;
2545
  for (i = 0; i < NSMALLBINS; ++i) {
2545
  for (i = 0; i < NSMALLBINS; ++i) {
2546
    sbinptr bin = smallbin_at(m,i);
2546
    sbinptr bin = smallbin_at(m,i);
2547
    bin->fd = bin->bk = bin;
2547
    bin->fd = bin->bk = bin;
2548
  }
2548
  }
2549
}
2549
}
2550
 
2550
 
2551
#if PROCEED_ON_ERROR
2551
#if PROCEED_ON_ERROR
2552
 
2552
 
2553
/* default corruption action */
2553
/* default corruption action */
2554
static void reset_on_error(mstate m) {
2554
static void reset_on_error(mstate m) {
2555
  int i;
2555
  int i;
2556
  ++malloc_corruption_error_count;
2556
  ++malloc_corruption_error_count;
2557
  /* Reinitialize fields to forget about all memory */
2557
  /* Reinitialize fields to forget about all memory */
2558
  m->smallbins = m->treebins = 0;
2558
  m->smallbins = m->treebins = 0;
2559
  m->dvsize = m->topsize = 0;
2559
  m->dvsize = m->topsize = 0;
2560
  m->seg.base = 0;
2560
  m->seg.base = 0;
2561
  m->seg.size = 0;
2561
  m->seg.size = 0;
2562
  m->seg.next = 0;
2562
  m->seg.next = 0;
2563
  m->top = m->dv = 0;
2563
  m->top = m->dv = 0;
2564
  for (i = 0; i < NTREEBINS; ++i)
2564
  for (i = 0; i < NTREEBINS; ++i)
2565
    *treebin_at(m, i) = 0;
2565
    *treebin_at(m, i) = 0;
2566
  init_bins(m);
2566
  init_bins(m);
2567
}
2567
}
2568
#endif /* PROCEED_ON_ERROR */
2568
#endif /* PROCEED_ON_ERROR */
2569
 
2569
 
2570
/* Allocate chunk and prepend remainder with chunk in successor base. */
2570
/* Allocate chunk and prepend remainder with chunk in successor base. */
2571
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2571
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2572
                           size_t nb) {
2572
                           size_t nb) {
2573
  mchunkptr p = align_as_chunk(newbase);
2573
  mchunkptr p = align_as_chunk(newbase);
2574
  mchunkptr oldfirst = align_as_chunk(oldbase);
2574
  mchunkptr oldfirst = align_as_chunk(oldbase);
2575
  size_t psize = (char*)oldfirst - (char*)p;
2575
  size_t psize = (char*)oldfirst - (char*)p;
2576
  mchunkptr q = chunk_plus_offset(p, nb);
2576
  mchunkptr q = chunk_plus_offset(p, nb);
2577
  size_t qsize = psize - nb;
2577
  size_t qsize = psize - nb;
2578
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2578
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2579
 
2579
 
2580
  assert((char*)oldfirst > (char*)q);
2580
  assert((char*)oldfirst > (char*)q);
2581
  assert(pinuse(oldfirst));
2581
  assert(pinuse(oldfirst));
2582
  assert(qsize >= MIN_CHUNK_SIZE);
2582
  assert(qsize >= MIN_CHUNK_SIZE);
2583
 
2583
 
2584
  /* consolidate remainder with first chunk of old base */
2584
  /* consolidate remainder with first chunk of old base */
2585
  if (oldfirst == m->top) {
2585
  if (oldfirst == m->top) {
2586
    size_t tsize = m->topsize += qsize;
2586
    size_t tsize = m->topsize += qsize;
2587
    m->top = q;
2587
    m->top = q;
2588
    q->head = tsize | PINUSE_BIT;
2588
    q->head = tsize | PINUSE_BIT;
2589
    check_top_chunk(m, q);
2589
    check_top_chunk(m, q);
2590
  }
2590
  }
2591
  else if (oldfirst == m->dv) {
2591
  else if (oldfirst == m->dv) {
2592
    size_t dsize = m->dvsize += qsize;
2592
    size_t dsize = m->dvsize += qsize;
2593
    m->dv = q;
2593
    m->dv = q;
2594
    set_size_and_pinuse_of_free_chunk(q, dsize);
2594
    set_size_and_pinuse_of_free_chunk(q, dsize);
2595
  }
2595
  }
2596
  else {
2596
  else {
2597
    if (!cinuse(oldfirst)) {
2597
    if (!cinuse(oldfirst)) {
2598
      size_t nsize = chunksize(oldfirst);
2598
      size_t nsize = chunksize(oldfirst);
2599
      unlink_chunk(m, oldfirst, nsize);
2599
      unlink_chunk(m, oldfirst, nsize);
2600
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2600
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2601
      qsize += nsize;
2601
      qsize += nsize;
2602
    }
2602
    }
2603
    set_free_with_pinuse(q, qsize, oldfirst);
2603
    set_free_with_pinuse(q, qsize, oldfirst);
2604
    insert_chunk(m, q, qsize);
2604
    insert_chunk(m, q, qsize);
2605
    check_free_chunk(m, q);
2605
    check_free_chunk(m, q);
2606
  }
2606
  }
2607
 
2607
 
2608
  check_malloced_chunk(m, chunk2mem(p), nb);
2608
  check_malloced_chunk(m, chunk2mem(p), nb);
2609
  return chunk2mem(p);
2609
  return chunk2mem(p);
2610
}
2610
}
2611
 
2611
 
2612
 
2612
 
2613
/* Add a segment to hold a new noncontiguous region */
2613
/* Add a segment to hold a new noncontiguous region */
2614
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2614
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2615
  /* Determine locations and sizes of segment, fenceposts, old top */
2615
  /* Determine locations and sizes of segment, fenceposts, old top */
2616
  char* old_top = (char*)m->top;
2616
  char* old_top = (char*)m->top;
2617
  msegmentptr oldsp = segment_holding(m, old_top);
2617
  msegmentptr oldsp = segment_holding(m, old_top);
2618
  char* old_end = oldsp->base + oldsp->size;
2618
  char* old_end = oldsp->base + oldsp->size;
2619
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2619
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2620
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2620
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2621
  size_t offset = align_offset(chunk2mem(rawsp));
2621
  size_t offset = align_offset(chunk2mem(rawsp));
2622
  char* asp = rawsp + offset;
2622
  char* asp = rawsp + offset;
2623
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2623
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2624
  mchunkptr sp = (mchunkptr)csp;
2624
  mchunkptr sp = (mchunkptr)csp;
2625
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2625
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2626
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2626
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2627
  mchunkptr p = tnext;
2627
  mchunkptr p = tnext;
2628
  int nfences = 0;
2628
  int nfences = 0;
2629
 
2629
 
2630
  /* reset top to new space */
2630
  /* reset top to new space */
2631
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2631
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2632
 
2632
 
2633
  /* Set up segment record */
2633
  /* Set up segment record */
2634
  assert(is_aligned(ss));
2634
  assert(is_aligned(ss));
2635
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2635
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2636
  *ss = m->seg; /* Push current record */
2636
  *ss = m->seg; /* Push current record */
2637
  m->seg.base = tbase;
2637
  m->seg.base = tbase;
2638
  m->seg.size = tsize;
2638
  m->seg.size = tsize;
2639
  m->seg.sflags = mmapped;
2639
  m->seg.sflags = mmapped;
2640
  m->seg.next = ss;
2640
  m->seg.next = ss;
2641
 
2641
 
2642
  /* Insert trailing fenceposts */
2642
  /* Insert trailing fenceposts */
2643
  for (;;) {
2643
  for (;;) {
2644
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2644
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2645
    p->head = FENCEPOST_HEAD;
2645
    p->head = FENCEPOST_HEAD;
2646
    ++nfences;
2646
    ++nfences;
2647
    if ((char*)(&(nextp->head)) < old_end)
2647
    if ((char*)(&(nextp->head)) < old_end)
2648
      p = nextp;
2648
      p = nextp;
2649
    else
2649
    else
2650
      break;
2650
      break;
2651
  }
2651
  }
2652
  assert(nfences >= 2);
2652
  assert(nfences >= 2);
2653
 
2653
 
2654
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2654
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2655
  if (csp != old_top) {
2655
  if (csp != old_top) {
2656
    mchunkptr q = (mchunkptr)old_top;
2656
    mchunkptr q = (mchunkptr)old_top;
2657
    size_t psize = csp - old_top;
2657
    size_t psize = csp - old_top;
2658
    mchunkptr tn = chunk_plus_offset(q, psize);
2658
    mchunkptr tn = chunk_plus_offset(q, psize);
2659
    set_free_with_pinuse(q, psize, tn);
2659
    set_free_with_pinuse(q, psize, tn);
2660
    insert_chunk(m, q, psize);
2660
    insert_chunk(m, q, psize);
2661
  }
2661
  }
2662
 
2662
 
2663
  check_top_chunk(m, m->top);
2663
  check_top_chunk(m, m->top);
2664
}
2664
}
2665
 
2665
 
2666
/* -------------------------- System allocation -------------------------- */
2666
/* -------------------------- System allocation -------------------------- */
2667
 
2667
 
2668
/* Get memory from system using MORECORE or MMAP */
2668
/* Get memory from system using MORECORE or MMAP */
2669
static void* sys_alloc(mstate m, size_t nb) {
2669
static void* sys_alloc(mstate m, size_t nb) {
2670
  char* tbase = CMFAIL;
2670
  char* tbase = CMFAIL;
2671
  size_t tsize = 0;
2671
  size_t tsize = 0;
2672
  flag_t mmap_flag = 0;
2672
  flag_t mmap_flag = 0;
2673
 
2673
 
2674
  init_mparams();
2674
  init_mparams();
2675
 
2675
 
2676
  /* Directly map large chunks */
2676
  /* Directly map large chunks */
2677
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2677
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2678
    void* mem = mmap_alloc(m, nb);
2678
    void* mem = mmap_alloc(m, nb);
2679
    if (mem != 0)
2679
    if (mem != 0)
2680
      return mem;
2680
      return mem;
2681
  }
2681
  }
2682
 
2682
 
2683
  /*
2683
  /*
2684
    Try getting memory in any of three ways (in most-preferred to
2684
    Try getting memory in any of three ways (in most-preferred to
2685
    least-preferred order):
2685
    least-preferred order):
2686
    1. A call to MORECORE that can normally contiguously extend memory.
2686
    1. A call to MORECORE that can normally contiguously extend memory.
2687
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2687
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2688
       or main space is mmapped or a previous contiguous call failed)
2688
       or main space is mmapped or a previous contiguous call failed)
2689
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2689
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2690
       Note that under the default settings, if MORECORE is unable to
2690
       Note that under the default settings, if MORECORE is unable to
2691
       fulfill a request, and HAVE_MMAP is true, then mmap is
2691
       fulfill a request, and HAVE_MMAP is true, then mmap is
2692
       used as a noncontiguous system allocator. This is a useful backup
2692
       used as a noncontiguous system allocator. This is a useful backup
2693
       strategy for systems with holes in address spaces -- in this case
2693
       strategy for systems with holes in address spaces -- in this case
2694
       sbrk cannot contiguously expand the heap, but mmap may be able to
2694
       sbrk cannot contiguously expand the heap, but mmap may be able to
2695
       find space.
2695
       find space.
2696
    3. A call to MORECORE that cannot usually contiguously extend memory.
2696
    3. A call to MORECORE that cannot usually contiguously extend memory.
2697
       (disabled if not HAVE_MORECORE)
2697
       (disabled if not HAVE_MORECORE)
2698
  */
2698
  */
2699
 
2699
 
2700
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2700
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2701
    char* br = CMFAIL;
2701
    char* br = CMFAIL;
2702
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2702
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2703
    size_t asize = 0;
2703
    size_t asize = 0;
2704
    ACQUIRE_MORECORE_LOCK();
2704
    ACQUIRE_MORECORE_LOCK();
2705
 
2705
 
2706
    if (ss == 0) {  /* First time through or recovery */
2706
    if (ss == 0) {  /* First time through or recovery */
2707
      char* base = (char*)CALL_MORECORE(0);
2707
      char* base = (char*)CALL_MORECORE(0);
2708
      if (base != CMFAIL) {
2708
      if (base != CMFAIL) {
2709
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2709
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2710
        /* Adjust to end on a page boundary */
2710
        /* Adjust to end on a page boundary */
2711
        if (!is_page_aligned(base))
2711
        if (!is_page_aligned(base))
2712
          asize += (page_align((size_t)base) - (size_t)base);
2712
          asize += (page_align((size_t)base) - (size_t)base);
2713
        /* Can't call MORECORE if size is negative when treated as signed */
2713
        /* Can't call MORECORE if size is negative when treated as signed */
2714
        if (asize < HALF_MAX_SIZE_T &&
2714
        if (asize < HALF_MAX_SIZE_T &&
2715
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2715
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2716
          tbase = base;
2716
          tbase = base;
2717
          tsize = asize;
2717
          tsize = asize;
2718
        }
2718
        }
2719
      }
2719
      }
2720
    }
2720
    }
2721
    else {
2721
    else {
2722
      /* Subtract out existing available top space from MORECORE request. */
2722
      /* Subtract out existing available top space from MORECORE request. */
2723
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2723
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2724
      /* Use mem here only if it did continuously extend old space */
2724
      /* Use mem here only if it did continuously extend old space */
2725
      if (asize < HALF_MAX_SIZE_T &&
2725
      if (asize < HALF_MAX_SIZE_T &&
2726
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2726
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2727
        tbase = br;
2727
        tbase = br;
2728
        tsize = asize;
2728
        tsize = asize;
2729
      }
2729
      }
2730
    }
2730
    }
2731
 
2731
 
2732
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2732
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2733
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2733
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2734
        if (asize < HALF_MAX_SIZE_T &&
2734
        if (asize < HALF_MAX_SIZE_T &&
2735
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2735
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2736
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2736
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2737
          if (esize < HALF_MAX_SIZE_T) {
2737
          if (esize < HALF_MAX_SIZE_T) {
2738
            char* end = (char*)CALL_MORECORE(esize);
2738
            char* end = (char*)CALL_MORECORE(esize);
2739
            if (end != CMFAIL)
2739
            if (end != CMFAIL)
2740
              asize += esize;
2740
              asize += esize;
2741
            else {            /* Can't use; try to release */
2741
            else {            /* Can't use; try to release */
2742
              CALL_MORECORE(-asize);
2742
              CALL_MORECORE(-asize);
2743
              br = CMFAIL;
2743
              br = CMFAIL;
2744
            }
2744
            }
2745
          }
2745
          }
2746
        }
2746
        }
2747
      }
2747
      }
2748
      if (br != CMFAIL) {    /* Use the space we did get */
2748
      if (br != CMFAIL) {    /* Use the space we did get */
2749
        tbase = br;
2749
        tbase = br;
2750
        tsize = asize;
2750
        tsize = asize;
2751
      }
2751
      }
2752
      else
2752
      else
2753
        disable_contiguous(m); /* Don't try contiguous path in the future */
2753
        disable_contiguous(m); /* Don't try contiguous path in the future */
2754
    }
2754
    }
2755
 
2755
 
2756
    RELEASE_MORECORE_LOCK();
2756
    RELEASE_MORECORE_LOCK();
2757
  }
2757
  }
2758
 
2758
 
2759
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2759
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2760
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2760
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2761
    size_t rsize = granularity_align(req);
2761
    size_t rsize = granularity_align(req);
2762
    if (rsize > nb) { /* Fail if wraps around zero */
2762
    if (rsize > nb) { /* Fail if wraps around zero */
2763
      char* mp = (char*)(CALL_MMAP(rsize));
2763
      char* mp = (char*)(CALL_MMAP(rsize));
2764
      if (mp != CMFAIL) {
2764
      if (mp != CMFAIL) {
2765
        tbase = mp;
2765
        tbase = mp;
2766
        tsize = rsize;
2766
        tsize = rsize;
2767
        mmap_flag = IS_MMAPPED_BIT;
2767
        mmap_flag = IS_MMAPPED_BIT;
2768
      }
2768
      }
2769
    }
2769
    }
2770
  }
2770
  }
2771
 
2771
 
2772
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2772
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2773
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2773
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2774
    if (asize < HALF_MAX_SIZE_T) {
2774
    if (asize < HALF_MAX_SIZE_T) {
2775
      char* br = CMFAIL;
2775
      char* br = CMFAIL;
2776
      char* end = CMFAIL;
2776
      char* end = CMFAIL;
2777
      ACQUIRE_MORECORE_LOCK();
2777
      ACQUIRE_MORECORE_LOCK();
2778
      br = (char*)(CALL_MORECORE(asize));
2778
      br = (char*)(CALL_MORECORE(asize));
2779
      end = (char*)(CALL_MORECORE(0));
2779
      end = (char*)(CALL_MORECORE(0));
2780
      RELEASE_MORECORE_LOCK();
2780
      RELEASE_MORECORE_LOCK();
2781
      if (br != CMFAIL && end != CMFAIL && br < end) {
2781
      if (br != CMFAIL && end != CMFAIL && br < end) {
2782
        size_t ssize = end - br;
2782
        size_t ssize = end - br;
2783
        if (ssize > nb + TOP_FOOT_SIZE) {
2783
        if (ssize > nb + TOP_FOOT_SIZE) {
2784
          tbase = br;
2784
          tbase = br;
2785
          tsize = ssize;
2785
          tsize = ssize;
2786
        }
2786
        }
2787
      }
2787
      }
2788
    }
2788
    }
2789
  }
2789
  }
2790
 
2790
 
2791
  if (tbase != CMFAIL) {
2791
  if (tbase != CMFAIL) {
2792
 
2792
 
2793
    if ((m->footprint += tsize) > m->max_footprint)
2793
    if ((m->footprint += tsize) > m->max_footprint)
2794
      m->max_footprint = m->footprint;
2794
      m->max_footprint = m->footprint;
2795
 
2795
 
2796
    if (!is_initialized(m)) { /* first-time initialization */
2796
    if (!is_initialized(m)) { /* first-time initialization */
2797
      m->seg.base = m->least_addr = tbase;
2797
      m->seg.base = m->least_addr = tbase;
2798
      m->seg.size = tsize;
2798
      m->seg.size = tsize;
2799
      m->seg.sflags = mmap_flag;
2799
      m->seg.sflags = mmap_flag;
2800
      m->magic = mparams.magic;
2800
      m->magic = mparams.magic;
2801
      init_bins(m);
2801
      init_bins(m);
2802
      if (is_global(m))
2802
      if (is_global(m))
2803
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2803
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2804
      else {
2804
      else {
2805
        /* Offset top by embedded malloc_state */
2805
        /* Offset top by embedded malloc_state */
2806
        mchunkptr mn = next_chunk(mem2chunk(m));
2806
        mchunkptr mn = next_chunk(mem2chunk(m));
2807
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2807
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2808
      }
2808
      }
2809
    }
2809
    }
2810
 
2810
 
2811
    else {
2811
    else {
2812
      /* Try to merge with an existing segment */
2812
      /* Try to merge with an existing segment */
2813
      msegmentptr sp = &m->seg;
2813
      msegmentptr sp = &m->seg;
2814
      while (sp != 0 && tbase != sp->base + sp->size)
2814
      while (sp != 0 && tbase != sp->base + sp->size)
2815
        sp = sp->next;
2815
        sp = sp->next;
2816
      if (sp != 0 &&
2816
      if (sp != 0 &&
2817
          !is_extern_segment(sp) &&
2817
          !is_extern_segment(sp) &&
2818
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2818
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2819
          segment_holds(sp, m->top)) { /* append */
2819
          segment_holds(sp, m->top)) { /* append */
2820
        sp->size += tsize;
2820
        sp->size += tsize;
2821
        init_top(m, m->top, m->topsize + tsize);
2821
        init_top(m, m->top, m->topsize + tsize);
2822
      }
2822
      }
2823
      else {
2823
      else {
2824
        if (tbase < m->least_addr)
2824
        if (tbase < m->least_addr)
2825
          m->least_addr = tbase;
2825
          m->least_addr = tbase;
2826
        sp = &m->seg;
2826
        sp = &m->seg;
2827
        while (sp != 0 && sp->base != tbase + tsize)
2827
        while (sp != 0 && sp->base != tbase + tsize)
2828
          sp = sp->next;
2828
          sp = sp->next;
2829
        if (sp != 0 &&
2829
        if (sp != 0 &&
2830
            !is_extern_segment(sp) &&
2830
            !is_extern_segment(sp) &&
2831
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2831
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2832
          char* oldbase = sp->base;
2832
          char* oldbase = sp->base;
2833
          sp->base = tbase;
2833
          sp->base = tbase;
2834
          sp->size += tsize;
2834
          sp->size += tsize;
2835
          return prepend_alloc(m, tbase, oldbase, nb);
2835
          return prepend_alloc(m, tbase, oldbase, nb);
2836
        }
2836
        }
2837
        else
2837
        else
2838
          add_segment(m, tbase, tsize, mmap_flag);
2838
          add_segment(m, tbase, tsize, mmap_flag);
2839
      }
2839
      }
2840
    }
2840
    }
2841
 
2841
 
2842
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2842
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2843
      size_t rsize = m->topsize -= nb;
2843
      size_t rsize = m->topsize -= nb;
2844
      mchunkptr p = m->top;
2844
      mchunkptr p = m->top;
2845
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2845
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2846
      r->head = rsize | PINUSE_BIT;
2846
      r->head = rsize | PINUSE_BIT;
2847
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2847
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2848
      check_top_chunk(m, m->top);
2848
      check_top_chunk(m, m->top);
2849
      check_malloced_chunk(m, chunk2mem(p), nb);
2849
      check_malloced_chunk(m, chunk2mem(p), nb);
2850
      return chunk2mem(p);
2850
      return chunk2mem(p);
2851
    }
2851
    }
2852
  }
2852
  }
2853
 
2853
 
2854
  MALLOC_FAILURE_ACTION;
2854
  MALLOC_FAILURE_ACTION;
2855
  return 0;
2855
  return 0;
2856
}
2856
}
2857
 
2857
 
2858
/* -----------------------  system deallocation -------------------------- */
2858
/* -----------------------  system deallocation -------------------------- */
2859
 
2859
 
2860
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2860
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2861
static size_t release_unused_segments(mstate m) {
2861
static size_t release_unused_segments(mstate m) {
2862
  size_t released = 0;
2862
  size_t released = 0;
2863
  msegmentptr pred = &m->seg;
2863
  msegmentptr pred = &m->seg;
2864
  msegmentptr sp = pred->next;
2864
  msegmentptr sp = pred->next;
2865
  while (sp != 0) {
2865
  while (sp != 0) {
2866
    char* base = sp->base;
2866
    char* base = sp->base;
2867
    size_t size = sp->size;
2867
    size_t size = sp->size;
2868
    msegmentptr next = sp->next;
2868
    msegmentptr next = sp->next;
2869
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2869
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2870
      mchunkptr p = align_as_chunk(base);
2870
      mchunkptr p = align_as_chunk(base);
2871
      size_t psize = chunksize(p);
2871
      size_t psize = chunksize(p);
2872
      /* Can unmap if first chunk holds entire segment and not pinned */
2872
      /* Can unmap if first chunk holds entire segment and not pinned */
2873
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2873
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2874
        tchunkptr tp = (tchunkptr)p;
2874
        tchunkptr tp = (tchunkptr)p;
2875
        assert(segment_holds(sp, (char*)sp));
2875
        assert(segment_holds(sp, (char*)sp));
2876
        if (p == m->dv) {
2876
        if (p == m->dv) {
2877
          m->dv = 0;
2877
          m->dv = 0;
2878
          m->dvsize = 0;
2878
          m->dvsize = 0;
2879
        }
2879
        }
2880
        else {
2880
        else {
2881
          unlink_large_chunk(m, tp);
2881
          unlink_large_chunk(m, tp);
2882
        }
2882
        }
2883
        if (CALL_MUNMAP(base, size) == 0) {
2883
        if (CALL_MUNMAP(base, size) == 0) {
2884
          released += size;
2884
          released += size;
2885
          m->footprint -= size;
2885
          m->footprint -= size;
2886
          /* unlink obsoleted record */
2886
          /* unlink obsoleted record */
2887
          sp = pred;
2887
          sp = pred;
2888
          sp->next = next;
2888
          sp->next = next;
2889
        }
2889
        }
2890
        else { /* back out if cannot unmap */
2890
        else { /* back out if cannot unmap */
2891
          insert_large_chunk(m, tp, psize);
2891
          insert_large_chunk(m, tp, psize);
2892
        }
2892
        }
2893
      }
2893
      }
2894
    }
2894
    }
2895
    pred = sp;
2895
    pred = sp;
2896
    sp = next;
2896
    sp = next;
2897
  }
2897
  }
2898
  return released;
2898
  return released;
2899
}
2899
}
2900
 
2900
 
2901
static int sys_trim(mstate m, size_t pad) {
2901
static int sys_trim(mstate m, size_t pad) {
2902
  size_t released = 0;
2902
  size_t released = 0;
2903
  if (pad < MAX_REQUEST && is_initialized(m)) {
2903
  if (pad < MAX_REQUEST && is_initialized(m)) {
2904
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2904
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2905
 
2905
 
2906
    if (m->topsize > pad) {
2906
    if (m->topsize > pad) {
2907
      /* Shrink top space in granularity-size units, keeping at least one */
2907
      /* Shrink top space in granularity-size units, keeping at least one */
2908
      size_t unit = mparams.granularity;
2908
      size_t unit = mparams.granularity;
2909
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2909
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2910
                      SIZE_T_ONE) * unit;
2910
                      SIZE_T_ONE) * unit;
2911
      msegmentptr sp = segment_holding(m, (char*)m->top);
2911
      msegmentptr sp = segment_holding(m, (char*)m->top);
2912
 
2912
 
2913
      if (!is_extern_segment(sp)) {
2913
      if (!is_extern_segment(sp)) {
2914
        if (is_mmapped_segment(sp)) {
2914
        if (is_mmapped_segment(sp)) {
2915
          if (HAVE_MMAP &&
2915
          if (HAVE_MMAP &&
2916
              sp->size >= extra &&
2916
              sp->size >= extra &&
2917
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2917
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2918
            size_t newsize = sp->size - extra;
2918
            size_t newsize = sp->size - extra;
2919
            /* Prefer mremap, fall back to munmap */
2919
            /* Prefer mremap, fall back to munmap */
2920
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2920
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2921
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
2921
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
2922
              released = extra;
2922
              released = extra;
2923
            }
2923
            }
2924
          }
2924
          }
2925
        }
2925
        }
2926
        else if (HAVE_MORECORE) {
2926
        else if (HAVE_MORECORE) {
2927
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2927
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2928
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2928
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2929
          ACQUIRE_MORECORE_LOCK();
2929
          ACQUIRE_MORECORE_LOCK();
2930
          {
2930
          {
2931
            /* Make sure end of memory is where we last set it. */
2931
            /* Make sure end of memory is where we last set it. */
2932
            char* old_br = (char*)(CALL_MORECORE(0));
2932
            char* old_br = (char*)(CALL_MORECORE(0));
2933
            if (old_br == sp->base + sp->size) {
2933
            if (old_br == sp->base + sp->size) {
2934
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2934
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2935
              char* new_br = (char*)(CALL_MORECORE(0));
2935
              char* new_br = (char*)(CALL_MORECORE(0));
2936
              if (rel_br != CMFAIL && new_br < old_br)
2936
              if (rel_br != CMFAIL && new_br < old_br)
2937
                released = old_br - new_br;
2937
                released = old_br - new_br;
2938
            }
2938
            }
2939
          }
2939
          }
2940
          RELEASE_MORECORE_LOCK();
2940
          RELEASE_MORECORE_LOCK();
2941
        }
2941
        }
2942
      }
2942
      }
2943
 
2943
 
2944
      if (released != 0) {
2944
      if (released != 0) {
2945
        sp->size -= released;
2945
        sp->size -= released;
2946
        m->footprint -= released;
2946
        m->footprint -= released;
2947
        init_top(m, m->top, m->topsize - released);
2947
        init_top(m, m->top, m->topsize - released);
2948
        check_top_chunk(m, m->top);
2948
        check_top_chunk(m, m->top);
2949
      }
2949
      }
2950
    }
2950
    }
2951
 
2951
 
2952
    /* Unmap any unused mmapped segments */
2952
    /* Unmap any unused mmapped segments */
2953
    if (HAVE_MMAP)
2953
    if (HAVE_MMAP)
2954
      released += release_unused_segments(m);
2954
      released += release_unused_segments(m);
2955
 
2955
 
2956
    /* On failure, disable autotrim to avoid repeated failed future calls */
2956
    /* On failure, disable autotrim to avoid repeated failed future calls */
2957
    if (released == 0)
2957
    if (released == 0)
2958
      m->trim_check = MAX_SIZE_T;
2958
      m->trim_check = MAX_SIZE_T;
2959
  }
2959
  }
2960
 
2960
 
2961
  return (released != 0)? 1 : 0;
2961
  return (released != 0)? 1 : 0;
2962
}
2962
}
2963
 
2963
 
2964
/* ---------------------------- malloc support --------------------------- */
2964
/* ---------------------------- malloc support --------------------------- */
2965
 
2965
 
2966
/* allocate a large request from the best fitting chunk in a treebin */
2966
/* allocate a large request from the best fitting chunk in a treebin */
2967
static void* tmalloc_large(mstate m, size_t nb) {
2967
static void* tmalloc_large(mstate m, size_t nb) {
2968
  tchunkptr v = 0;
2968
  tchunkptr v = 0;
2969
  size_t rsize = -nb; /* Unsigned negation */
2969
  size_t rsize = -nb; /* Unsigned negation */
2970
  tchunkptr t;
2970
  tchunkptr t;
2971
  bindex_t idx;
2971
  bindex_t idx;
2972
  compute_tree_index(nb, idx);
2972
  compute_tree_index(nb, idx);
2973
 
2973
 
2974
  if ((t = *treebin_at(m, idx)) != 0) {
2974
  if ((t = *treebin_at(m, idx)) != 0) {
2975
    /* Traverse tree for this bin looking for node with size == nb */
2975
    /* Traverse tree for this bin looking for node with size == nb */
2976
    size_t sizebits = nb << leftshift_for_tree_index(idx);
2976
    size_t sizebits = nb << leftshift_for_tree_index(idx);
2977
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
2977
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
2978
    for (;;) {
2978
    for (;;) {
2979
      tchunkptr rt;
2979
      tchunkptr rt;
2980
      size_t trem = chunksize(t) - nb;
2980
      size_t trem = chunksize(t) - nb;
2981
      if (trem < rsize) {
2981
      if (trem < rsize) {
2982
        v = t;
2982
        v = t;
2983
        if ((rsize = trem) == 0)
2983
        if ((rsize = trem) == 0)
2984
          break;
2984
          break;
2985
      }
2985
      }
2986
      rt = t->child[1];
2986
      rt = t->child[1];
2987
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2987
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2988
      if (rt != 0 && rt != t)
2988
      if (rt != 0 && rt != t)
2989
        rst = rt;
2989
        rst = rt;
2990
      if (t == 0) {
2990
      if (t == 0) {
2991
        t = rst; /* set t to least subtree holding sizes > nb */
2991
        t = rst; /* set t to least subtree holding sizes > nb */
2992
        break;
2992
        break;
2993
      }
2993
      }
2994
      sizebits <<= 1;
2994
      sizebits <<= 1;
2995
    }
2995
    }
2996
  }
2996
  }
2997
 
2997
 
2998
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
2998
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
2999
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
2999
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3000
    if (leftbits != 0) {
3000
    if (leftbits != 0) {
3001
      bindex_t i;
3001
      bindex_t i;
3002
      binmap_t leastbit = least_bit(leftbits);
3002
      binmap_t leastbit = least_bit(leftbits);
3003
      compute_bit2idx(leastbit, i);
3003
      compute_bit2idx(leastbit, i);
3004
      t = *treebin_at(m, i);
3004
      t = *treebin_at(m, i);
3005
    }
3005
    }
3006
  }
3006
  }
3007
 
3007
 
3008
  while (t != 0) { /* find smallest of tree or subtree */
3008
  while (t != 0) { /* find smallest of tree or subtree */
3009
    size_t trem = chunksize(t) - nb;
3009
    size_t trem = chunksize(t) - nb;
3010
    if (trem < rsize) {
3010
    if (trem < rsize) {
3011
      rsize = trem;
3011
      rsize = trem;
3012
      v = t;
3012
      v = t;
3013
    }
3013
    }
3014
    t = leftmost_child(t);
3014
    t = leftmost_child(t);
3015
  }
3015
  }
3016
 
3016
 
3017
  /*  If dv is a better fit, return 0 so malloc will use it */
3017
  /*  If dv is a better fit, return 0 so malloc will use it */
3018
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3018
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3019
    if (RTCHECK(ok_address(m, v))) { /* split */
3019
    if (RTCHECK(ok_address(m, v))) { /* split */
3020
      mchunkptr r = chunk_plus_offset(v, nb);
3020
      mchunkptr r = chunk_plus_offset(v, nb);
3021
      assert(chunksize(v) == rsize + nb);
3021
      assert(chunksize(v) == rsize + nb);
3022
      if (RTCHECK(ok_next(v, r))) {
3022
      if (RTCHECK(ok_next(v, r))) {
3023
        unlink_large_chunk(m, v);
3023
        unlink_large_chunk(m, v);
3024
        if (rsize < MIN_CHUNK_SIZE)
3024
        if (rsize < MIN_CHUNK_SIZE)
3025
          set_inuse_and_pinuse(m, v, (rsize + nb));
3025
          set_inuse_and_pinuse(m, v, (rsize + nb));
3026
        else {
3026
        else {
3027
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3027
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3028
          set_size_and_pinuse_of_free_chunk(r, rsize);
3028
          set_size_and_pinuse_of_free_chunk(r, rsize);
3029
          insert_chunk(m, r, rsize);
3029
          insert_chunk(m, r, rsize);
3030
        }
3030
        }
3031
        return chunk2mem(v);
3031
        return chunk2mem(v);
3032
      }
3032
      }
3033
    }
3033
    }
3034
    CORRUPTION_ERROR_ACTION(m);
3034
    CORRUPTION_ERROR_ACTION(m);
3035
  }
3035
  }
3036
  return 0;
3036
  return 0;
3037
}
3037
}
3038
 
3038
 
3039
/* allocate a small request from the best fitting chunk in a treebin */
3039
/* allocate a small request from the best fitting chunk in a treebin */
3040
static void* tmalloc_small(mstate m, size_t nb) {
3040
static void* tmalloc_small(mstate m, size_t nb) {
3041
  tchunkptr t, v;
3041
  tchunkptr t, v;
3042
  size_t rsize;
3042
  size_t rsize;
3043
  bindex_t i;
3043
  bindex_t i;
3044
  binmap_t leastbit = least_bit(m->treemap);
3044
  binmap_t leastbit = least_bit(m->treemap);
3045
  compute_bit2idx(leastbit, i);
3045
  compute_bit2idx(leastbit, i);
3046
 
3046
 
3047
  v = t = *treebin_at(m, i);
3047
  v = t = *treebin_at(m, i);
3048
  rsize = chunksize(t) - nb;
3048
  rsize = chunksize(t) - nb;
3049
 
3049
 
3050
  while ((t = leftmost_child(t)) != 0) {
3050
  while ((t = leftmost_child(t)) != 0) {
3051
    size_t trem = chunksize(t) - nb;
3051
    size_t trem = chunksize(t) - nb;
3052
    if (trem < rsize) {
3052
    if (trem < rsize) {
3053
      rsize = trem;
3053
      rsize = trem;
3054
      v = t;
3054
      v = t;
3055
    }
3055
    }
3056
  }
3056
  }
3057
 
3057
 
3058
  if (RTCHECK(ok_address(m, v))) {
3058
  if (RTCHECK(ok_address(m, v))) {
3059
    mchunkptr r = chunk_plus_offset(v, nb);
3059
    mchunkptr r = chunk_plus_offset(v, nb);
3060
    assert(chunksize(v) == rsize + nb);
3060
    assert(chunksize(v) == rsize + nb);
3061
    if (RTCHECK(ok_next(v, r))) {
3061
    if (RTCHECK(ok_next(v, r))) {
3062
      unlink_large_chunk(m, v);
3062
      unlink_large_chunk(m, v);
3063
      if (rsize < MIN_CHUNK_SIZE)
3063
      if (rsize < MIN_CHUNK_SIZE)
3064
        set_inuse_and_pinuse(m, v, (rsize + nb));
3064
        set_inuse_and_pinuse(m, v, (rsize + nb));
3065
      else {
3065
      else {
3066
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3066
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3067
        set_size_and_pinuse_of_free_chunk(r, rsize);
3067
        set_size_and_pinuse_of_free_chunk(r, rsize);
3068
        replace_dv(m, r, rsize);
3068
        replace_dv(m, r, rsize);
3069
      }
3069
      }
3070
      return chunk2mem(v);
3070
      return chunk2mem(v);
3071
    }
3071
    }
3072
  }
3072
  }
3073
 
3073
 
3074
  CORRUPTION_ERROR_ACTION(m);
3074
  CORRUPTION_ERROR_ACTION(m);
3075
  return 0;
3075
  return 0;
3076
}
3076
}
3077
 
3077
 
3078
/* --------------------------- realloc support --------------------------- */
3078
/* --------------------------- realloc support --------------------------- */
3079
 
3079
 
3080
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3080
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3081
  if (bytes >= MAX_REQUEST) {
3081
  if (bytes >= MAX_REQUEST) {
3082
    MALLOC_FAILURE_ACTION;
3082
    MALLOC_FAILURE_ACTION;
3083
    return 0;
3083
    return 0;
3084
  }
3084
  }
3085
  if (!PREACTION(m)) {
3085
  if (!PREACTION(m)) {
3086
    mchunkptr oldp = mem2chunk(oldmem);
3086
    mchunkptr oldp = mem2chunk(oldmem);
3087
    size_t oldsize = chunksize(oldp);
3087
    size_t oldsize = chunksize(oldp);
3088
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3088
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3089
    mchunkptr newp = 0;
3089
    mchunkptr newp = 0;
3090
    void* extra = 0;
3090
    void* extra = 0;
3091
 
3091
 
3092
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3092
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3093
 
3093
 
3094
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3094
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3095
                ok_next(oldp, next) && ok_pinuse(next))) {
3095
                ok_next(oldp, next) && ok_pinuse(next))) {
3096
      size_t nb = request2size(bytes);
3096
      size_t nb = request2size(bytes);
3097
      if (is_mmapped(oldp))
3097
      if (is_mmapped(oldp))
3098
        newp = mmap_resize(m, oldp, nb);
3098
        newp = mmap_resize(m, oldp, nb);
3099
      else if (oldsize >= nb) { /* already big enough */
3099
      else if (oldsize >= nb) { /* already big enough */
3100
        size_t rsize = oldsize - nb;
3100
        size_t rsize = oldsize - nb;
3101
        newp = oldp;
3101
        newp = oldp;
3102
        if (rsize >= MIN_CHUNK_SIZE) {
3102
        if (rsize >= MIN_CHUNK_SIZE) {
3103
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3103
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3104
          set_inuse(m, newp, nb);
3104
          set_inuse(m, newp, nb);
3105
          set_inuse(m, remainder, rsize);
3105
          set_inuse(m, remainder, rsize);
3106
          extra = chunk2mem(remainder);
3106
          extra = chunk2mem(remainder);
3107
        }
3107
        }
3108
      }
3108
      }
3109
      else if (next == m->top && oldsize + m->topsize > nb) {
3109
      else if (next == m->top && oldsize + m->topsize > nb) {
3110
        /* Expand into top */
3110
        /* Expand into top */
3111
        size_t newsize = oldsize + m->topsize;
3111
        size_t newsize = oldsize + m->topsize;
3112
        size_t newtopsize = newsize - nb;
3112
        size_t newtopsize = newsize - nb;
3113
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3113
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3114
        set_inuse(m, oldp, nb);
3114
        set_inuse(m, oldp, nb);
3115
        newtop->head = newtopsize |PINUSE_BIT;
3115
        newtop->head = newtopsize |PINUSE_BIT;
3116
        m->top = newtop;
3116
        m->top = newtop;
3117
        m->topsize = newtopsize;
3117
        m->topsize = newtopsize;
3118
        newp = oldp;
3118
        newp = oldp;
3119
      }
3119
      }
3120
    }
3120
    }
3121
    else {
3121
    else {
3122
      USAGE_ERROR_ACTION(m, oldmem);
3122
      USAGE_ERROR_ACTION(m, oldmem);
3123
      POSTACTION(m);
3123
      POSTACTION(m);
3124
      return 0;
3124
      return 0;
3125
    }
3125
    }
3126
 
3126
 
3127
    POSTACTION(m);
3127
    POSTACTION(m);
3128
 
3128
 
3129
    if (newp != 0) {
3129
    if (newp != 0) {
3130
      if (extra != 0) {
3130
      if (extra != 0) {
3131
        internal_free(m, extra);
3131
        internal_free(m, extra);
3132
      }
3132
      }
3133
      check_inuse_chunk(m, newp);
3133
      check_inuse_chunk(m, newp);
3134
      return chunk2mem(newp);
3134
      return chunk2mem(newp);
3135
    }
3135
    }
3136
    else {
3136
    else {
3137
      void* newmem = internal_malloc(m, bytes);
3137
      void* newmem = internal_malloc(m, bytes);
3138
      if (newmem != 0) {
3138
      if (newmem != 0) {
3139
        size_t oc = oldsize - overhead_for(oldp);
3139
        size_t oc = oldsize - overhead_for(oldp);
3140
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3140
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3141
        internal_free(m, oldmem);
3141
        internal_free(m, oldmem);
3142
      }
3142
      }
3143
      return newmem;
3143
      return newmem;
3144
    }
3144
    }
3145
  }
3145
  }
3146
  return 0;
3146
  return 0;
3147
}
3147
}
3148
 
3148
 
3149
/* --------------------------- memalign support -------------------------- */
3149
/* --------------------------- memalign support -------------------------- */
3150
 
3150
 
3151
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3151
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3152
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3152
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3153
    return internal_malloc(m, bytes);
3153
    return internal_malloc(m, bytes);
3154
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3154
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3155
    alignment = MIN_CHUNK_SIZE;
3155
    alignment = MIN_CHUNK_SIZE;
3156
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3156
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3157
    size_t a = MALLOC_ALIGNMENT << 1;
3157
    size_t a = MALLOC_ALIGNMENT << 1;
3158
    while (a < alignment) a <<= 1;
3158
    while (a < alignment) a <<= 1;
3159
    alignment = a;
3159
    alignment = a;
3160
  }
3160
  }
3161
 
3161
 
3162
  if (bytes >= MAX_REQUEST - alignment) {
3162
  if (bytes >= MAX_REQUEST - alignment) {
3163
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3163
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3164
      MALLOC_FAILURE_ACTION;
3164
      MALLOC_FAILURE_ACTION;
3165
    }
3165
    }
3166
  }
3166
  }
3167
  else {
3167
  else {
3168
    size_t nb = request2size(bytes);
3168
    size_t nb = request2size(bytes);
3169
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3169
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3170
    char* mem = (char*)internal_malloc(m, req);
3170
    char* mem = (char*)internal_malloc(m, req);
3171
    if (mem != 0) {
3171
    if (mem != 0) {
3172
      void* leader = 0;
3172
      void* leader = 0;
3173
      void* trailer = 0;
3173
      void* trailer = 0;
3174
      mchunkptr p = mem2chunk(mem);
3174
      mchunkptr p = mem2chunk(mem);
3175
 
3175
 
3176
      if (PREACTION(m)) return 0;
3176
      if (PREACTION(m)) return 0;
3177
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3177
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3178
        /*
3178
        /*
3179
          Find an aligned spot inside chunk.  Since we need to give
3179
          Find an aligned spot inside chunk.  Since we need to give
3180
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3180
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3181
          the first calculation places us at a spot with less than
3181
          the first calculation places us at a spot with less than
3182
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3182
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3183
          We've allocated enough total room so that this is always
3183
          We've allocated enough total room so that this is always
3184
          possible.
3184
          possible.
3185
        */
3185
        */
3186
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3186
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3187
                                                       alignment -
3187
                                                       alignment -
3188
                                                       SIZE_T_ONE)) &
3188
                                                       SIZE_T_ONE)) &
3189
                                             -alignment));
3189
                                             -alignment));
3190
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3190
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3191
          br : br+alignment;
3191
          br : br+alignment;
3192
        mchunkptr newp = (mchunkptr)pos;
3192
        mchunkptr newp = (mchunkptr)pos;
3193
        size_t leadsize = pos - (char*)(p);
3193
        size_t leadsize = pos - (char*)(p);
3194
        size_t newsize = chunksize(p) - leadsize;
3194
        size_t newsize = chunksize(p) - leadsize;
3195
 
3195
 
3196
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3196
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3197
          newp->prev_foot = p->prev_foot + leadsize;
3197
          newp->prev_foot = p->prev_foot + leadsize;
3198
          newp->head = (newsize|CINUSE_BIT);
3198
          newp->head = (newsize|CINUSE_BIT);
3199
        }
3199
        }
3200
        else { /* Otherwise, give back leader, use the rest */
3200
        else { /* Otherwise, give back leader, use the rest */
3201
          set_inuse(m, newp, newsize);
3201
          set_inuse(m, newp, newsize);
3202
          set_inuse(m, p, leadsize);
3202
          set_inuse(m, p, leadsize);
3203
          leader = chunk2mem(p);
3203
          leader = chunk2mem(p);
3204
        }
3204
        }
3205
        p = newp;
3205
        p = newp;
3206
      }
3206
      }
3207
 
3207
 
3208
      /* Give back spare room at the end */
3208
      /* Give back spare room at the end */
3209
      if (!is_mmapped(p)) {
3209
      if (!is_mmapped(p)) {
3210
        size_t size = chunksize(p);
3210
        size_t size = chunksize(p);
3211
        if (size > nb + MIN_CHUNK_SIZE) {
3211
        if (size > nb + MIN_CHUNK_SIZE) {
3212
          size_t remainder_size = size - nb;
3212
          size_t remainder_size = size - nb;
3213
          mchunkptr remainder = chunk_plus_offset(p, nb);
3213
          mchunkptr remainder = chunk_plus_offset(p, nb);
3214
          set_inuse(m, p, nb);
3214
          set_inuse(m, p, nb);
3215
          set_inuse(m, remainder, remainder_size);
3215
          set_inuse(m, remainder, remainder_size);
3216
          trailer = chunk2mem(remainder);
3216
          trailer = chunk2mem(remainder);
3217
        }
3217
        }
3218
      }
3218
      }
3219
 
3219
 
3220
      assert (chunksize(p) >= nb);
3220
      assert (chunksize(p) >= nb);
3221
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3221
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3222
      check_inuse_chunk(m, p);
3222
      check_inuse_chunk(m, p);
3223
      POSTACTION(m);
3223
      POSTACTION(m);
3224
      if (leader != 0) {
3224
      if (leader != 0) {
3225
        internal_free(m, leader);
3225
        internal_free(m, leader);
3226
      }
3226
      }
3227
      if (trailer != 0) {
3227
      if (trailer != 0) {
3228
        internal_free(m, trailer);
3228
        internal_free(m, trailer);
3229
      }
3229
      }
3230
      return chunk2mem(p);
3230
      return chunk2mem(p);
3231
    }
3231
    }
3232
  }
3232
  }
3233
  return 0;
3233
  return 0;
3234
}
3234
}
3235
 
3235
 
3236
/* ------------------------ comalloc/coalloc support --------------------- */
3236
/* ------------------------ comalloc/coalloc support --------------------- */
3237
 
3237
 
3238
static void** ialloc(mstate m,
3238
static void** ialloc(mstate m,
3239
                     size_t n_elements,
3239
                     size_t n_elements,
3240
                     size_t* sizes,
3240
                     size_t* sizes,
3241
                     int opts,
3241
                     int opts,
3242
                     void* chunks[]) {
3242
                     void* chunks[]) {
3243
  /*
3243
  /*
3244
    This provides common support for independent_X routines, handling
3244
    This provides common support for independent_X routines, handling
3245
    all of the combinations that can result.
3245
    all of the combinations that can result.
3246
 
3246
 
3247
    The opts arg has:
3247
    The opts arg has:
3248
    bit 0 set if all elements are same size (using sizes[0])
3248
    bit 0 set if all elements are same size (using sizes[0])
3249
    bit 1 set if elements should be zeroed
3249
    bit 1 set if elements should be zeroed
3250
  */
3250
  */
3251
 
3251
 
3252
  size_t    element_size;   /* chunksize of each element, if all same */
3252
  size_t    element_size;   /* chunksize of each element, if all same */
3253
  size_t    contents_size;  /* total size of elements */
3253
  size_t    contents_size;  /* total size of elements */
3254
  size_t    array_size;     /* request size of pointer array */
3254
  size_t    array_size;     /* request size of pointer array */
3255
  void*     mem;            /* malloced aggregate space */
3255
  void*     mem;            /* malloced aggregate space */
3256
  mchunkptr p;              /* corresponding chunk */
3256
  mchunkptr p;              /* corresponding chunk */
3257
  size_t    remainder_size; /* remaining bytes while splitting */
3257
  size_t    remainder_size; /* remaining bytes while splitting */
3258
  void**    marray;         /* either "chunks" or malloced ptr array */
3258
  void**    marray;         /* either "chunks" or malloced ptr array */
3259
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3259
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3260
  flag_t    was_enabled;    /* to disable mmap */
3260
  flag_t    was_enabled;    /* to disable mmap */
3261
  size_t    size;
3261
  size_t    size;
3262
  size_t    i;
3262
  size_t    i;
3263
 
3263
 
3264
  /* compute array length, if needed */
3264
  /* compute array length, if needed */
3265
  if (chunks != 0) {
3265
  if (chunks != 0) {
3266
    if (n_elements == 0)
3266
    if (n_elements == 0)
3267
      return chunks; /* nothing to do */
3267
      return chunks; /* nothing to do */
3268
    marray = chunks;
3268
    marray = chunks;
3269
    array_size = 0;
3269
    array_size = 0;
3270
  }
3270
  }
3271
  else {
3271
  else {
3272
    /* if empty req, must still return chunk representing empty array */
3272
    /* if empty req, must still return chunk representing empty array */
3273
    if (n_elements == 0)
3273
    if (n_elements == 0)
3274
      return (void**)internal_malloc(m, 0);
3274
      return (void**)internal_malloc(m, 0);
3275
    marray = 0;
3275
    marray = 0;
3276
    array_size = request2size(n_elements * (sizeof(void*)));
3276
    array_size = request2size(n_elements * (sizeof(void*)));
3277
  }
3277
  }
3278
 
3278
 
3279
  /* compute total element size */
3279
  /* compute total element size */
3280
  if (opts & 0x1) { /* all-same-size */
3280
  if (opts & 0x1) { /* all-same-size */
3281
    element_size = request2size(*sizes);
3281
    element_size = request2size(*sizes);
3282
    contents_size = n_elements * element_size;
3282
    contents_size = n_elements * element_size;
3283
  }
3283
  }
3284
  else { /* add up all the sizes */
3284
  else { /* add up all the sizes */
3285
    element_size = 0;
3285
    element_size = 0;
3286
    contents_size = 0;
3286
    contents_size = 0;
3287
    for (i = 0; i != n_elements; ++i)
3287
    for (i = 0; i != n_elements; ++i)
3288
      contents_size += request2size(sizes[i]);
3288
      contents_size += request2size(sizes[i]);
3289
  }
3289
  }
3290
 
3290
 
3291
  size = contents_size + array_size;
3291
  size = contents_size + array_size;
3292
 
3292
 
3293
  /*
3293
  /*
3294
     Allocate the aggregate chunk.  First disable direct-mmapping so
3294
     Allocate the aggregate chunk.  First disable direct-mmapping so
3295
     malloc won't use it, since we would not be able to later
3295
     malloc won't use it, since we would not be able to later
3296
     free/realloc space internal to a segregated mmap region.
3296
     free/realloc space internal to a segregated mmap region.
3297
  */
3297
  */
3298
  was_enabled = use_mmap(m);
3298
  was_enabled = use_mmap(m);
3299
  disable_mmap(m);
3299
  disable_mmap(m);
3300
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3300
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3301
  if (was_enabled)
3301
  if (was_enabled)
3302
    enable_mmap(m);
3302
    enable_mmap(m);
3303
  if (mem == 0)
3303
  if (mem == 0)
3304
    return 0;
3304
    return 0;
3305
 
3305
 
3306
  if (PREACTION(m)) return 0;
3306
  if (PREACTION(m)) return 0;
3307
  p = mem2chunk(mem);
3307
  p = mem2chunk(mem);
3308
  remainder_size = chunksize(p);
3308
  remainder_size = chunksize(p);
3309
 
3309
 
3310
  assert(!is_mmapped(p));
3310
  assert(!is_mmapped(p));
3311
 
3311
 
3312
  if (opts & 0x2) {       /* optionally clear the elements */
3312
  if (opts & 0x2) {       /* optionally clear the elements */
3313
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3313
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3314
  }
3314
  }
3315
 
3315
 
3316
  /* If not provided, allocate the pointer array as final part of chunk */
3316
  /* If not provided, allocate the pointer array as final part of chunk */
3317
  if (marray == 0) {
3317
  if (marray == 0) {
3318
    size_t  array_chunk_size;
3318
    size_t  array_chunk_size;
3319
    array_chunk = chunk_plus_offset(p, contents_size);
3319
    array_chunk = chunk_plus_offset(p, contents_size);
3320
    array_chunk_size = remainder_size - contents_size;
3320
    array_chunk_size = remainder_size - contents_size;
3321
    marray = (void**) (chunk2mem(array_chunk));
3321
    marray = (void**) (chunk2mem(array_chunk));
3322
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3322
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3323
    remainder_size = contents_size;
3323
    remainder_size = contents_size;
3324
  }
3324
  }
3325
 
3325
 
3326
  /* split out elements */
3326
  /* split out elements */
3327
  for (i = 0; ; ++i) {
3327
  for (i = 0; ; ++i) {
3328
    marray[i] = chunk2mem(p);
3328
    marray[i] = chunk2mem(p);
3329
    if (i != n_elements-1) {
3329
    if (i != n_elements-1) {
3330
      if (element_size != 0)
3330
      if (element_size != 0)
3331
        size = element_size;
3331
        size = element_size;
3332
      else
3332
      else
3333
        size = request2size(sizes[i]);
3333
        size = request2size(sizes[i]);
3334
      remainder_size -= size;
3334
      remainder_size -= size;
3335
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3335
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3336
      p = chunk_plus_offset(p, size);
3336
      p = chunk_plus_offset(p, size);
3337
    }
3337
    }
3338
    else { /* the final element absorbs any overallocation slop */
3338
    else { /* the final element absorbs any overallocation slop */
3339
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3339
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3340
      break;
3340
      break;
3341
    }
3341
    }
3342
  }
3342
  }
3343
 
3343
 
3344
#if DEBUG
3344
#if DEBUG
3345
  if (marray != chunks) {
3345
  if (marray != chunks) {
3346
    /* final element must have exactly exhausted chunk */
3346
    /* final element must have exactly exhausted chunk */
3347
    if (element_size != 0) {
3347
    if (element_size != 0) {
3348
      assert(remainder_size == element_size);
3348
      assert(remainder_size == element_size);
3349
    }
3349
    }
3350
    else {
3350
    else {
3351
      assert(remainder_size == request2size(sizes[i]));
3351
      assert(remainder_size == request2size(sizes[i]));
3352
    }
3352
    }
3353
    check_inuse_chunk(m, mem2chunk(marray));
3353
    check_inuse_chunk(m, mem2chunk(marray));
3354
  }
3354
  }
3355
  for (i = 0; i != n_elements; ++i)
3355
  for (i = 0; i != n_elements; ++i)
3356
    check_inuse_chunk(m, mem2chunk(marray[i]));
3356
    check_inuse_chunk(m, mem2chunk(marray[i]));
3357
 
3357
 
3358
#endif /* DEBUG */
3358
#endif /* DEBUG */
3359
 
3359
 
3360
  POSTACTION(m);
3360
  POSTACTION(m);
3361
  return marray;
3361
  return marray;
3362
}
3362
}
3363
 
3363
 
3364
 
3364
 
3365
/* -------------------------- public routines ---------------------------- */
3365
/* -------------------------- public routines ---------------------------- */
3366
 
3366
 
3367
#if !ONLY_MSPACES
3367
#if !ONLY_MSPACES
3368
 
3368
 
3369
void* dlmalloc(size_t bytes) {
3369
void* dlmalloc(size_t bytes) {
3370
  /*
3370
  /*
3371
     Basic algorithm:
3371
     Basic algorithm:
3372
     If a small request (< 256 bytes minus per-chunk overhead):
3372
     If a small request (< 256 bytes minus per-chunk overhead):
3373
       1. If one exists, use a remainderless chunk in associated smallbin.
3373
       1. If one exists, use a remainderless chunk in associated smallbin.
3374
          (Remainderless means that there are too few excess bytes to
3374
          (Remainderless means that there are too few excess bytes to
3375
          represent as a chunk.)
3375
          represent as a chunk.)
3376
       2. If it is big enough, use the dv chunk, which is normally the
3376
       2. If it is big enough, use the dv chunk, which is normally the
3377
          chunk adjacent to the one used for the most recent small request.
3377
          chunk adjacent to the one used for the most recent small request.
3378
       3. If one exists, split the smallest available chunk in a bin,
3378
       3. If one exists, split the smallest available chunk in a bin,
3379
          saving remainder in dv.
3379
          saving remainder in dv.
3380
       4. If it is big enough, use the top chunk.
3380
       4. If it is big enough, use the top chunk.
3381
       5. If available, get memory from system and use it
3381
       5. If available, get memory from system and use it
3382
     Otherwise, for a large request:
3382
     Otherwise, for a large request:
3383
       1. Find the smallest available binned chunk that fits, and use it
3383
       1. Find the smallest available binned chunk that fits, and use it
3384
          if it is better fitting than dv chunk, splitting if necessary.
3384
          if it is better fitting than dv chunk, splitting if necessary.
3385
       2. If better fitting than any binned chunk, use the dv chunk.
3385
       2. If better fitting than any binned chunk, use the dv chunk.
3386
       3. If it is big enough, use the top chunk.
3386
       3. If it is big enough, use the top chunk.
3387
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3387
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3388
       5. If available, get memory from system and use it
3388
       5. If available, get memory from system and use it
3389
 
3389
 
3390
     The ugly goto's here ensure that postaction occurs along all paths.
3390
     The ugly goto's here ensure that postaction occurs along all paths.
3391
  */
3391
  */
3392
 
3392
 
3393
  if (!PREACTION(gm)) {
3393
  if (!PREACTION(gm)) {
3394
    void* mem;
3394
    void* mem;
3395
    size_t nb;
3395
    size_t nb;
3396
    if (bytes <= MAX_SMALL_REQUEST) {
3396
    if (bytes <= MAX_SMALL_REQUEST) {
3397
      bindex_t idx;
3397
      bindex_t idx;
3398
      binmap_t smallbits;
3398
      binmap_t smallbits;
3399
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3399
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3400
      idx = small_index(nb);
3400
      idx = small_index(nb);
3401
      smallbits = gm->smallmap >> idx;
3401
      smallbits = gm->smallmap >> idx;
3402
 
3402
 
3403
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3403
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3404
        mchunkptr b, p;
3404
        mchunkptr b, p;
3405
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3405
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3406
        b = smallbin_at(gm, idx);
3406
        b = smallbin_at(gm, idx);
3407
        p = b->fd;
3407
        p = b->fd;
3408
        assert(chunksize(p) == small_index2size(idx));
3408
        assert(chunksize(p) == small_index2size(idx));
3409
        unlink_first_small_chunk(gm, b, p, idx);
3409
        unlink_first_small_chunk(gm, b, p, idx);
3410
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3410
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3411
        mem = chunk2mem(p);
3411
        mem = chunk2mem(p);
3412
        check_malloced_chunk(gm, mem, nb);
3412
        check_malloced_chunk(gm, mem, nb);
3413
        goto postaction;
3413
        goto postaction;
3414
      }
3414
      }
3415
 
3415
 
3416
      else if (nb > gm->dvsize) {
3416
      else if (nb > gm->dvsize) {
3417
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3417
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3418
          mchunkptr b, p, r;
3418
          mchunkptr b, p, r;
3419
          size_t rsize;
3419
          size_t rsize;
3420
          bindex_t i;
3420
          bindex_t i;
3421
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3421
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3422
          binmap_t leastbit = least_bit(leftbits);
3422
          binmap_t leastbit = least_bit(leftbits);
3423
          compute_bit2idx(leastbit, i);
3423
          compute_bit2idx(leastbit, i);
3424
          b = smallbin_at(gm, i);
3424
          b = smallbin_at(gm, i);
3425
          p = b->fd;
3425
          p = b->fd;
3426
          assert(chunksize(p) == small_index2size(i));
3426
          assert(chunksize(p) == small_index2size(i));
3427
          unlink_first_small_chunk(gm, b, p, i);
3427
          unlink_first_small_chunk(gm, b, p, i);
3428
          rsize = small_index2size(i) - nb;
3428
          rsize = small_index2size(i) - nb;
3429
          /* Fit here cannot be remainderless if 4byte sizes */
3429
          /* Fit here cannot be remainderless if 4byte sizes */
3430
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3430
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3431
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3431
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3432
          else {
3432
          else {
3433
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3433
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3434
            r = chunk_plus_offset(p, nb);
3434
            r = chunk_plus_offset(p, nb);
3435
            set_size_and_pinuse_of_free_chunk(r, rsize);
3435
            set_size_and_pinuse_of_free_chunk(r, rsize);
3436
            replace_dv(gm, r, rsize);
3436
            replace_dv(gm, r, rsize);
3437
          }
3437
          }
3438
          mem = chunk2mem(p);
3438
          mem = chunk2mem(p);
3439
          check_malloced_chunk(gm, mem, nb);
3439
          check_malloced_chunk(gm, mem, nb);
3440
          goto postaction;
3440
          goto postaction;
3441
        }
3441
        }
3442
 
3442
 
3443
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3443
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3444
          check_malloced_chunk(gm, mem, nb);
3444
          check_malloced_chunk(gm, mem, nb);
3445
          goto postaction;
3445
          goto postaction;
3446
        }
3446
        }
3447
      }
3447
      }
3448
    }
3448
    }
3449
    else if (bytes >= MAX_REQUEST)
3449
    else if (bytes >= MAX_REQUEST)
3450
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3450
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3451
    else {
3451
    else {
3452
      nb = pad_request(bytes);
3452
      nb = pad_request(bytes);
3453
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3453
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3454
        check_malloced_chunk(gm, mem, nb);
3454
        check_malloced_chunk(gm, mem, nb);
3455
        goto postaction;
3455
        goto postaction;
3456
      }
3456
      }
3457
    }
3457
    }
3458
 
3458
 
3459
    if (nb <= gm->dvsize) {
3459
    if (nb <= gm->dvsize) {
3460
      size_t rsize = gm->dvsize - nb;
3460
      size_t rsize = gm->dvsize - nb;
3461
      mchunkptr p = gm->dv;
3461
      mchunkptr p = gm->dv;
3462
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3462
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3463
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3463
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3464
        gm->dvsize = rsize;
3464
        gm->dvsize = rsize;
3465
        set_size_and_pinuse_of_free_chunk(r, rsize);
3465
        set_size_and_pinuse_of_free_chunk(r, rsize);
3466
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3466
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3467
      }
3467
      }
3468
      else { /* exhaust dv */
3468
      else { /* exhaust dv */
3469
        size_t dvs = gm->dvsize;
3469
        size_t dvs = gm->dvsize;
3470
        gm->dvsize = 0;
3470
        gm->dvsize = 0;
3471
        gm->dv = 0;
3471
        gm->dv = 0;
3472
        set_inuse_and_pinuse(gm, p, dvs);
3472
        set_inuse_and_pinuse(gm, p, dvs);
3473
      }
3473
      }
3474
      mem = chunk2mem(p);
3474
      mem = chunk2mem(p);
3475
      check_malloced_chunk(gm, mem, nb);
3475
      check_malloced_chunk(gm, mem, nb);
3476
      goto postaction;
3476
      goto postaction;
3477
    }
3477
    }
3478
 
3478
 
3479
    else if (nb < gm->topsize) { /* Split top */
3479
    else if (nb < gm->topsize) { /* Split top */
3480
      size_t rsize = gm->topsize -= nb;
3480
      size_t rsize = gm->topsize -= nb;
3481
      mchunkptr p = gm->top;
3481
      mchunkptr p = gm->top;
3482
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3482
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3483
      r->head = rsize | PINUSE_BIT;
3483
      r->head = rsize | PINUSE_BIT;
3484
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3484
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3485
      mem = chunk2mem(p);
3485
      mem = chunk2mem(p);
3486
      check_top_chunk(gm, gm->top);
3486
      check_top_chunk(gm, gm->top);
3487
      check_malloced_chunk(gm, mem, nb);
3487
      check_malloced_chunk(gm, mem, nb);
3488
      goto postaction;
3488
      goto postaction;
3489
    }
3489
    }
3490
 
3490
 
3491
    mem = sys_alloc(gm, nb);
3491
    mem = sys_alloc(gm, nb);
3492
 
3492
 
3493
  postaction:
3493
  postaction:
3494
    POSTACTION(gm);
3494
    POSTACTION(gm);
3495
    return mem;
3495
    return mem;
3496
  }
3496
  }
3497
 
3497
 
3498
  return 0;
3498
  return 0;
3499
}
3499
}
3500
 
3500
 
3501
void dlfree(void* mem) {
3501
void dlfree(void* mem) {
3502
  /*
3502
  /*
3503
     Consolidate freed chunks with preceeding or succeeding bordering
3503
     Consolidate freed chunks with preceeding or succeeding bordering
3504
     free chunks, if they exist, and then place in a bin.  Intermixed
3504
     free chunks, if they exist, and then place in a bin.  Intermixed
3505
     with special cases for top, dv, mmapped chunks, and usage errors.
3505
     with special cases for top, dv, mmapped chunks, and usage errors.
3506
  */
3506
  */
3507
 
3507
 
3508
  if (mem != 0) {
3508
  if (mem != 0) {
3509
    mchunkptr p  = mem2chunk(mem);
3509
    mchunkptr p  = mem2chunk(mem);
3510
#if FOOTERS
3510
#if FOOTERS
3511
    mstate fm = get_mstate_for(p);
3511
    mstate fm = get_mstate_for(p);
3512
    if (!ok_magic(fm)) {
3512
    if (!ok_magic(fm)) {
3513
      USAGE_ERROR_ACTION(fm, p);
3513
      USAGE_ERROR_ACTION(fm, p);
3514
      return;
3514
      return;
3515
    }
3515
    }
3516
#else /* FOOTERS */
3516
#else /* FOOTERS */
3517
#define fm gm
3517
#define fm gm
3518
#endif /* FOOTERS */
3518
#endif /* FOOTERS */
3519
    if (!PREACTION(fm)) {
3519
    if (!PREACTION(fm)) {
3520
      check_inuse_chunk(fm, p);
3520
      check_inuse_chunk(fm, p);
3521
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3521
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3522
        size_t psize = chunksize(p);
3522
        size_t psize = chunksize(p);
3523
        mchunkptr next = chunk_plus_offset(p, psize);
3523
        mchunkptr next = chunk_plus_offset(p, psize);
3524
        if (!pinuse(p)) {
3524
        if (!pinuse(p)) {
3525
          size_t prevsize = p->prev_foot;
3525
          size_t prevsize = p->prev_foot;
3526
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3526
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3527
            prevsize &= ~IS_MMAPPED_BIT;
3527
            prevsize &= ~IS_MMAPPED_BIT;
3528
            psize += prevsize + MMAP_FOOT_PAD;
3528
            psize += prevsize + MMAP_FOOT_PAD;
3529
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3529
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3530
              fm->footprint -= psize;
3530
              fm->footprint -= psize;
3531
            goto postaction;
3531
            goto postaction;
3532
          }
3532
          }
3533
          else {
3533
          else {
3534
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3534
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3535
            psize += prevsize;
3535
            psize += prevsize;
3536
            p = prev;
3536
            p = prev;
3537
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3537
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3538
              if (p != fm->dv) {
3538
              if (p != fm->dv) {
3539
                unlink_chunk(fm, p, prevsize);
3539
                unlink_chunk(fm, p, prevsize);
3540
              }
3540
              }
3541
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3541
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3542
                fm->dvsize = psize;
3542
                fm->dvsize = psize;
3543
                set_free_with_pinuse(p, psize, next);
3543
                set_free_with_pinuse(p, psize, next);
3544
                goto postaction;
3544
                goto postaction;
3545
              }
3545
              }
3546
            }
3546
            }
3547
            else
3547
            else
3548
              goto erroraction;
3548
              goto erroraction;
3549
          }
3549
          }
3550
        }
3550
        }
3551
 
3551
 
3552
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3552
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3553
          if (!cinuse(next)) {  /* consolidate forward */
3553
          if (!cinuse(next)) {  /* consolidate forward */
3554
            if (next == fm->top) {
3554
            if (next == fm->top) {
3555
              size_t tsize = fm->topsize += psize;
3555
              size_t tsize = fm->topsize += psize;
3556
              fm->top = p;
3556
              fm->top = p;
3557
              p->head = tsize | PINUSE_BIT;
3557
              p->head = tsize | PINUSE_BIT;
3558
              if (p == fm->dv) {
3558
              if (p == fm->dv) {
3559
                fm->dv = 0;
3559
                fm->dv = 0;
3560
                fm->dvsize = 0;
3560
                fm->dvsize = 0;
3561
              }
3561
              }
3562
              if (should_trim(fm, tsize))
3562
              if (should_trim(fm, tsize))
3563
                sys_trim(fm, 0);
3563
                sys_trim(fm, 0);
3564
              goto postaction;
3564
              goto postaction;
3565
            }
3565
            }
3566
            else if (next == fm->dv) {
3566
            else if (next == fm->dv) {
3567
              size_t dsize = fm->dvsize += psize;
3567
              size_t dsize = fm->dvsize += psize;
3568
              fm->dv = p;
3568
              fm->dv = p;
3569
              set_size_and_pinuse_of_free_chunk(p, dsize);
3569
              set_size_and_pinuse_of_free_chunk(p, dsize);
3570
              goto postaction;
3570
              goto postaction;
3571
            }
3571
            }
3572
            else {
3572
            else {
3573
              size_t nsize = chunksize(next);
3573
              size_t nsize = chunksize(next);
3574
              psize += nsize;
3574
              psize += nsize;
3575
              unlink_chunk(fm, next, nsize);
3575
              unlink_chunk(fm, next, nsize);
3576
              set_size_and_pinuse_of_free_chunk(p, psize);
3576
              set_size_and_pinuse_of_free_chunk(p, psize);
3577
              if (p == fm->dv) {
3577
              if (p == fm->dv) {
3578
                fm->dvsize = psize;
3578
                fm->dvsize = psize;
3579
                goto postaction;
3579
                goto postaction;
3580
              }
3580
              }
3581
            }
3581
            }
3582
          }
3582
          }
3583
          else
3583
          else
3584
            set_free_with_pinuse(p, psize, next);
3584
            set_free_with_pinuse(p, psize, next);
3585
          insert_chunk(fm, p, psize);
3585
          insert_chunk(fm, p, psize);
3586
          check_free_chunk(fm, p);
3586
          check_free_chunk(fm, p);
3587
          goto postaction;
3587
          goto postaction;
3588
        }
3588
        }
3589
      }
3589
      }
3590
    erroraction:
3590
    erroraction:
3591
      USAGE_ERROR_ACTION(fm, p);
3591
      USAGE_ERROR_ACTION(fm, p);
3592
    postaction:
3592
    postaction:
3593
      POSTACTION(fm);
3593
      POSTACTION(fm);
3594
    }
3594
    }
3595
  }
3595
  }
3596
#if !FOOTERS
3596
#if !FOOTERS
3597
#undef fm
3597
#undef fm
3598
#endif /* FOOTERS */
3598
#endif /* FOOTERS */
3599
}
3599
}
3600
 
3600
 
3601
void* dlcalloc(size_t n_elements, size_t elem_size) {
3601
void* dlcalloc(size_t n_elements, size_t elem_size) {
3602
  void* mem;
3602
  void* mem;
3603
  size_t req = 0;
3603
  size_t req = 0;
3604
  if (n_elements != 0) {
3604
  if (n_elements != 0) {
3605
    req = n_elements * elem_size;
3605
    req = n_elements * elem_size;
3606
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3606
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3607
        (req / n_elements != elem_size))
3607
        (req / n_elements != elem_size))
3608
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3608
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3609
  }
3609
  }
3610
  mem = dlmalloc(req);
3610
  mem = dlmalloc(req);
3611
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3611
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3612
    memset(mem, 0, req);
3612
    memset(mem, 0, req);
3613
  return mem;
3613
  return mem;
3614
}
3614
}
3615
 
3615
 
3616
void* dlrealloc(void* oldmem, size_t bytes) {
3616
void* dlrealloc(void* oldmem, size_t bytes) {
3617
  if (oldmem == 0)
3617
  if (oldmem == 0)
3618
    return dlmalloc(bytes);
3618
    return dlmalloc(bytes);
3619
#ifdef REALLOC_ZERO_BYTES_FREES
3619
#ifdef REALLOC_ZERO_BYTES_FREES
3620
  if (bytes == 0) {
3620
  if (bytes == 0) {
3621
    dlfree(oldmem);
3621
    dlfree(oldmem);
3622
    return 0;
3622
    return 0;
3623
  }
3623
  }
3624
#endif /* REALLOC_ZERO_BYTES_FREES */
3624
#endif /* REALLOC_ZERO_BYTES_FREES */
3625
  else {
3625
  else {
3626
#if ! FOOTERS
3626
#if ! FOOTERS
3627
    mstate m = gm;
3627
    mstate m = gm;
3628
#else /* FOOTERS */
3628
#else /* FOOTERS */
3629
    mstate m = get_mstate_for(mem2chunk(oldmem));
3629
    mstate m = get_mstate_for(mem2chunk(oldmem));
3630
    if (!ok_magic(m)) {
3630
    if (!ok_magic(m)) {
3631
      USAGE_ERROR_ACTION(m, oldmem);
3631
      USAGE_ERROR_ACTION(m, oldmem);
3632
      return 0;
3632
      return 0;
3633
    }
3633
    }
3634
#endif /* FOOTERS */
3634
#endif /* FOOTERS */
3635
    return internal_realloc(m, oldmem, bytes);
3635
    return internal_realloc(m, oldmem, bytes);
3636
  }
3636
  }
3637
}
3637
}
3638
 
3638
 
3639
void* dlmemalign(size_t alignment, size_t bytes) {
3639
void* dlmemalign(size_t alignment, size_t bytes) {
3640
  return internal_memalign(gm, alignment, bytes);
3640
  return internal_memalign(gm, alignment, bytes);
3641
}
3641
}
3642
 
3642
 
3643
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3643
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3644
                                 void* chunks[]) {
3644
                                 void* chunks[]) {
3645
  size_t sz = elem_size; /* serves as 1-element array */
3645
  size_t sz = elem_size; /* serves as 1-element array */
3646
  return ialloc(gm, n_elements, &sz, 3, chunks);
3646
  return ialloc(gm, n_elements, &sz, 3, chunks);
3647
}
3647
}
3648
 
3648
 
3649
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3649
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3650
                                   void* chunks[]) {
3650
                                   void* chunks[]) {
3651
  return ialloc(gm, n_elements, sizes, 0, chunks);
3651
  return ialloc(gm, n_elements, sizes, 0, chunks);
3652
}
3652
}
3653
 
3653
 
3654
void* dlvalloc(size_t bytes) {
3654
void* dlvalloc(size_t bytes) {
3655
  size_t pagesz;
3655
  size_t pagesz;
3656
  init_mparams();
3656
  init_mparams();
3657
  pagesz = mparams.page_size;
3657
  pagesz = mparams.page_size;
3658
  return dlmemalign(pagesz, bytes);
3658
  return dlmemalign(pagesz, bytes);
3659
}
3659
}
3660
 
3660
 
3661
void* dlpvalloc(size_t bytes) {
3661
void* dlpvalloc(size_t bytes) {
3662
  size_t pagesz;
3662
  size_t pagesz;
3663
  init_mparams();
3663
  init_mparams();
3664
  pagesz = mparams.page_size;
3664
  pagesz = mparams.page_size;
3665
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3665
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3666
}
3666
}
3667
 
3667
 
3668
int dlmalloc_trim(size_t pad) {
3668
int dlmalloc_trim(size_t pad) {
3669
  int result = 0;
3669
  int result = 0;
3670
  if (!PREACTION(gm)) {
3670
  if (!PREACTION(gm)) {
3671
    result = sys_trim(gm, pad);
3671
    result = sys_trim(gm, pad);
3672
    POSTACTION(gm);
3672
    POSTACTION(gm);
3673
  }
3673
  }
3674
  return result;
3674
  return result;
3675
}
3675
}
3676
 
3676
 
3677
size_t dlmalloc_footprint(void) {
3677
size_t dlmalloc_footprint(void) {
3678
  return gm->footprint;
3678
  return gm->footprint;
3679
}
3679
}
3680
 
3680
 
3681
size_t dlmalloc_max_footprint(void) {
3681
size_t dlmalloc_max_footprint(void) {
3682
  return gm->max_footprint;
3682
  return gm->max_footprint;
3683
}
3683
}
3684
 
3684
 
3685
#if !NO_MALLINFO
3685
#if !NO_MALLINFO
3686
struct mallinfo dlmallinfo(void) {
3686
struct mallinfo dlmallinfo(void) {
3687
  return internal_mallinfo(gm);
3687
  return internal_mallinfo(gm);
3688
}
3688
}
3689
#endif /* NO_MALLINFO */
3689
#endif /* NO_MALLINFO */
3690
 
3690
 
3691
void dlmalloc_stats() {
3691
void dlmalloc_stats() {
3692
  internal_malloc_stats(gm);
3692
  internal_malloc_stats(gm);
3693
}
3693
}
3694
 
3694
 
3695
size_t dlmalloc_usable_size(void* mem) {
3695
size_t dlmalloc_usable_size(void* mem) {
3696
  if (mem != 0) {
3696
  if (mem != 0) {
3697
    mchunkptr p = mem2chunk(mem);
3697
    mchunkptr p = mem2chunk(mem);
3698
    if (cinuse(p))
3698
    if (cinuse(p))
3699
      return chunksize(p) - overhead_for(p);
3699
      return chunksize(p) - overhead_for(p);
3700
  }
3700
  }
3701
  return 0;
3701
  return 0;
3702
}
3702
}
3703
 
3703
 
3704
int dlmallopt(int param_number, int value) {
3704
int dlmallopt(int param_number, int value) {
3705
  return change_mparam(param_number, value);
3705
  return change_mparam(param_number, value);
3706
}
3706
}
3707
 
3707
 
3708
#endif /* !ONLY_MSPACES */
3708
#endif /* !ONLY_MSPACES */
3709
 
3709
 
3710
/* ----------------------------- user mspaces ---------------------------- */
3710
/* ----------------------------- user mspaces ---------------------------- */
3711
 
3711
 
3712
#if MSPACES
3712
#if MSPACES
3713
 
3713
 
3714
static mstate init_user_mstate(char* tbase, size_t tsize) {
3714
static mstate init_user_mstate(char* tbase, size_t tsize) {
3715
  size_t msize = pad_request(sizeof(struct malloc_state));
3715
  size_t msize = pad_request(sizeof(struct malloc_state));
3716
  mchunkptr mn;
3716
  mchunkptr mn;
3717
  mchunkptr msp = align_as_chunk(tbase);
3717
  mchunkptr msp = align_as_chunk(tbase);
3718
  mstate m = (mstate)(chunk2mem(msp));
3718
  mstate m = (mstate)(chunk2mem(msp));
3719
  memset(m, 0, msize);
3719
  memset(m, 0, msize);
3720
  INITIAL_LOCK(&m->mutex);
3720
  INITIAL_LOCK(&m->mutex);
3721
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
3721
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
3722
  m->seg.base = m->least_addr = tbase;
3722
  m->seg.base = m->least_addr = tbase;
3723
  m->seg.size = m->footprint = m->max_footprint = tsize;
3723
  m->seg.size = m->footprint = m->max_footprint = tsize;
3724
  m->magic = mparams.magic;
3724
  m->magic = mparams.magic;
3725
  m->mflags = mparams.default_mflags;
3725
  m->mflags = mparams.default_mflags;
3726
  disable_contiguous(m);
3726
  disable_contiguous(m);
3727
  init_bins(m);
3727
  init_bins(m);
3728
  mn = next_chunk(mem2chunk(m));
3728
  mn = next_chunk(mem2chunk(m));
3729
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
3729
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
3730
  check_top_chunk(m, m->top);
3730
  check_top_chunk(m, m->top);
3731
  return m;
3731
  return m;
3732
}
3732
}
3733
 
3733
 
3734
mspace create_mspace(size_t capacity, int locked) {
3734
mspace create_mspace(size_t capacity, int locked) {
3735
  mstate m = 0;
3735
  mstate m = 0;
3736
  size_t msize = pad_request(sizeof(struct malloc_state));
3736
  size_t msize = pad_request(sizeof(struct malloc_state));
3737
  init_mparams(); /* Ensure pagesize etc initialized */
3737
  init_mparams(); /* Ensure pagesize etc initialized */
3738
 
3738
 
3739
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3739
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3740
    size_t rs = ((capacity == 0)? mparams.granularity :
3740
    size_t rs = ((capacity == 0)? mparams.granularity :
3741
                 (capacity + TOP_FOOT_SIZE + msize));
3741
                 (capacity + TOP_FOOT_SIZE + msize));
3742
    size_t tsize = granularity_align(rs);
3742
    size_t tsize = granularity_align(rs);
3743
    char* tbase = (char*)(CALL_MMAP(tsize));
3743
    char* tbase = (char*)(CALL_MMAP(tsize));
3744
    if (tbase != CMFAIL) {
3744
    if (tbase != CMFAIL) {
3745
      m = init_user_mstate(tbase, tsize);
3745
      m = init_user_mstate(tbase, tsize);
3746
      m->seg.sflags = IS_MMAPPED_BIT;
3746
      m->seg.sflags = IS_MMAPPED_BIT;
3747
      set_lock(m, locked);
3747
      set_lock(m, locked);
3748
    }
3748
    }
3749
  }
3749
  }
3750
  return (mspace)m;
3750
  return (mspace)m;
3751
}
3751
}
3752
 
3752
 
3753
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
3753
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
3754
  mstate m = 0;
3754
  mstate m = 0;
3755
  size_t msize = pad_request(sizeof(struct malloc_state));
3755
  size_t msize = pad_request(sizeof(struct malloc_state));
3756
  init_mparams(); /* Ensure pagesize etc initialized */
3756
  init_mparams(); /* Ensure pagesize etc initialized */
3757
 
3757
 
3758
  if (capacity > msize + TOP_FOOT_SIZE &&
3758
  if (capacity > msize + TOP_FOOT_SIZE &&
3759
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3759
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3760
    m = init_user_mstate((char*)base, capacity);
3760
    m = init_user_mstate((char*)base, capacity);
3761
    m->seg.sflags = EXTERN_BIT;
3761
    m->seg.sflags = EXTERN_BIT;
3762
    set_lock(m, locked);
3762
    set_lock(m, locked);
3763
  }
3763
  }
3764
  return (mspace)m;
3764
  return (mspace)m;
3765
}
3765
}
3766
 
3766
 
3767
size_t destroy_mspace(mspace msp) {
3767
size_t destroy_mspace(mspace msp) {
3768
  size_t freed = 0;
3768
  size_t freed = 0;
3769
  mstate ms = (mstate)msp;
3769
  mstate ms = (mstate)msp;
3770
  if (ok_magic(ms)) {
3770
  if (ok_magic(ms)) {
3771
    msegmentptr sp = &ms->seg;
3771
    msegmentptr sp = &ms->seg;
3772
    while (sp != 0) {
3772
    while (sp != 0) {
3773
      char* base = sp->base;
3773
      char* base = sp->base;
3774
      size_t size = sp->size;
3774
      size_t size = sp->size;
3775
      flag_t flag = sp->sflags;
3775
      flag_t flag = sp->sflags;
3776
      sp = sp->next;
3776
      sp = sp->next;
3777
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
3777
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
3778
          CALL_MUNMAP(base, size) == 0)
3778
          CALL_MUNMAP(base, size) == 0)
3779
        freed += size;
3779
        freed += size;
3780
    }
3780
    }
3781
  }
3781
  }
3782
  else {
3782
  else {
3783
    USAGE_ERROR_ACTION(ms,ms);
3783
    USAGE_ERROR_ACTION(ms,ms);
3784
  }
3784
  }
3785
  return freed;
3785
  return freed;
3786
}
3786
}
3787
 
3787
 
3788
/*
3788
/*
3789
  mspace versions of routines are near-clones of the global
3789
  mspace versions of routines are near-clones of the global
3790
  versions. This is not so nice but better than the alternatives.
3790
  versions. This is not so nice but better than the alternatives.
3791
*/
3791
*/
3792
 
3792
 
3793
 
3793
 
3794
void* mspace_malloc(mspace msp, size_t bytes) {
3794
void* mspace_malloc(mspace msp, size_t bytes) {
3795
  mstate ms = (mstate)msp;
3795
  mstate ms = (mstate)msp;
3796
  if (!ok_magic(ms)) {
3796
  if (!ok_magic(ms)) {
3797
    USAGE_ERROR_ACTION(ms,ms);
3797
    USAGE_ERROR_ACTION(ms,ms);
3798
    return 0;
3798
    return 0;
3799
  }
3799
  }
3800
  if (!PREACTION(ms)) {
3800
  if (!PREACTION(ms)) {
3801
    void* mem;
3801
    void* mem;
3802
    size_t nb;
3802
    size_t nb;
3803
    if (bytes <= MAX_SMALL_REQUEST) {
3803
    if (bytes <= MAX_SMALL_REQUEST) {
3804
      bindex_t idx;
3804
      bindex_t idx;
3805
      binmap_t smallbits;
3805
      binmap_t smallbits;
3806
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3806
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3807
      idx = small_index(nb);
3807
      idx = small_index(nb);
3808
      smallbits = ms->smallmap >> idx;
3808
      smallbits = ms->smallmap >> idx;
3809
 
3809
 
3810
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3810
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3811
        mchunkptr b, p;
3811
        mchunkptr b, p;
3812
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3812
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3813
        b = smallbin_at(ms, idx);
3813
        b = smallbin_at(ms, idx);
3814
        p = b->fd;
3814
        p = b->fd;
3815
        assert(chunksize(p) == small_index2size(idx));
3815
        assert(chunksize(p) == small_index2size(idx));
3816
        unlink_first_small_chunk(ms, b, p, idx);
3816
        unlink_first_small_chunk(ms, b, p, idx);
3817
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
3817
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
3818
        mem = chunk2mem(p);
3818
        mem = chunk2mem(p);
3819
        check_malloced_chunk(ms, mem, nb);
3819
        check_malloced_chunk(ms, mem, nb);
3820
        goto postaction;
3820
        goto postaction;
3821
      }
3821
      }
3822
 
3822
 
3823
      else if (nb > ms->dvsize) {
3823
      else if (nb > ms->dvsize) {
3824
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3824
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3825
          mchunkptr b, p, r;
3825
          mchunkptr b, p, r;
3826
          size_t rsize;
3826
          size_t rsize;
3827
          bindex_t i;
3827
          bindex_t i;
3828
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3828
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3829
          binmap_t leastbit = least_bit(leftbits);
3829
          binmap_t leastbit = least_bit(leftbits);
3830
          compute_bit2idx(leastbit, i);
3830
          compute_bit2idx(leastbit, i);
3831
          b = smallbin_at(ms, i);
3831
          b = smallbin_at(ms, i);
3832
          p = b->fd;
3832
          p = b->fd;
3833
          assert(chunksize(p) == small_index2size(i));
3833
          assert(chunksize(p) == small_index2size(i));
3834
          unlink_first_small_chunk(ms, b, p, i);
3834
          unlink_first_small_chunk(ms, b, p, i);
3835
          rsize = small_index2size(i) - nb;
3835
          rsize = small_index2size(i) - nb;
3836
          /* Fit here cannot be remainderless if 4byte sizes */
3836
          /* Fit here cannot be remainderless if 4byte sizes */
3837
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3837
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3838
            set_inuse_and_pinuse(ms, p, small_index2size(i));
3838
            set_inuse_and_pinuse(ms, p, small_index2size(i));
3839
          else {
3839
          else {
3840
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3840
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3841
            r = chunk_plus_offset(p, nb);
3841
            r = chunk_plus_offset(p, nb);
3842
            set_size_and_pinuse_of_free_chunk(r, rsize);
3842
            set_size_and_pinuse_of_free_chunk(r, rsize);
3843
            replace_dv(ms, r, rsize);
3843
            replace_dv(ms, r, rsize);
3844
          }
3844
          }
3845
          mem = chunk2mem(p);
3845
          mem = chunk2mem(p);
3846
          check_malloced_chunk(ms, mem, nb);
3846
          check_malloced_chunk(ms, mem, nb);
3847
          goto postaction;
3847
          goto postaction;
3848
        }
3848
        }
3849
 
3849
 
3850
        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
3850
        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
3851
          check_malloced_chunk(ms, mem, nb);
3851
          check_malloced_chunk(ms, mem, nb);
3852
          goto postaction;
3852
          goto postaction;
3853
        }
3853
        }
3854
      }
3854
      }
3855
    }
3855
    }
3856
    else if (bytes >= MAX_REQUEST)
3856
    else if (bytes >= MAX_REQUEST)
3857
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3857
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3858
    else {
3858
    else {
3859
      nb = pad_request(bytes);
3859
      nb = pad_request(bytes);
3860
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
3860
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
3861
        check_malloced_chunk(ms, mem, nb);
3861
        check_malloced_chunk(ms, mem, nb);
3862
        goto postaction;
3862
        goto postaction;
3863
      }
3863
      }
3864
    }
3864
    }
3865
 
3865
 
3866
    if (nb <= ms->dvsize) {
3866
    if (nb <= ms->dvsize) {
3867
      size_t rsize = ms->dvsize - nb;
3867
      size_t rsize = ms->dvsize - nb;
3868
      mchunkptr p = ms->dv;
3868
      mchunkptr p = ms->dv;
3869
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3869
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3870
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
3870
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
3871
        ms->dvsize = rsize;
3871
        ms->dvsize = rsize;
3872
        set_size_and_pinuse_of_free_chunk(r, rsize);
3872
        set_size_and_pinuse_of_free_chunk(r, rsize);
3873
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3873
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3874
      }
3874
      }
3875
      else { /* exhaust dv */
3875
      else { /* exhaust dv */
3876
        size_t dvs = ms->dvsize;
3876
        size_t dvs = ms->dvsize;
3877
        ms->dvsize = 0;
3877
        ms->dvsize = 0;
3878
        ms->dv = 0;
3878
        ms->dv = 0;
3879
        set_inuse_and_pinuse(ms, p, dvs);
3879
        set_inuse_and_pinuse(ms, p, dvs);
3880
      }
3880
      }
3881
      mem = chunk2mem(p);
3881
      mem = chunk2mem(p);
3882
      check_malloced_chunk(ms, mem, nb);
3882
      check_malloced_chunk(ms, mem, nb);
3883
      goto postaction;
3883
      goto postaction;
3884
    }
3884
    }
3885
 
3885
 
3886
    else if (nb < ms->topsize) { /* Split top */
3886
    else if (nb < ms->topsize) { /* Split top */
3887
      size_t rsize = ms->topsize -= nb;
3887
      size_t rsize = ms->topsize -= nb;
3888
      mchunkptr p = ms->top;
3888
      mchunkptr p = ms->top;
3889
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
3889
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
3890
      r->head = rsize | PINUSE_BIT;
3890
      r->head = rsize | PINUSE_BIT;
3891
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3891
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3892
      mem = chunk2mem(p);
3892
      mem = chunk2mem(p);
3893
      check_top_chunk(ms, ms->top);
3893
      check_top_chunk(ms, ms->top);
3894
      check_malloced_chunk(ms, mem, nb);
3894
      check_malloced_chunk(ms, mem, nb);
3895
      goto postaction;
3895
      goto postaction;
3896
    }
3896
    }
3897
 
3897
 
3898
    mem = sys_alloc(ms, nb);
3898
    mem = sys_alloc(ms, nb);
3899
 
3899
 
3900
  postaction:
3900
  postaction:
3901
    POSTACTION(ms);
3901
    POSTACTION(ms);
3902
    return mem;
3902
    return mem;
3903
  }
3903
  }
3904
 
3904
 
3905
  return 0;
3905
  return 0;
3906
}
3906
}
3907
 
3907
 
3908
void mspace_free(mspace msp, void* mem) {
3908
void mspace_free(mspace msp, void* mem) {
3909
  if (mem != 0) {
3909
  if (mem != 0) {
3910
    mchunkptr p  = mem2chunk(mem);
3910
    mchunkptr p  = mem2chunk(mem);
3911
#if FOOTERS
3911
#if FOOTERS
3912
    mstate fm = get_mstate_for(p);
3912
    mstate fm = get_mstate_for(p);
3913
#else /* FOOTERS */
3913
#else /* FOOTERS */
3914
    mstate fm = (mstate)msp;
3914
    mstate fm = (mstate)msp;
3915
#endif /* FOOTERS */
3915
#endif /* FOOTERS */
3916
    if (!ok_magic(fm)) {
3916
    if (!ok_magic(fm)) {
3917
      USAGE_ERROR_ACTION(fm, p);
3917
      USAGE_ERROR_ACTION(fm, p);
3918
      return;
3918
      return;
3919
    }
3919
    }
3920
    if (!PREACTION(fm)) {
3920
    if (!PREACTION(fm)) {
3921
      check_inuse_chunk(fm, p);
3921
      check_inuse_chunk(fm, p);
3922
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3922
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3923
        size_t psize = chunksize(p);
3923
        size_t psize = chunksize(p);
3924
        mchunkptr next = chunk_plus_offset(p, psize);
3924
        mchunkptr next = chunk_plus_offset(p, psize);
3925
        if (!pinuse(p)) {
3925
        if (!pinuse(p)) {
3926
          size_t prevsize = p->prev_foot;
3926
          size_t prevsize = p->prev_foot;
3927
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3927
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3928
            prevsize &= ~IS_MMAPPED_BIT;
3928
            prevsize &= ~IS_MMAPPED_BIT;
3929
            psize += prevsize + MMAP_FOOT_PAD;
3929
            psize += prevsize + MMAP_FOOT_PAD;
3930
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3930
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3931
              fm->footprint -= psize;
3931
              fm->footprint -= psize;
3932
            goto postaction;
3932
            goto postaction;
3933
          }
3933
          }
3934
          else {
3934
          else {
3935
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3935
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3936
            psize += prevsize;
3936
            psize += prevsize;
3937
            p = prev;
3937
            p = prev;
3938
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3938
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3939
              if (p != fm->dv) {
3939
              if (p != fm->dv) {
3940
                unlink_chunk(fm, p, prevsize);
3940
                unlink_chunk(fm, p, prevsize);
3941
              }
3941
              }
3942
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3942
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3943
                fm->dvsize = psize;
3943
                fm->dvsize = psize;
3944
                set_free_with_pinuse(p, psize, next);
3944
                set_free_with_pinuse(p, psize, next);
3945
                goto postaction;
3945
                goto postaction;
3946
              }
3946
              }
3947
            }
3947
            }
3948
            else
3948
            else
3949
              goto erroraction;
3949
              goto erroraction;
3950
          }
3950
          }
3951
        }
3951
        }
3952
 
3952
 
3953
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3953
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3954
          if (!cinuse(next)) {  /* consolidate forward */
3954
          if (!cinuse(next)) {  /* consolidate forward */
3955
            if (next == fm->top) {
3955
            if (next == fm->top) {
3956
              size_t tsize = fm->topsize += psize;
3956
              size_t tsize = fm->topsize += psize;
3957
              fm->top = p;
3957
              fm->top = p;
3958
              p->head = tsize | PINUSE_BIT;
3958
              p->head = tsize | PINUSE_BIT;
3959
              if (p == fm->dv) {
3959
              if (p == fm->dv) {
3960
                fm->dv = 0;
3960
                fm->dv = 0;
3961
                fm->dvsize = 0;
3961
                fm->dvsize = 0;
3962
              }
3962
              }
3963
              if (should_trim(fm, tsize))
3963
              if (should_trim(fm, tsize))
3964
                sys_trim(fm, 0);
3964
                sys_trim(fm, 0);
3965
              goto postaction;
3965
              goto postaction;
3966
            }
3966
            }
3967
            else if (next == fm->dv) {
3967
            else if (next == fm->dv) {
3968
              size_t dsize = fm->dvsize += psize;
3968
              size_t dsize = fm->dvsize += psize;
3969
              fm->dv = p;
3969
              fm->dv = p;
3970
              set_size_and_pinuse_of_free_chunk(p, dsize);
3970
              set_size_and_pinuse_of_free_chunk(p, dsize);
3971
              goto postaction;
3971
              goto postaction;
3972
            }
3972
            }
3973
            else {
3973
            else {
3974
              size_t nsize = chunksize(next);
3974
              size_t nsize = chunksize(next);
3975
              psize += nsize;
3975
              psize += nsize;
3976
              unlink_chunk(fm, next, nsize);
3976
              unlink_chunk(fm, next, nsize);
3977
              set_size_and_pinuse_of_free_chunk(p, psize);
3977
              set_size_and_pinuse_of_free_chunk(p, psize);
3978
              if (p == fm->dv) {
3978
              if (p == fm->dv) {
3979
                fm->dvsize = psize;
3979
                fm->dvsize = psize;
3980
                goto postaction;
3980
                goto postaction;
3981
              }
3981
              }
3982
            }
3982
            }
3983
          }
3983
          }
3984
          else
3984
          else
3985
            set_free_with_pinuse(p, psize, next);
3985
            set_free_with_pinuse(p, psize, next);
3986
          insert_chunk(fm, p, psize);
3986
          insert_chunk(fm, p, psize);
3987
          check_free_chunk(fm, p);
3987
          check_free_chunk(fm, p);
3988
          goto postaction;
3988
          goto postaction;
3989
        }
3989
        }
3990
      }
3990
      }
3991
    erroraction:
3991
    erroraction:
3992
      USAGE_ERROR_ACTION(fm, p);
3992
      USAGE_ERROR_ACTION(fm, p);
3993
    postaction:
3993
    postaction:
3994
      POSTACTION(fm);
3994
      POSTACTION(fm);
3995
    }
3995
    }
3996
  }
3996
  }
3997
}
3997
}
3998
 
3998
 
3999
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
3999
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4000
  void* mem;
4000
  void* mem;
4001
  size_t req = 0;
4001
  size_t req = 0;
4002
  mstate ms = (mstate)msp;
4002
  mstate ms = (mstate)msp;
4003
  if (!ok_magic(ms)) {
4003
  if (!ok_magic(ms)) {
4004
    USAGE_ERROR_ACTION(ms,ms);
4004
    USAGE_ERROR_ACTION(ms,ms);
4005
    return 0;
4005
    return 0;
4006
  }
4006
  }
4007
  if (n_elements != 0) {
4007
  if (n_elements != 0) {
4008
    req = n_elements * elem_size;
4008
    req = n_elements * elem_size;
4009
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4009
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4010
        (req / n_elements != elem_size))
4010
        (req / n_elements != elem_size))
4011
      req = MAX_SIZE_T; /* force downstream failure on overflow */
4011
      req = MAX_SIZE_T; /* force downstream failure on overflow */
4012
  }
4012
  }
4013
  mem = internal_malloc(ms, req);
4013
  mem = internal_malloc(ms, req);
4014
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4014
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4015
    memset(mem, 0, req);
4015
    memset(mem, 0, req);
4016
  return mem;
4016
  return mem;
4017
}
4017
}
4018
 
4018
 
4019
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4019
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4020
  if (oldmem == 0)
4020
  if (oldmem == 0)
4021
    return mspace_malloc(msp, bytes);
4021
    return mspace_malloc(msp, bytes);
4022
#ifdef REALLOC_ZERO_BYTES_FREES
4022
#ifdef REALLOC_ZERO_BYTES_FREES
4023
  if (bytes == 0) {
4023
  if (bytes == 0) {
4024
    mspace_free(msp, oldmem);
4024
    mspace_free(msp, oldmem);
4025
    return 0;
4025
    return 0;
4026
  }
4026
  }
4027
#endif /* REALLOC_ZERO_BYTES_FREES */
4027
#endif /* REALLOC_ZERO_BYTES_FREES */
4028
  else {
4028
  else {
4029
#if FOOTERS
4029
#if FOOTERS
4030
    mchunkptr p  = mem2chunk(oldmem);
4030
    mchunkptr p  = mem2chunk(oldmem);
4031
    mstate ms = get_mstate_for(p);
4031
    mstate ms = get_mstate_for(p);
4032
#else /* FOOTERS */
4032
#else /* FOOTERS */
4033
    mstate ms = (mstate)msp;
4033
    mstate ms = (mstate)msp;
4034
#endif /* FOOTERS */
4034
#endif /* FOOTERS */
4035
    if (!ok_magic(ms)) {
4035
    if (!ok_magic(ms)) {
4036
      USAGE_ERROR_ACTION(ms,ms);
4036
      USAGE_ERROR_ACTION(ms,ms);
4037
      return 0;
4037
      return 0;
4038
    }
4038
    }
4039
    return internal_realloc(ms, oldmem, bytes);
4039
    return internal_realloc(ms, oldmem, bytes);
4040
  }
4040
  }
4041
}
4041
}
4042
 
4042
 
4043
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4043
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4044
  mstate ms = (mstate)msp;
4044
  mstate ms = (mstate)msp;
4045
  if (!ok_magic(ms)) {
4045
  if (!ok_magic(ms)) {
4046
    USAGE_ERROR_ACTION(ms,ms);
4046
    USAGE_ERROR_ACTION(ms,ms);
4047
    return 0;
4047
    return 0;
4048
  }
4048
  }
4049
  return internal_memalign(ms, alignment, bytes);
4049
  return internal_memalign(ms, alignment, bytes);
4050
}
4050
}
4051
 
4051
 
4052
void** mspace_independent_calloc(mspace msp, size_t n_elements,
4052
void** mspace_independent_calloc(mspace msp, size_t n_elements,
4053
                                 size_t elem_size, void* chunks[]) {
4053
                                 size_t elem_size, void* chunks[]) {
4054
  size_t sz = elem_size; /* serves as 1-element array */
4054
  size_t sz = elem_size; /* serves as 1-element array */
4055
  mstate ms = (mstate)msp;
4055
  mstate ms = (mstate)msp;
4056
  if (!ok_magic(ms)) {
4056
  if (!ok_magic(ms)) {
4057
    USAGE_ERROR_ACTION(ms,ms);
4057
    USAGE_ERROR_ACTION(ms,ms);
4058
    return 0;
4058
    return 0;
4059
  }
4059
  }
4060
  return ialloc(ms, n_elements, &sz, 3, chunks);
4060
  return ialloc(ms, n_elements, &sz, 3, chunks);
4061
}
4061
}
4062
 
4062
 
4063
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4063
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4064
                                   size_t sizes[], void* chunks[]) {
4064
                                   size_t sizes[], void* chunks[]) {
4065
  mstate ms = (mstate)msp;
4065
  mstate ms = (mstate)msp;
4066
  if (!ok_magic(ms)) {
4066
  if (!ok_magic(ms)) {
4067
    USAGE_ERROR_ACTION(ms,ms);
4067
    USAGE_ERROR_ACTION(ms,ms);
4068
    return 0;
4068
    return 0;
4069
  }
4069
  }
4070
  return ialloc(ms, n_elements, sizes, 0, chunks);
4070
  return ialloc(ms, n_elements, sizes, 0, chunks);
4071
}
4071
}
4072
 
4072
 
4073
int mspace_trim(mspace msp, size_t pad) {
4073
int mspace_trim(mspace msp, size_t pad) {
4074
  int result = 0;
4074
  int result = 0;
4075
  mstate ms = (mstate)msp;
4075
  mstate ms = (mstate)msp;
4076
  if (ok_magic(ms)) {
4076
  if (ok_magic(ms)) {
4077
    if (!PREACTION(ms)) {
4077
    if (!PREACTION(ms)) {
4078
      result = sys_trim(ms, pad);
4078
      result = sys_trim(ms, pad);
4079
      POSTACTION(ms);
4079
      POSTACTION(ms);
4080
    }
4080
    }
4081
  }
4081
  }
4082
  else {
4082
  else {
4083
    USAGE_ERROR_ACTION(ms,ms);
4083
    USAGE_ERROR_ACTION(ms,ms);
4084
  }
4084
  }
4085
  return result;
4085
  return result;
4086
}
4086
}
4087
 
4087
 
4088
void mspace_malloc_stats(mspace msp) {
4088
void mspace_malloc_stats(mspace msp) {
4089
  mstate ms = (mstate)msp;
4089
  mstate ms = (mstate)msp;
4090
  if (ok_magic(ms)) {
4090
  if (ok_magic(ms)) {
4091
    internal_malloc_stats(ms);
4091
    internal_malloc_stats(ms);
4092
  }
4092
  }
4093
  else {
4093
  else {
4094
    USAGE_ERROR_ACTION(ms,ms);
4094
    USAGE_ERROR_ACTION(ms,ms);
4095
  }
4095
  }
4096
}
4096
}
4097
 
4097
 
4098
size_t mspace_footprint(mspace msp) {
4098
size_t mspace_footprint(mspace msp) {
4099
  size_t result;
4099
  size_t result;
4100
  mstate ms = (mstate)msp;
4100
  mstate ms = (mstate)msp;
4101
  if (ok_magic(ms)) {
4101
  if (ok_magic(ms)) {
4102
    result = ms->footprint;
4102
    result = ms->footprint;
4103
  }
4103
  }
4104
  USAGE_ERROR_ACTION(ms,ms);
4104
  USAGE_ERROR_ACTION(ms,ms);
4105
  return result;
4105
  return result;
4106
}
4106
}
4107
 
4107
 
4108
 
4108
 
4109
size_t mspace_max_footprint(mspace msp) {
4109
size_t mspace_max_footprint(mspace msp) {
4110
  size_t result;
4110
  size_t result;
4111
  mstate ms = (mstate)msp;
4111
  mstate ms = (mstate)msp;
4112
  if (ok_magic(ms)) {
4112
  if (ok_magic(ms)) {
4113
    result = ms->max_footprint;
4113
    result = ms->max_footprint;
4114
  }
4114
  }
4115
  USAGE_ERROR_ACTION(ms,ms);
4115
  USAGE_ERROR_ACTION(ms,ms);
4116
  return result;
4116
  return result;
4117
}
4117
}
4118
 
4118
 
4119
 
4119
 
4120
#if !NO_MALLINFO
4120
#if !NO_MALLINFO
4121
struct mallinfo mspace_mallinfo(mspace msp) {
4121
struct mallinfo mspace_mallinfo(mspace msp) {
4122
  mstate ms = (mstate)msp;
4122
  mstate ms = (mstate)msp;
4123
  if (!ok_magic(ms)) {
4123
  if (!ok_magic(ms)) {
4124
    USAGE_ERROR_ACTION(ms,ms);
4124
    USAGE_ERROR_ACTION(ms,ms);
4125
  }
4125
  }
4126
  return internal_mallinfo(ms);
4126
  return internal_mallinfo(ms);
4127
}
4127
}
4128
#endif /* NO_MALLINFO */
4128
#endif /* NO_MALLINFO */
4129
 
4129
 
4130
int mspace_mallopt(int param_number, int value) {
4130
int mspace_mallopt(int param_number, int value) {
4131
  return change_mparam(param_number, value);
4131
  return change_mparam(param_number, value);
4132
}
4132
}
4133
 
4133
 
4134
#endif /* MSPACES */
4134
#endif /* MSPACES */
4135
 
4135
 
4136
/* -------------------- Alternative MORECORE functions ------------------- */
4136
/* -------------------- Alternative MORECORE functions ------------------- */
4137
 
4137
 
4138
/*
4138
/*
4139
  Guidelines for creating a custom version of MORECORE:
4139
  Guidelines for creating a custom version of MORECORE:
4140
 
4140
 
4141
  * For best performance, MORECORE should allocate in multiples of pagesize.
4141
  * For best performance, MORECORE should allocate in multiples of pagesize.
4142
  * MORECORE may allocate more memory than requested. (Or even less,
4142
  * MORECORE may allocate more memory than requested. (Or even less,
4143
      but this will usually result in a malloc failure.)
4143
      but this will usually result in a malloc failure.)
4144
  * MORECORE must not allocate memory when given argument zero, but
4144
  * MORECORE must not allocate memory when given argument zero, but
4145
      instead return one past the end address of memory from previous
4145
      instead return one past the end address of memory from previous
4146
      nonzero call.
4146
      nonzero call.
4147
  * For best performance, consecutive calls to MORECORE with positive
4147
  * For best performance, consecutive calls to MORECORE with positive
4148
      arguments should return increasing addresses, indicating that
4148
      arguments should return increasing addresses, indicating that
4149
      space has been contiguously extended.
4149
      space has been contiguously extended.
4150
  * Even though consecutive calls to MORECORE need not return contiguous
4150
  * Even though consecutive calls to MORECORE need not return contiguous
4151
      addresses, it must be OK for malloc'ed chunks to span multiple
4151
      addresses, it must be OK for malloc'ed chunks to span multiple
4152
      regions in those cases where they do happen to be contiguous.
4152
      regions in those cases where they do happen to be contiguous.
4153
  * MORECORE need not handle negative arguments -- it may instead
4153
  * MORECORE need not handle negative arguments -- it may instead
4154
      just return MFAIL when given negative arguments.
4154
      just return MFAIL when given negative arguments.
4155
      Negative arguments are always multiples of pagesize. MORECORE
4155
      Negative arguments are always multiples of pagesize. MORECORE
4156
      must not misinterpret negative args as large positive unsigned
4156
      must not misinterpret negative args as large positive unsigned
4157
      args. You can suppress all such calls from even occurring by defining
4157
      args. You can suppress all such calls from even occurring by defining
4158
      MORECORE_CANNOT_TRIM,
4158
      MORECORE_CANNOT_TRIM,
4159
 
4159
 
4160
  As an example alternative MORECORE, here is a custom allocator
4160
  As an example alternative MORECORE, here is a custom allocator
4161
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4161
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4162
  necessarily physically contiguous non-paged memory (locked in,
4162
  necessarily physically contiguous non-paged memory (locked in,
4163
  present and won't get swapped out).  You can use it by uncommenting
4163
  present and won't get swapped out).  You can use it by uncommenting
4164
  this section, adding some #includes, and setting up the appropriate
4164
  this section, adding some #includes, and setting up the appropriate
4165
  defines above:
4165
  defines above:
4166
 
4166
 
4167
      #define MORECORE osMoreCore
4167
      #define MORECORE osMoreCore
4168
 
4168
 
4169
  There is also a shutdown routine that should somehow be called for
4169
  There is also a shutdown routine that should somehow be called for
4170
  cleanup upon program exit.
4170
  cleanup upon program exit.
4171
 
4171
 
4172
  #define MAX_POOL_ENTRIES 100
4172
  #define MAX_POOL_ENTRIES 100
4173
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4173
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4174
  static int next_os_pool;
4174
  static int next_os_pool;
4175
  void *our_os_pools[MAX_POOL_ENTRIES];
4175
  void *our_os_pools[MAX_POOL_ENTRIES];
4176
 
4176
 
4177
  void *osMoreCore(int size)
4177
  void *osMoreCore(int size)
4178
  {
4178
  {
4179
    void *ptr = 0;
4179
    void *ptr = 0;
4180
    static void *sbrk_top = 0;
4180
    static void *sbrk_top = 0;
4181
 
4181
 
4182
    if (size > 0)
4182
    if (size > 0)
4183
    {
4183
    {
4184
      if (size < MINIMUM_MORECORE_SIZE)
4184
      if (size < MINIMUM_MORECORE_SIZE)
4185
         size = MINIMUM_MORECORE_SIZE;
4185
         size = MINIMUM_MORECORE_SIZE;
4186
      if (CurrentExecutionLevel() == kTaskLevel)
4186
      if (CurrentExecutionLevel() == kTaskLevel)
4187
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4187
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4188
      if (ptr == 0)
4188
      if (ptr == 0)
4189
      {
4189
      {
4190
        return (void *) MFAIL;
4190
        return (void *) MFAIL;
4191
      }
4191
      }
4192
      // save ptrs so they can be freed during cleanup
4192
      // save ptrs so they can be freed during cleanup
4193
      our_os_pools[next_os_pool] = ptr;
4193
      our_os_pools[next_os_pool] = ptr;
4194
      next_os_pool++;
4194
      next_os_pool++;
4195
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4195
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4196
      sbrk_top = (char *) ptr + size;
4196
      sbrk_top = (char *) ptr + size;
4197
      return ptr;
4197
      return ptr;
4198
    }
4198
    }
4199
    else if (size < 0)
4199
    else if (size < 0)
4200
    {
4200
    {
4201
      // we don't currently support shrink behavior
4201
      // we don't currently support shrink behavior
4202
      return (void *) MFAIL;
4202
      return (void *) MFAIL;
4203
    }
4203
    }
4204
    else
4204
    else
4205
    {
4205
    {
4206
      return sbrk_top;
4206
      return sbrk_top;
4207
    }
4207
    }
4208
  }
4208
  }
4209
 
4209
 
4210
  // cleanup any allocated memory pools
4210
  // cleanup any allocated memory pools
4211
  // called as last thing before shutting down driver
4211
  // called as last thing before shutting down driver
4212
 
4212
 
4213
  void osCleanupMem(void)
4213
  void osCleanupMem(void)
4214
  {
4214
  {
4215
    void **ptr;
4215
    void **ptr;
4216
 
4216
 
4217
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4217
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4218
      if (*ptr)
4218
      if (*ptr)
4219
      {
4219
      {
4220
         PoolDeallocate(*ptr);
4220
         PoolDeallocate(*ptr);
4221
         *ptr = 0;
4221
         *ptr = 0;
4222
      }
4222
      }
4223
  }
4223
  }
4224
 
4224
 
4225
*/
4225
*/
4226
 
4226
 
4227
 
4227
 
4228
/* -----------------------------------------------------------------------
4228
/* -----------------------------------------------------------------------
4229
History:
4229
History:
4230
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4230
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4231
      * Add max_footprint functions
4231
      * Add max_footprint functions
4232
      * Ensure all appropriate literals are size_t
4232
      * Ensure all appropriate literals are size_t
4233
      * Fix conditional compilation problem for some #define settings
4233
      * Fix conditional compilation problem for some #define settings
4234
      * Avoid concatenating segments with the one provided
4234
      * Avoid concatenating segments with the one provided
4235
        in create_mspace_with_base
4235
        in create_mspace_with_base
4236
      * Rename some variables to avoid compiler shadowing warnings
4236
      * Rename some variables to avoid compiler shadowing warnings
4237
      * Use explicit lock initialization.
4237
      * Use explicit lock initialization.
4238
      * Better handling of sbrk interference.
4238
      * Better handling of sbrk interference.
4239
      * Simplify and fix segment insertion, trimming and mspace_destroy
4239
      * Simplify and fix segment insertion, trimming and mspace_destroy
4240
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4240
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4241
      * Thanks especially to Dennis Flanagan for help on these.
4241
      * Thanks especially to Dennis Flanagan for help on these.
4242
 
4242
 
4243
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4243
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4244
      * Fix memalign brace error.
4244
      * Fix memalign brace error.
4245
 
4245
 
4246
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4246
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4247
      * Fix improper #endif nesting in C++
4247
      * Fix improper #endif nesting in C++
4248
      * Add explicit casts needed for C++
4248
      * Add explicit casts needed for C++
4249
 
4249
 
4250
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
4250
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
4251
      * Use trees for large bins
4251
      * Use trees for large bins
4252
      * Support mspaces
4252
      * Support mspaces
4253
      * Use segments to unify sbrk-based and mmap-based system allocation,
4253
      * Use segments to unify sbrk-based and mmap-based system allocation,
4254
        removing need for emulation on most platforms without sbrk.
4254
        removing need for emulation on most platforms without sbrk.
4255
      * Default safety checks
4255
      * Default safety checks
4256
      * Optional footer checks. Thanks to William Robertson for the idea.
4256
      * Optional footer checks. Thanks to William Robertson for the idea.
4257
      * Internal code refactoring
4257
      * Internal code refactoring
4258
      * Incorporate suggestions and platform-specific changes.
4258
      * Incorporate suggestions and platform-specific changes.
4259
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4259
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4260
        Aaron Bachmann,  Emery Berger, and others.
4260
        Aaron Bachmann,  Emery Berger, and others.
4261
      * Speed up non-fastbin processing enough to remove fastbins.
4261
      * Speed up non-fastbin processing enough to remove fastbins.
4262
      * Remove useless cfree() to avoid conflicts with other apps.
4262
      * Remove useless cfree() to avoid conflicts with other apps.
4263
      * Remove internal memcpy, memset. Compilers handle builtins better.
4263
      * Remove internal memcpy, memset. Compilers handle builtins better.
4264
      * Remove some options that no one ever used and rename others.
4264
      * Remove some options that no one ever used and rename others.
4265
 
4265
 
4266
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
4266
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
4267
      * Fix malloc_state bitmap array misdeclaration
4267
      * Fix malloc_state bitmap array misdeclaration
4268
 
4268
 
4269
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
4269
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
4270
      * Allow tuning of FIRST_SORTED_BIN_SIZE
4270
      * Allow tuning of FIRST_SORTED_BIN_SIZE
4271
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4271
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4272
      * Better detection and support for non-contiguousness of MORECORE.
4272
      * Better detection and support for non-contiguousness of MORECORE.
4273
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4273
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4274
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
4274
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
4275
      * Fix freeing of old top non-contiguous chunk im sysmalloc.
4275
      * Fix freeing of old top non-contiguous chunk im sysmalloc.
4276
      * Raised default trim and map thresholds to 256K.
4276
      * Raised default trim and map thresholds to 256K.
4277
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
4277
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
4278
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4278
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4279
      * Branch-free bin calculation
4279
      * Branch-free bin calculation
4280
      * Default trim and mmap thresholds now 256K.
4280
      * Default trim and mmap thresholds now 256K.
4281
 
4281
 
4282
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
4282
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
4283
      * Introduce independent_comalloc and independent_calloc.
4283
      * Introduce independent_comalloc and independent_calloc.
4284
        Thanks to Michael Pachos for motivation and help.
4284
        Thanks to Michael Pachos for motivation and help.
4285
      * Make optional .h file available
4285
      * Make optional .h file available
4286
      * Allow > 2GB requests on 32bit systems.
4286
      * Allow > 2GB requests on 32bit systems.
4287
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4287
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4288
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4288
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4289
        and Anonymous.
4289
        and Anonymous.
4290
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4290
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4291
        helping test this.)
4291
        helping test this.)
4292
      * memalign: check alignment arg
4292
      * memalign: check alignment arg
4293
      * realloc: don't try to shift chunks backwards, since this
4293
      * realloc: don't try to shift chunks backwards, since this
4294
        leads to  more fragmentation in some programs and doesn't
4294
        leads to  more fragmentation in some programs and doesn't
4295
        seem to help in any others.
4295
        seem to help in any others.
4296
      * Collect all cases in malloc requiring system memory into sysmalloc
4296
      * Collect all cases in malloc requiring system memory into sysmalloc
4297
      * Use mmap as backup to sbrk
4297
      * Use mmap as backup to sbrk
4298
      * Place all internal state in malloc_state
4298
      * Place all internal state in malloc_state
4299
      * Introduce fastbins (although similar to 2.5.1)
4299
      * Introduce fastbins (although similar to 2.5.1)
4300
      * Many minor tunings and cosmetic improvements
4300
      * Many minor tunings and cosmetic improvements
4301
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4301
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4302
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4302
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4303
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4303
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4304
      * Include errno.h to support default failure action.
4304
      * Include errno.h to support default failure action.
4305
 
4305
 
4306
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
4306
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
4307
      * return null for negative arguments
4307
      * return null for negative arguments
4308
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
4308
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
4309
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
4309
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
4310
          (e.g. WIN32 platforms)
4310
          (e.g. WIN32 platforms)
4311
         * Cleanup header file inclusion for WIN32 platforms
4311
         * Cleanup header file inclusion for WIN32 platforms
4312
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
4312
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
4313
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
4313
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
4314
           memory allocation routines
4314
           memory allocation routines
4315
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
4315
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
4316
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
4316
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
4317
           usage of 'assert' in non-WIN32 code
4317
           usage of 'assert' in non-WIN32 code
4318
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
4318
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
4319
           avoid infinite loop
4319
           avoid infinite loop
4320
      * Always call 'fREe()' rather than 'free()'
4320
      * Always call 'fREe()' rather than 'free()'
4321
 
4321
 
4322
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
4322
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
4323
      * Fixed ordering problem with boundary-stamping
4323
      * Fixed ordering problem with boundary-stamping
4324
 
4324
 
4325
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
4325
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
4326
      * Added pvalloc, as recommended by H.J. Liu
4326
      * Added pvalloc, as recommended by H.J. Liu
4327
      * Added 64bit pointer support mainly from Wolfram Gloger
4327
      * Added 64bit pointer support mainly from Wolfram Gloger
4328
      * Added anonymously donated WIN32 sbrk emulation
4328
      * Added anonymously donated WIN32 sbrk emulation
4329
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4329
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4330
      * malloc_extend_top: fix mask error that caused wastage after
4330
      * malloc_extend_top: fix mask error that caused wastage after
4331
        foreign sbrks
4331
        foreign sbrks
4332
      * Add linux mremap support code from HJ Liu
4332
      * Add linux mremap support code from HJ Liu
4333
 
4333
 
4334
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
4334
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
4335
      * Integrated most documentation with the code.
4335
      * Integrated most documentation with the code.
4336
      * Add support for mmap, with help from
4336
      * Add support for mmap, with help from
4337
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4337
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4338
      * Use last_remainder in more cases.
4338
      * Use last_remainder in more cases.
4339
      * Pack bins using idea from  colin@nyx10.cs.du.edu
4339
      * Pack bins using idea from  colin@nyx10.cs.du.edu
4340
      * Use ordered bins instead of best-fit threshhold
4340
      * Use ordered bins instead of best-fit threshhold
4341
      * Eliminate block-local decls to simplify tracing and debugging.
4341
      * Eliminate block-local decls to simplify tracing and debugging.
4342
      * Support another case of realloc via move into top
4342
      * Support another case of realloc via move into top
4343
      * Fix error occuring when initial sbrk_base not word-aligned.
4343
      * Fix error occuring when initial sbrk_base not word-aligned.
4344
      * Rely on page size for units instead of SBRK_UNIT to
4344
      * Rely on page size for units instead of SBRK_UNIT to
4345
        avoid surprises about sbrk alignment conventions.
4345
        avoid surprises about sbrk alignment conventions.
4346
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4346
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4347
        (raymond@es.ele.tue.nl) for the suggestion.
4347
        (raymond@es.ele.tue.nl) for the suggestion.
4348
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4348
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4349
      * More precautions for cases where other routines call sbrk,
4349
      * More precautions for cases where other routines call sbrk,
4350
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4350
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4351
      * Added macros etc., allowing use in linux libc from
4351
      * Added macros etc., allowing use in linux libc from
4352
        H.J. Lu (hjl@gnu.ai.mit.edu)
4352
        H.J. Lu (hjl@gnu.ai.mit.edu)
4353
      * Inverted this history list
4353
      * Inverted this history list
4354
 
4354
 
4355
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
4355
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
4356
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4356
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4357
      * Removed all preallocation code since under current scheme
4357
      * Removed all preallocation code since under current scheme
4358
        the work required to undo bad preallocations exceeds
4358
        the work required to undo bad preallocations exceeds
4359
        the work saved in good cases for most test programs.
4359
        the work saved in good cases for most test programs.
4360
      * No longer use return list or unconsolidated bins since
4360
      * No longer use return list or unconsolidated bins since
4361
        no scheme using them consistently outperforms those that don't
4361
        no scheme using them consistently outperforms those that don't
4362
        given above changes.
4362
        given above changes.
4363
      * Use best fit for very large chunks to prevent some worst-cases.
4363
      * Use best fit for very large chunks to prevent some worst-cases.
4364
      * Added some support for debugging
4364
      * Added some support for debugging
4365
 
4365
 
4366
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
4366
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
4367
      * Removed footers when chunks are in use. Thanks to
4367
      * Removed footers when chunks are in use. Thanks to
4368
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4368
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4369
 
4369
 
4370
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
4370
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
4371
      * Added malloc_trim, with help from Wolfram Gloger
4371
      * Added malloc_trim, with help from Wolfram Gloger
4372
        (wmglo@Dent.MED.Uni-Muenchen.DE).
4372
        (wmglo@Dent.MED.Uni-Muenchen.DE).
4373
 
4373
 
4374
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
4374
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
4375
 
4375
 
4376
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
4376
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
4377
      * realloc: try to expand in both directions
4377
      * realloc: try to expand in both directions
4378
      * malloc: swap order of clean-bin strategy;
4378
      * malloc: swap order of clean-bin strategy;
4379
      * realloc: only conditionally expand backwards
4379
      * realloc: only conditionally expand backwards
4380
      * Try not to scavenge used bins
4380
      * Try not to scavenge used bins
4381
      * Use bin counts as a guide to preallocation
4381
      * Use bin counts as a guide to preallocation
4382
      * Occasionally bin return list chunks in first scan
4382
      * Occasionally bin return list chunks in first scan
4383
      * Add a few optimizations from colin@nyx10.cs.du.edu
4383
      * Add a few optimizations from colin@nyx10.cs.du.edu
4384
 
4384
 
4385
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
4385
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
4386
      * faster bin computation & slightly different binning
4386
      * faster bin computation & slightly different binning
4387
      * merged all consolidations to one part of malloc proper
4387
      * merged all consolidations to one part of malloc proper
4388
         (eliminating old malloc_find_space & malloc_clean_bin)
4388
         (eliminating old malloc_find_space & malloc_clean_bin)
4389
      * Scan 2 returns chunks (not just 1)
4389
      * Scan 2 returns chunks (not just 1)
4390
      * Propagate failure in realloc if malloc returns 0
4390
      * Propagate failure in realloc if malloc returns 0
4391
      * Add stuff to allow compilation on non-ANSI compilers
4391
      * Add stuff to allow compilation on non-ANSI compilers
4392
          from kpv@research.att.com
4392
          from kpv@research.att.com
4393
 
4393
 
4394
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
4394
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
4395
      * removed potential for odd address access in prev_chunk
4395
      * removed potential for odd address access in prev_chunk
4396
      * removed dependency on getpagesize.h
4396
      * removed dependency on getpagesize.h
4397
      * misc cosmetics and a bit more internal documentation
4397
      * misc cosmetics and a bit more internal documentation
4398
      * anticosmetics: mangled names in macros to evade debugger strangeness
4398
      * anticosmetics: mangled names in macros to evade debugger strangeness
4399
      * tested on sparc, hp-700, dec-mips, rs6000
4399
      * tested on sparc, hp-700, dec-mips, rs6000
4400
          with gcc & native cc (hp, dec only) allowing
4400
          with gcc & native cc (hp, dec only) allowing
4401
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4401
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4402
 
4402
 
4403
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
4403
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
4404
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4404
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4405
         structure of old version,  but most details differ.)
4405
         structure of old version,  but most details differ.)
4406
 
4406
 
4407
*/
4407
*/
4408
 
4408