Subversion Repositories HelenOS-historic

Rev

Rev 999 | Rev 1143 | Go to most recent revision | Only display areas with differences | Ignore whitespace | Details | Blame | Last modification | View Log | RSS feed

Rev 999 Rev 1113
1
/*
1
/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
8
 
9
   Note: There may be an updated version of this malloc obtainable at
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
11
         Check before installing!
12
 
12
 
13
* Quickstart
13
* Quickstart
14
 
14
 
15
  This library is all in one file to simplify the most common usage:
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
19
  compile-time and dynamic tuning options.
20
 
20
 
21
  For convenience, an include file for code using this malloc is at:
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
33
  with library versions.
34
 
34
 
35
* Vital statistics:
35
* Vital statistics:
36
 
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
42
       (e.g. 2.7.2) supporting these.)
43
 
43
 
44
  Alignment:                                     8 bytes (default)
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128bytes), at the expense of using more space.
47
       if necessary (up to 128bytes), at the expense of using more space.
48
 
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
51
       Each malloced chunk has a hidden word of overhead holding size
52
       and status information, and additional cross-check word
52
       and status information, and additional cross-check word
53
       if FOOTERS is defined.
53
       if FOOTERS is defined.
54
 
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., number of extra bytes
60
       The maximum overhead wastage (i.e., number of extra bytes
61
       allocated than were requested in malloc) is less than or equal
61
       allocated than were requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
66
 
67
  Security: static-safe; optionally more or less
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
78
       cannot, detect all possible programming errors.
79
 
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
89
       FOOTERS may also be worth using with MSPACES.)
90
 
90
 
91
       By default detected errors cause the program to abort (calling
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
99
       run out of memory because dropped memory is never reclaimed.
100
 
100
 
101
       If you don't like either of these options, you can define
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
       else. And if if you are sure that your program using malloc has
103
       else. And if if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
105
       which might (or might not) provide a small performance improvement.
106
 
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
115
       a version of this malloc. (See http://www.malloc.de).
116
 
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
124
       like memset.
125
 
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
128
       others as well.
129
 
129
 
130
* Overview of algorithms
130
* Overview of algorithms
131
 
131
 
132
  This is not the fastest, most space-conserving, most portable, or
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
136
  general-purpose allocator for malloc-intensive programs.
137
 
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256bytes, it deviates from best-fit when there is not an
142
  than 256bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
149
  possibly fragmenting memory used only for large chunks.)
150
 
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
156
  system allocation routines, which is often just 1.
157
 
157
 
158
  The implementation is not very modular and seriously overuses
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
161
  but now, enough of them seem not to.
162
 
162
 
163
  Some compilers issue a lot of warnings about code that is
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
166
  ignored.
167
 
167
 
168
  For a longer but out of date high-level description, see
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
170
 
171
* MSPACES
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
185
 
186
  You can similarly create thread-local allocators by storing
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
191
      return mspace_malloc(tlms, bytes);
192
    }
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
200
  originating spaces.
201
 
201
 
202
 -------------------------  Compile-time options ---------------------------
202
 -------------------------  Compile-time options ---------------------------
203
 
203
 
204
Be careful in setting #define values for numerical constants of type
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly casted.
206
to size_t precision unless they are explicitly casted.
207
 
207
 
208
WIN32                    default: defined if _WIN32 defined
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
210
  Otherwise defaults are for unix.
211
 
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
217
  the case of 8-byte alignment.
218
 
218
 
219
MSPACES                  default: 0 (false)
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
221
  This is only supported if HAVE_MMAP is true.
222
 
222
 
223
ONLY_MSPACES             default: 0 (false)
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
224
  If true, only compile in mspace versions, not regular versions.
225
 
225
 
226
USE_LOCKS                default: 0 (false)
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
229
  overridden on a per-mspace basis for mspace versions.)
230
 
230
 
231
FOOTERS                  default: 0
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
234
  space and time overhead.
235
 
235
 
236
INSECURE                 default: 0
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
238
 
239
USE_DL_PREFIX            default: NOT defined
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
242
  of a program, using your regular system malloc elsewhere.
243
 
243
 
244
ABORT                    default: defined as abort()
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
253
  can better optimize code conditionally calling it.
254
 
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to bypassed
256
  Controls whether detected bad addresses cause them to bypassed
257
  rather than aborting. If set, detected bad arguments to free and
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
264
  generates slower code than the default abort policy.
265
 
265
 
266
DEBUG                    default: NOT defined
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
275
  in the course of computing the summaries.
276
 
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
281
  ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
282
  which will usually make debugging easier.
282
  which will usually make debugging easier.
283
 
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc fails to be able to
285
  The action to take before "return 0" when malloc fails to be able to
286
  return memory because there is none available.
286
  return memory because there is none available.
287
 
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
289
  True if this system supports sbrk or an emulation of it.
290
 
290
 
291
MORECORE                  default: sbrk
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
302
  version of MORECORE.
303
 
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
308
  set it true anyway, since malloc copes with non-contiguities.
309
  Setting it false when definitely non-contiguous saves time
309
  Setting it false when definitely non-contiguous saves time
310
  and possibly wasted space it would take to discover this though.
310
  and possibly wasted space it would take to discover this though.
311
 
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
316
  arguments.
317
 
317
 
318
HAVE_MMAP                 default: 1 (true)
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
  able to unmap memory that may have be allocated using multiple calls
325
  able to unmap memory that may have be allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
326
  to MMAP, so long as they are adjacent.
327
 
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true realloc() uses mremap() to re-allocate large blocks and
329
  If true realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
330
  extend or shrink allocation spaces.
331
 
331
 
332
MMAP_CLEARS               default: 1 on unix
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
334
  for standard unix mmap using /dev/zero.
335
 
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
342
  slightly faster than the C version.)
343
 
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where page size is determined using getSystemInfo during
348
  if WIN32, where page size is determined using getSystemInfo during
349
  initialization.
349
  initialization.
350
 
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
353
  stamping footers. Otherwise, the current time is used.
354
 
354
 
355
NO_MALLINFO                default: 0
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
357
  of dealing with mismatches between system declarations and
358
  those in this file.
358
  those in this file.
359
 
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if  HAVE_USR_INCLUDE_MALLOC_H is not set
363
  size_t. The value is used only if  HAVE_USR_INCLUDE_MALLOC_H is not set
364
 
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
369
  realloc(p, 0).
370
 
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
375
  You might need to manually insert some of the declarations they provide.
376
 
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions to be called so
385
  this value to prevent system allocation functions to be called so
386
  often, especially if they are slow.  The value must be at least one
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
390
  "TOP_PAD")
391
 
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
413
  will immediately be returned to the system.
414
 
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
437
  disable mmap by setting to MAX_SIZE_T.
438
 
438
 
439
*/
439
*/
440
 
440
 
441
#include <sys/types.h>  /* For size_t */
441
#include <sys/types.h>  /* For size_t */
442
 
442
 
443
/** Non-default helenos customizations */
443
/** Non-default helenos customizations */
444
#define LACKS_FCNTL_H
444
#define LACKS_FCNTL_H
445
#define LACKS_SYS_MMAN_H
445
#define LACKS_SYS_MMAN_H
446
#define LACKS_SYS_PARAM_H
446
#define LACKS_SYS_PARAM_H
447
#undef HAVE_MMAP
447
#undef HAVE_MMAP
448
#define HAVE_MMAP 0
448
#define HAVE_MMAP 0
449
#define LACKS_ERRNO_H
449
#define LACKS_ERRNO_H
450
/* Set errno? */
450
/* Set errno? */
451
#undef MALLOC_FAILURE_ACTION
451
#undef MALLOC_FAILURE_ACTION
452
#define MALLOC_FAILURE_ACTION
452
#define MALLOC_FAILURE_ACTION
453
 
453
 
454
/* The maximum possible size_t value has all bits set */
454
/* The maximum possible size_t value has all bits set */
455
#define MAX_SIZE_T           (~(size_t)0)
455
#define MAX_SIZE_T           (~(size_t)0)
456
 
456
 
457
#define ONLY_MSPACES 0
457
#define ONLY_MSPACES 0
458
#define MSPACES 0
458
#define MSPACES 0
459
#define MALLOC_ALIGNMENT ((size_t)8U)
459
#define MALLOC_ALIGNMENT ((size_t)8U)
460
#define FOOTERS 0
460
#define FOOTERS 0
461
#define ABORT  abort()
461
#define ABORT  abort()
462
#define ABORT_ON_ASSERT_FAILURE 1
462
#define ABORT_ON_ASSERT_FAILURE 1
463
#define PROCEED_ON_ERROR 0
463
#define PROCEED_ON_ERROR 0
464
#define USE_LOCKS 0
464
#define USE_LOCKS 0
465
#define INSECURE 0
465
#define INSECURE 0
466
#define HAVE_MMAP 0
466
#define HAVE_MMAP 0
467
 
467
 
468
#define MMAP_CLEARS 1
468
#define MMAP_CLEARS 1
469
 
469
 
470
#define HAVE_MORECORE 1
470
#define HAVE_MORECORE 1
471
#define MORECORE_CONTIGUOUS 1
471
#define MORECORE_CONTIGUOUS 1
472
#define MORECORE sbrk
472
#define MORECORE sbrk
473
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
473
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
474
 
474
 
475
#ifndef DEFAULT_TRIM_THRESHOLD
475
#ifndef DEFAULT_TRIM_THRESHOLD
476
#ifndef MORECORE_CANNOT_TRIM
476
#ifndef MORECORE_CANNOT_TRIM
477
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
477
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
478
#else   /* MORECORE_CANNOT_TRIM */
478
#else   /* MORECORE_CANNOT_TRIM */
479
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
479
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
480
#endif  /* MORECORE_CANNOT_TRIM */
480
#endif  /* MORECORE_CANNOT_TRIM */
481
#endif  /* DEFAULT_TRIM_THRESHOLD */
481
#endif  /* DEFAULT_TRIM_THRESHOLD */
482
#ifndef DEFAULT_MMAP_THRESHOLD
482
#ifndef DEFAULT_MMAP_THRESHOLD
483
#if HAVE_MMAP
483
#if HAVE_MMAP
484
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
484
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
485
#else   /* HAVE_MMAP */
485
#else   /* HAVE_MMAP */
486
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
486
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
487
#endif  /* HAVE_MMAP */
487
#endif  /* HAVE_MMAP */
488
#endif  /* DEFAULT_MMAP_THRESHOLD */
488
#endif  /* DEFAULT_MMAP_THRESHOLD */
489
#ifndef USE_BUILTIN_FFS
489
#ifndef USE_BUILTIN_FFS
490
#define USE_BUILTIN_FFS 0
490
#define USE_BUILTIN_FFS 0
491
#endif  /* USE_BUILTIN_FFS */
491
#endif  /* USE_BUILTIN_FFS */
492
#ifndef USE_DEV_RANDOM
492
#ifndef USE_DEV_RANDOM
493
#define USE_DEV_RANDOM 0
493
#define USE_DEV_RANDOM 0
494
#endif  /* USE_DEV_RANDOM */
494
#endif  /* USE_DEV_RANDOM */
495
#ifndef NO_MALLINFO
495
#ifndef NO_MALLINFO
496
#define NO_MALLINFO 0
496
#define NO_MALLINFO 0
497
#endif  /* NO_MALLINFO */
497
#endif  /* NO_MALLINFO */
498
#ifndef MALLINFO_FIELD_TYPE
498
#ifndef MALLINFO_FIELD_TYPE
499
#define MALLINFO_FIELD_TYPE size_t
499
#define MALLINFO_FIELD_TYPE size_t
500
#endif  /* MALLINFO_FIELD_TYPE */
500
#endif  /* MALLINFO_FIELD_TYPE */
501
 
501
 
502
/*
502
/*
503
  mallopt tuning options.  SVID/XPG defines four standard parameter
503
  mallopt tuning options.  SVID/XPG defines four standard parameter
504
  numbers for mallopt, normally defined in malloc.h.  None of these
504
  numbers for mallopt, normally defined in malloc.h.  None of these
505
  are used in this malloc, so setting them has no effect. But this
505
  are used in this malloc, so setting them has no effect. But this
506
  malloc does support the following options.
506
  malloc does support the following options.
507
*/
507
*/
508
 
508
 
509
#define M_TRIM_THRESHOLD     (-1)
509
#define M_TRIM_THRESHOLD     (-1)
510
#define M_GRANULARITY        (-2)
510
#define M_GRANULARITY        (-2)
511
#define M_MMAP_THRESHOLD     (-3)
511
#define M_MMAP_THRESHOLD     (-3)
512
 
512
 
513
/*
513
/*
514
  ========================================================================
514
  ========================================================================
515
  To make a fully customizable malloc.h header file, cut everything
515
  To make a fully customizable malloc.h header file, cut everything
516
  above this line, put into file malloc.h, edit to suit, and #include it
516
  above this line, put into file malloc.h, edit to suit, and #include it
517
  on the next line, as well as in programs that use this malloc.
517
  on the next line, as well as in programs that use this malloc.
518
  ========================================================================
518
  ========================================================================
519
*/
519
*/
520
 
520
 
521
#include "malloc.h"
521
#include "malloc.h"
522
 
522
 
523
/*------------------------------ internal #includes ---------------------- */
523
/*------------------------------ internal #includes ---------------------- */
524
 
524
 
525
#include <stdio.h>       /* for printing in malloc_stats */
525
#include <stdio.h>       /* for printing in malloc_stats */
526
#include <string.h>
526
#include <string.h>
527
 
527
 
528
#ifndef LACKS_ERRNO_H
528
#ifndef LACKS_ERRNO_H
529
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
529
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
530
#endif /* LACKS_ERRNO_H */
530
#endif /* LACKS_ERRNO_H */
531
#if FOOTERS
531
#if FOOTERS
532
#include <time.h>        /* for magic initialization */
532
#include <time.h>        /* for magic initialization */
533
#endif /* FOOTERS */
533
#endif /* FOOTERS */
534
#ifndef LACKS_STDLIB_H
534
#ifndef LACKS_STDLIB_H
535
#include <stdlib.h>      /* for abort() */
535
#include <stdlib.h>      /* for abort() */
536
#endif /* LACKS_STDLIB_H */
536
#endif /* LACKS_STDLIB_H */
537
#ifdef DEBUG
537
#ifdef DEBUG
538
#if ABORT_ON_ASSERT_FAILURE
538
#if ABORT_ON_ASSERT_FAILURE
539
#define assert(x) if(!(x)) ABORT
539
#define assert(x) {if(!(x)) {printf(#x);ABORT;}}
540
#else /* ABORT_ON_ASSERT_FAILURE */
540
#else /* ABORT_ON_ASSERT_FAILURE */
541
#include <assert.h>
541
#include <assert.h>
542
#endif /* ABORT_ON_ASSERT_FAILURE */
542
#endif /* ABORT_ON_ASSERT_FAILURE */
543
#else  /* DEBUG */
543
#else  /* DEBUG */
544
#define assert(x)
544
#define assert(x)
545
#endif /* DEBUG */
545
#endif /* DEBUG */
546
#if USE_BUILTIN_FFS
546
#if USE_BUILTIN_FFS
547
#ifndef LACKS_STRINGS_H
547
#ifndef LACKS_STRINGS_H
548
#include <strings.h>     /* for ffs */
548
#include <strings.h>     /* for ffs */
549
#endif /* LACKS_STRINGS_H */
549
#endif /* LACKS_STRINGS_H */
550
#endif /* USE_BUILTIN_FFS */
550
#endif /* USE_BUILTIN_FFS */
551
#if HAVE_MMAP
551
#if HAVE_MMAP
552
#ifndef LACKS_SYS_MMAN_H
552
#ifndef LACKS_SYS_MMAN_H
553
#include <sys/mman.h>    /* for mmap */
553
#include <sys/mman.h>    /* for mmap */
554
#endif /* LACKS_SYS_MMAN_H */
554
#endif /* LACKS_SYS_MMAN_H */
555
#ifndef LACKS_FCNTL_H
555
#ifndef LACKS_FCNTL_H
556
#include <fcntl.h>
556
#include <fcntl.h>
557
#endif /* LACKS_FCNTL_H */
557
#endif /* LACKS_FCNTL_H */
558
#endif /* HAVE_MMAP */
558
#endif /* HAVE_MMAP */
559
#if HAVE_MORECORE
559
#if HAVE_MORECORE
560
#ifndef LACKS_UNISTD_H
560
#ifndef LACKS_UNISTD_H
561
#include <unistd.h>     /* for sbrk */
561
#include <unistd.h>     /* for sbrk */
562
#else /* LACKS_UNISTD_H */
562
#else /* LACKS_UNISTD_H */
563
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
563
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
564
extern void*     sbrk(ptrdiff_t);
564
extern void*     sbrk(ptrdiff_t);
565
#endif /* FreeBSD etc */
565
#endif /* FreeBSD etc */
566
#endif /* LACKS_UNISTD_H */
566
#endif /* LACKS_UNISTD_H */
567
#endif /* HAVE_MMAP */
567
#endif /* HAVE_MMAP */
568
 
568
 
569
#ifndef WIN32
569
#ifndef WIN32
570
#ifndef malloc_getpagesize
570
#ifndef malloc_getpagesize
571
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
571
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
572
#    ifndef _SC_PAGE_SIZE
572
#    ifndef _SC_PAGE_SIZE
573
#      define _SC_PAGE_SIZE _SC_PAGESIZE
573
#      define _SC_PAGE_SIZE _SC_PAGESIZE
574
#    endif
574
#    endif
575
#  endif
575
#  endif
576
#  ifdef _SC_PAGE_SIZE
576
#  ifdef _SC_PAGE_SIZE
577
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
577
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
578
#  else
578
#  else
579
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
579
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
580
       extern size_t getpagesize();
580
       extern size_t getpagesize();
581
#      define malloc_getpagesize getpagesize()
581
#      define malloc_getpagesize getpagesize()
582
#    else
582
#    else
583
#      ifdef WIN32 /* use supplied emulation of getpagesize */
583
#      ifdef WIN32 /* use supplied emulation of getpagesize */
584
#        define malloc_getpagesize getpagesize()
584
#        define malloc_getpagesize getpagesize()
585
#      else
585
#      else
586
#        ifndef LACKS_SYS_PARAM_H
586
#        ifndef LACKS_SYS_PARAM_H
587
#          include <sys/param.h>
587
#          include <sys/param.h>
588
#        endif
588
#        endif
589
#        ifdef EXEC_PAGESIZE
589
#        ifdef EXEC_PAGESIZE
590
#          define malloc_getpagesize EXEC_PAGESIZE
590
#          define malloc_getpagesize EXEC_PAGESIZE
591
#        else
591
#        else
592
#          ifdef NBPG
592
#          ifdef NBPG
593
#            ifndef CLSIZE
593
#            ifndef CLSIZE
594
#              define malloc_getpagesize NBPG
594
#              define malloc_getpagesize NBPG
595
#            else
595
#            else
596
#              define malloc_getpagesize (NBPG * CLSIZE)
596
#              define malloc_getpagesize (NBPG * CLSIZE)
597
#            endif
597
#            endif
598
#          else
598
#          else
599
#            ifdef NBPC
599
#            ifdef NBPC
600
#              define malloc_getpagesize NBPC
600
#              define malloc_getpagesize NBPC
601
#            else
601
#            else
602
#              ifdef PAGESIZE
602
#              ifdef PAGESIZE
603
#                define malloc_getpagesize PAGESIZE
603
#                define malloc_getpagesize PAGESIZE
604
#              else /* just guess */
604
#              else /* just guess */
605
#                define malloc_getpagesize ((size_t)4096U)
605
#                define malloc_getpagesize ((size_t)4096U)
606
#              endif
606
#              endif
607
#            endif
607
#            endif
608
#          endif
608
#          endif
609
#        endif
609
#        endif
610
#      endif
610
#      endif
611
#    endif
611
#    endif
612
#  endif
612
#  endif
613
#endif
613
#endif
614
#endif
614
#endif
615
 
615
 
616
/* ------------------- size_t and alignment properties -------------------- */
616
/* ------------------- size_t and alignment properties -------------------- */
617
 
617
 
618
/* The byte and bit size of a size_t */
618
/* The byte and bit size of a size_t */
619
#define SIZE_T_SIZE         (sizeof(size_t))
619
#define SIZE_T_SIZE         (sizeof(size_t))
620
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
620
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
621
 
621
 
622
/* Some constants coerced to size_t */
622
/* Some constants coerced to size_t */
623
/* Annoying but necessary to avoid errors on some plaftorms */
623
/* Annoying but necessary to avoid errors on some plaftorms */
624
#define SIZE_T_ZERO         ((size_t)0)
624
#define SIZE_T_ZERO         ((size_t)0)
625
#define SIZE_T_ONE          ((size_t)1)
625
#define SIZE_T_ONE          ((size_t)1)
626
#define SIZE_T_TWO          ((size_t)2)
626
#define SIZE_T_TWO          ((size_t)2)
627
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
627
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
628
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
628
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
629
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
629
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
630
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
630
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
631
 
631
 
632
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
632
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
633
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
633
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
634
 
634
 
635
/* True if address a has acceptable alignment */
635
/* True if address a has acceptable alignment */
636
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
636
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
637
 
637
 
638
/* the number of bytes to offset an address to align it */
638
/* the number of bytes to offset an address to align it */
639
#define align_offset(A)\
639
#define align_offset(A)\
640
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
640
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
641
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
641
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
642
 
642
 
643
/* -------------------------- MMAP preliminaries ------------------------- */
643
/* -------------------------- MMAP preliminaries ------------------------- */
644
 
644
 
645
/*
645
/*
646
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
646
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
647
   checks to fail so compiler optimizer can delete code rather than
647
   checks to fail so compiler optimizer can delete code rather than
648
   using so many "#if"s.
648
   using so many "#if"s.
649
*/
649
*/
650
 
650
 
651
 
651
 
652
/* MORECORE and MMAP must return MFAIL on failure */
652
/* MORECORE and MMAP must return MFAIL on failure */
653
#define MFAIL                ((void*)(MAX_SIZE_T))
653
#define MFAIL                ((void*)(MAX_SIZE_T))
654
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
654
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
655
 
655
 
656
#if !HAVE_MMAP
656
#if !HAVE_MMAP
657
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
657
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
658
#define USE_MMAP_BIT         (SIZE_T_ZERO)
658
#define USE_MMAP_BIT         (SIZE_T_ZERO)
659
#define CALL_MMAP(s)         MFAIL
659
#define CALL_MMAP(s)         MFAIL
660
#define CALL_MUNMAP(a, s)    (-1)
660
#define CALL_MUNMAP(a, s)    (-1)
661
#define DIRECT_MMAP(s)       MFAIL
661
#define DIRECT_MMAP(s)       MFAIL
662
 
662
 
663
#else /* HAVE_MMAP */
663
#else /* HAVE_MMAP */
664
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
664
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
665
#define USE_MMAP_BIT         (SIZE_T_ONE)
665
#define USE_MMAP_BIT         (SIZE_T_ONE)
666
 
666
 
667
#ifndef WIN32
667
#ifndef WIN32
668
#define CALL_MUNMAP(a, s)    munmap((a), (s))
668
#define CALL_MUNMAP(a, s)    munmap((a), (s))
669
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
669
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
670
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
670
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
671
#define MAP_ANONYMOUS        MAP_ANON
671
#define MAP_ANONYMOUS        MAP_ANON
672
#endif /* MAP_ANON */
672
#endif /* MAP_ANON */
673
#ifdef MAP_ANONYMOUS
673
#ifdef MAP_ANONYMOUS
674
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
674
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
675
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
675
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
676
#else /* MAP_ANONYMOUS */
676
#else /* MAP_ANONYMOUS */
677
/*
677
/*
678
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
678
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
679
   is unlikely to be needed, but is supplied just in case.
679
   is unlikely to be needed, but is supplied just in case.
680
*/
680
*/
681
#define MMAP_FLAGS           (MAP_PRIVATE)
681
#define MMAP_FLAGS           (MAP_PRIVATE)
682
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
682
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
683
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
683
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
684
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
684
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
685
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
685
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
686
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
686
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
687
#endif /* MAP_ANONYMOUS */
687
#endif /* MAP_ANONYMOUS */
688
 
688
 
689
#define DIRECT_MMAP(s)       CALL_MMAP(s)
689
#define DIRECT_MMAP(s)       CALL_MMAP(s)
690
#else /* WIN32 */
690
#else /* WIN32 */
691
 
691
 
692
/* Win32 MMAP via VirtualAlloc */
692
/* Win32 MMAP via VirtualAlloc */
693
static void* win32mmap(size_t size) {
693
static void* win32mmap(size_t size) {
694
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
694
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
695
  return (ptr != 0)? ptr: MFAIL;
695
  return (ptr != 0)? ptr: MFAIL;
696
}
696
}
697
 
697
 
698
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
698
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
699
static void* win32direct_mmap(size_t size) {
699
static void* win32direct_mmap(size_t size) {
700
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
700
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
701
                           PAGE_READWRITE);
701
                           PAGE_READWRITE);
702
  return (ptr != 0)? ptr: MFAIL;
702
  return (ptr != 0)? ptr: MFAIL;
703
}
703
}
704
 
704
 
705
/* This function supports releasing coalesed segments */
705
/* This function supports releasing coalesed segments */
706
static int win32munmap(void* ptr, size_t size) {
706
static int win32munmap(void* ptr, size_t size) {
707
  MEMORY_BASIC_INFORMATION minfo;
707
  MEMORY_BASIC_INFORMATION minfo;
708
  char* cptr = ptr;
708
  char* cptr = ptr;
709
  while (size) {
709
  while (size) {
710
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
710
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
711
      return -1;
711
      return -1;
712
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
712
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
713
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
713
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
714
      return -1;
714
      return -1;
715
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
715
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
716
      return -1;
716
      return -1;
717
    cptr += minfo.RegionSize;
717
    cptr += minfo.RegionSize;
718
    size -= minfo.RegionSize;
718
    size -= minfo.RegionSize;
719
  }
719
  }
720
  return 0;
720
  return 0;
721
}
721
}
722
 
722
 
723
#define CALL_MMAP(s)         win32mmap(s)
723
#define CALL_MMAP(s)         win32mmap(s)
724
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
724
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
725
#define DIRECT_MMAP(s)       win32direct_mmap(s)
725
#define DIRECT_MMAP(s)       win32direct_mmap(s)
726
#endif /* WIN32 */
726
#endif /* WIN32 */
727
#endif /* HAVE_MMAP */
727
#endif /* HAVE_MMAP */
728
 
728
 
729
#if HAVE_MMAP && HAVE_MREMAP
729
#if HAVE_MMAP && HAVE_MREMAP
730
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
730
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
731
#else  /* HAVE_MMAP && HAVE_MREMAP */
731
#else  /* HAVE_MMAP && HAVE_MREMAP */
732
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
732
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
733
#endif /* HAVE_MMAP && HAVE_MREMAP */
733
#endif /* HAVE_MMAP && HAVE_MREMAP */
734
 
734
 
735
#if HAVE_MORECORE
735
#if HAVE_MORECORE
736
#define CALL_MORECORE(S)     MORECORE(S)
736
#define CALL_MORECORE(S)     MORECORE(S)
737
#else  /* HAVE_MORECORE */
737
#else  /* HAVE_MORECORE */
738
#define CALL_MORECORE(S)     MFAIL
738
#define CALL_MORECORE(S)     MFAIL
739
#endif /* HAVE_MORECORE */
739
#endif /* HAVE_MORECORE */
740
 
740
 
741
/* mstate bit set if continguous morecore disabled or failed */
741
/* mstate bit set if continguous morecore disabled or failed */
742
#define USE_NONCONTIGUOUS_BIT (4U)
742
#define USE_NONCONTIGUOUS_BIT (4U)
743
 
743
 
744
/* segment bit set in create_mspace_with_base */
744
/* segment bit set in create_mspace_with_base */
745
#define EXTERN_BIT            (8U)
745
#define EXTERN_BIT            (8U)
746
 
746
 
747
 
747
 
748
/* --------------------------- Lock preliminaries ------------------------ */
748
/* --------------------------- Lock preliminaries ------------------------ */
749
 
749
 
750
#if USE_LOCKS
750
#if USE_LOCKS
751
 
751
 
752
/*
752
/*
753
  When locks are defined, there are up to two global locks:
753
  When locks are defined, there are up to two global locks:
754
 
754
 
755
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
755
  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
756
    MORECORE.  In many cases sys_alloc requires two calls, that should
756
    MORECORE.  In many cases sys_alloc requires two calls, that should
757
    not be interleaved with calls by other threads.  This does not
757
    not be interleaved with calls by other threads.  This does not
758
    protect against direct calls to MORECORE by other threads not
758
    protect against direct calls to MORECORE by other threads not
759
    using this lock, so there is still code to cope the best we can on
759
    using this lock, so there is still code to cope the best we can on
760
    interference.
760
    interference.
761
 
761
 
762
  * magic_init_mutex ensures that mparams.magic and other
762
  * magic_init_mutex ensures that mparams.magic and other
763
    unique mparams values are initialized only once.
763
    unique mparams values are initialized only once.
764
*/
764
*/
765
 
765
 
766
#ifndef WIN32
766
#ifndef WIN32
767
/* By default use posix locks */
767
/* By default use posix locks */
768
#include <pthread.h>
768
#include <pthread.h>
769
#define MLOCK_T pthread_mutex_t
769
#define MLOCK_T pthread_mutex_t
770
#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
770
#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
771
#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
771
#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
772
#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
772
#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
773
 
773
 
774
#if HAVE_MORECORE
774
#if HAVE_MORECORE
775
static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
775
static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
776
#endif /* HAVE_MORECORE */
776
#endif /* HAVE_MORECORE */
777
 
777
 
778
static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
778
static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
779
 
779
 
780
#else /* WIN32 */
780
#else /* WIN32 */
781
/*
781
/*
782
   Because lock-protected regions have bounded times, and there
782
   Because lock-protected regions have bounded times, and there
783
   are no recursive lock calls, we can use simple spinlocks.
783
   are no recursive lock calls, we can use simple spinlocks.
784
*/
784
*/
785
 
785
 
786
#define MLOCK_T long
786
#define MLOCK_T long
787
static int win32_acquire_lock (MLOCK_T *sl) {
787
static int win32_acquire_lock (MLOCK_T *sl) {
788
  for (;;) {
788
  for (;;) {
789
#ifdef InterlockedCompareExchangePointer
789
#ifdef InterlockedCompareExchangePointer
790
    if (!InterlockedCompareExchange(sl, 1, 0))
790
    if (!InterlockedCompareExchange(sl, 1, 0))
791
      return 0;
791
      return 0;
792
#else  /* Use older void* version */
792
#else  /* Use older void* version */
793
    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
793
    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
794
      return 0;
794
      return 0;
795
#endif /* InterlockedCompareExchangePointer */
795
#endif /* InterlockedCompareExchangePointer */
796
    Sleep (0);
796
    Sleep (0);
797
  }
797
  }
798
}
798
}
799
 
799
 
800
static void win32_release_lock (MLOCK_T *sl) {
800
static void win32_release_lock (MLOCK_T *sl) {
801
  InterlockedExchange (sl, 0);
801
  InterlockedExchange (sl, 0);
802
}
802
}
803
 
803
 
804
#define INITIAL_LOCK(l)      *(l)=0
804
#define INITIAL_LOCK(l)      *(l)=0
805
#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
805
#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
806
#define RELEASE_LOCK(l)      win32_release_lock(l)
806
#define RELEASE_LOCK(l)      win32_release_lock(l)
807
#if HAVE_MORECORE
807
#if HAVE_MORECORE
808
static MLOCK_T morecore_mutex;
808
static MLOCK_T morecore_mutex;
809
#endif /* HAVE_MORECORE */
809
#endif /* HAVE_MORECORE */
810
static MLOCK_T magic_init_mutex;
810
static MLOCK_T magic_init_mutex;
811
#endif /* WIN32 */
811
#endif /* WIN32 */
812
 
812
 
813
#define USE_LOCK_BIT               (2U)
813
#define USE_LOCK_BIT               (2U)
814
#else  /* USE_LOCKS */
814
#else  /* USE_LOCKS */
815
#define USE_LOCK_BIT               (0U)
815
#define USE_LOCK_BIT               (0U)
816
#define INITIAL_LOCK(l)
816
#define INITIAL_LOCK(l)
817
#endif /* USE_LOCKS */
817
#endif /* USE_LOCKS */
818
 
818
 
819
#if USE_LOCKS && HAVE_MORECORE
819
#if USE_LOCKS && HAVE_MORECORE
820
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
820
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
821
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
821
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
822
#else /* USE_LOCKS && HAVE_MORECORE */
822
#else /* USE_LOCKS && HAVE_MORECORE */
823
#define ACQUIRE_MORECORE_LOCK()
823
#define ACQUIRE_MORECORE_LOCK()
824
#define RELEASE_MORECORE_LOCK()
824
#define RELEASE_MORECORE_LOCK()
825
#endif /* USE_LOCKS && HAVE_MORECORE */
825
#endif /* USE_LOCKS && HAVE_MORECORE */
826
 
826
 
827
#if USE_LOCKS
827
#if USE_LOCKS
828
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
828
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
829
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
829
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
830
#else  /* USE_LOCKS */
830
#else  /* USE_LOCKS */
831
#define ACQUIRE_MAGIC_INIT_LOCK()
831
#define ACQUIRE_MAGIC_INIT_LOCK()
832
#define RELEASE_MAGIC_INIT_LOCK()
832
#define RELEASE_MAGIC_INIT_LOCK()
833
#endif /* USE_LOCKS */
833
#endif /* USE_LOCKS */
834
 
834
 
835
 
835
 
836
/* -----------------------  Chunk representations ------------------------ */
836
/* -----------------------  Chunk representations ------------------------ */
837
 
837
 
838
/*
838
/*
839
  (The following includes lightly edited explanations by Colin Plumb.)
839
  (The following includes lightly edited explanations by Colin Plumb.)
840
 
840
 
841
  The malloc_chunk declaration below is misleading (but accurate and
841
  The malloc_chunk declaration below is misleading (but accurate and
842
  necessary).  It declares a "view" into memory allowing access to
842
  necessary).  It declares a "view" into memory allowing access to
843
  necessary fields at known offsets from a given base.
843
  necessary fields at known offsets from a given base.
844
 
844
 
845
  Chunks of memory are maintained using a `boundary tag' method as
845
  Chunks of memory are maintained using a `boundary tag' method as
846
  originally described by Knuth.  (See the paper by Paul Wilson
846
  originally described by Knuth.  (See the paper by Paul Wilson
847
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
847
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
848
  techniques.)  Sizes of free chunks are stored both in the front of
848
  techniques.)  Sizes of free chunks are stored both in the front of
849
  each chunk and at the end.  This makes consolidating fragmented
849
  each chunk and at the end.  This makes consolidating fragmented
850
  chunks into bigger chunks fast.  The head fields also hold bits
850
  chunks into bigger chunks fast.  The head fields also hold bits
851
  representing whether chunks are free or in use.
851
  representing whether chunks are free or in use.
852
 
852
 
853
  Here are some pictures to make it clearer.  They are "exploded" to
853
  Here are some pictures to make it clearer.  They are "exploded" to
854
  show that the state of a chunk can be thought of as extending from
854
  show that the state of a chunk can be thought of as extending from
855
  the high 31 bits of the head field of its header through the
855
  the high 31 bits of the head field of its header through the
856
  prev_foot and PINUSE_BIT bit of the following chunk header.
856
  prev_foot and PINUSE_BIT bit of the following chunk header.
857
 
857
 
858
  A chunk that's in use looks like:
858
  A chunk that's in use looks like:
859
 
859
 
860
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
860
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
861
           | Size of previous chunk (if P = 1)                             |
861
           | Size of previous chunk (if P = 1)                             |
862
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
862
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
863
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
863
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
864
         | Size of this chunk                                         1| +-+
864
         | Size of this chunk                                         1| +-+
865
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
865
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
866
         |                                                               |
866
         |                                                               |
867
         +-                                                             -+
867
         +-                                                             -+
868
         |                                                               |
868
         |                                                               |
869
         +-                                                             -+
869
         +-                                                             -+
870
         |                                                               :
870
         |                                                               :
871
         +-      size - sizeof(size_t) available payload bytes          -+
871
         +-      size - sizeof(size_t) available payload bytes          -+
872
         :                                                               |
872
         :                                                               |
873
 chunk-> +-                                                             -+
873
 chunk-> +-                                                             -+
874
         |                                                               |
874
         |                                                               |
875
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
875
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
876
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
876
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
877
       | Size of next chunk (may or may not be in use)               | +-+
877
       | Size of next chunk (may or may not be in use)               | +-+
878
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
878
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
879
 
879
 
880
    And if it's free, it looks like this:
880
    And if it's free, it looks like this:
881
 
881
 
882
   chunk-> +-                                                             -+
882
   chunk-> +-                                                             -+
883
           | User payload (must be in use, or we would have merged!)       |
883
           | User payload (must be in use, or we would have merged!)       |
884
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
884
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
885
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
885
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
886
         | Size of this chunk                                         0| +-+
886
         | Size of this chunk                                         0| +-+
887
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
887
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
888
         | Next pointer                                                  |
888
         | Next pointer                                                  |
889
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
889
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
890
         | Prev pointer                                                  |
890
         | Prev pointer                                                  |
891
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
891
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
892
         |                                                               :
892
         |                                                               :
893
         +-      size - sizeof(struct chunk) unused bytes               -+
893
         +-      size - sizeof(struct chunk) unused bytes               -+
894
         :                                                               |
894
         :                                                               |
895
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
895
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
896
         | Size of this chunk                                            |
896
         | Size of this chunk                                            |
897
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
897
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
898
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
898
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
899
       | Size of next chunk (must be in use, or we would have merged)| +-+
899
       | Size of next chunk (must be in use, or we would have merged)| +-+
900
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
900
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
901
       |                                                               :
901
       |                                                               :
902
       +- User payload                                                -+
902
       +- User payload                                                -+
903
       :                                                               |
903
       :                                                               |
904
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
904
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
905
                                                                     |0|
905
                                                                     |0|
906
                                                                     +-+
906
                                                                     +-+
907
  Note that since we always merge adjacent free chunks, the chunks
907
  Note that since we always merge adjacent free chunks, the chunks
908
  adjacent to a free chunk must be in use.
908
  adjacent to a free chunk must be in use.
909
 
909
 
910
  Given a pointer to a chunk (which can be derived trivially from the
910
  Given a pointer to a chunk (which can be derived trivially from the
911
  payload pointer) we can, in O(1) time, find out whether the adjacent
911
  payload pointer) we can, in O(1) time, find out whether the adjacent
912
  chunks are free, and if so, unlink them from the lists that they
912
  chunks are free, and if so, unlink them from the lists that they
913
  are on and merge them with the current chunk.
913
  are on and merge them with the current chunk.
914
 
914
 
915
  Chunks always begin on even word boundaries, so the mem portion
915
  Chunks always begin on even word boundaries, so the mem portion
916
  (which is returned to the user) is also on an even word boundary, and
916
  (which is returned to the user) is also on an even word boundary, and
917
  thus at least double-word aligned.
917
  thus at least double-word aligned.
918
 
918
 
919
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
919
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
920
  chunk size (which is always a multiple of two words), is an in-use
920
  chunk size (which is always a multiple of two words), is an in-use
921
  bit for the *previous* chunk.  If that bit is *clear*, then the
921
  bit for the *previous* chunk.  If that bit is *clear*, then the
922
  word before the current chunk size contains the previous chunk
922
  word before the current chunk size contains the previous chunk
923
  size, and can be used to find the front of the previous chunk.
923
  size, and can be used to find the front of the previous chunk.
924
  The very first chunk allocated always has this bit set, preventing
924
  The very first chunk allocated always has this bit set, preventing
925
  access to non-existent (or non-owned) memory. If pinuse is set for
925
  access to non-existent (or non-owned) memory. If pinuse is set for
926
  any given chunk, then you CANNOT determine the size of the
926
  any given chunk, then you CANNOT determine the size of the
927
  previous chunk, and might even get a memory addressing fault when
927
  previous chunk, and might even get a memory addressing fault when
928
  trying to do so.
928
  trying to do so.
929
 
929
 
930
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
930
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
931
  the chunk size redundantly records whether the current chunk is
931
  the chunk size redundantly records whether the current chunk is
932
  inuse. This redundancy enables usage checks within free and realloc,
932
  inuse. This redundancy enables usage checks within free and realloc,
933
  and reduces indirection when freeing and consolidating chunks.
933
  and reduces indirection when freeing and consolidating chunks.
934
 
934
 
935
  Each freshly allocated chunk must have both cinuse and pinuse set.
935
  Each freshly allocated chunk must have both cinuse and pinuse set.
936
  That is, each allocated chunk borders either a previously allocated
936
  That is, each allocated chunk borders either a previously allocated
937
  and still in-use chunk, or the base of its memory arena. This is
937
  and still in-use chunk, or the base of its memory arena. This is
938
  ensured by making all allocations from the the `lowest' part of any
938
  ensured by making all allocations from the the `lowest' part of any
939
  found chunk.  Further, no free chunk physically borders another one,
939
  found chunk.  Further, no free chunk physically borders another one,
940
  so each free chunk is known to be preceded and followed by either
940
  so each free chunk is known to be preceded and followed by either
941
  inuse chunks or the ends of memory.
941
  inuse chunks or the ends of memory.
942
 
942
 
943
  Note that the `foot' of the current chunk is actually represented
943
  Note that the `foot' of the current chunk is actually represented
944
  as the prev_foot of the NEXT chunk. This makes it easier to
944
  as the prev_foot of the NEXT chunk. This makes it easier to
945
  deal with alignments etc but can be very confusing when trying
945
  deal with alignments etc but can be very confusing when trying
946
  to extend or adapt this code.
946
  to extend or adapt this code.
947
 
947
 
948
  The exceptions to all this are
948
  The exceptions to all this are
949
 
949
 
950
     1. The special chunk `top' is the top-most available chunk (i.e.,
950
     1. The special chunk `top' is the top-most available chunk (i.e.,
951
        the one bordering the end of available memory). It is treated
951
        the one bordering the end of available memory). It is treated
952
        specially.  Top is never included in any bin, is used only if
952
        specially.  Top is never included in any bin, is used only if
953
        no other chunk is available, and is released back to the
953
        no other chunk is available, and is released back to the
954
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
954
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
955
        the top chunk is treated as larger (and thus less well
955
        the top chunk is treated as larger (and thus less well
956
        fitting) than any other available chunk.  The top chunk
956
        fitting) than any other available chunk.  The top chunk
957
        doesn't update its trailing size field since there is no next
957
        doesn't update its trailing size field since there is no next
958
        contiguous chunk that would have to index off it. However,
958
        contiguous chunk that would have to index off it. However,
959
        space is still allocated for it (TOP_FOOT_SIZE) to enable
959
        space is still allocated for it (TOP_FOOT_SIZE) to enable
960
        separation or merging when space is extended.
960
        separation or merging when space is extended.
961
 
961
 
962
     3. Chunks allocated via mmap, which have the lowest-order bit
962
     3. Chunks allocated via mmap, which have the lowest-order bit
963
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
963
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
964
        PINUSE_BIT in their head fields.  Because they are allocated
964
        PINUSE_BIT in their head fields.  Because they are allocated
965
        one-by-one, each must carry its own prev_foot field, which is
965
        one-by-one, each must carry its own prev_foot field, which is
966
        also used to hold the offset this chunk has within its mmapped
966
        also used to hold the offset this chunk has within its mmapped
967
        region, which is needed to preserve alignment. Each mmapped
967
        region, which is needed to preserve alignment. Each mmapped
968
        chunk is trailed by the first two fields of a fake next-chunk
968
        chunk is trailed by the first two fields of a fake next-chunk
969
        for sake of usage checks.
969
        for sake of usage checks.
970
 
970
 
971
*/
971
*/
972
 
972
 
973
struct malloc_chunk {
973
struct malloc_chunk {
974
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
974
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
975
  size_t               head;       /* Size and inuse bits. */
975
  size_t               head;       /* Size and inuse bits. */
976
  struct malloc_chunk* fd;         /* double links -- used only if free. */
976
  struct malloc_chunk* fd;         /* double links -- used only if free. */
977
  struct malloc_chunk* bk;
977
  struct malloc_chunk* bk;
978
};
978
};
979
 
979
 
980
typedef struct malloc_chunk  mchunk;
980
typedef struct malloc_chunk  mchunk;
981
typedef struct malloc_chunk* mchunkptr;
981
typedef struct malloc_chunk* mchunkptr;
982
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
982
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
983
typedef unsigned int bindex_t;         /* Described below */
983
typedef unsigned int bindex_t;         /* Described below */
984
typedef unsigned int binmap_t;         /* Described below */
984
typedef unsigned int binmap_t;         /* Described below */
985
typedef unsigned int flag_t;           /* The type of various bit flag sets */
985
typedef unsigned int flag_t;           /* The type of various bit flag sets */
986
 
986
 
987
/* ------------------- Chunks sizes and alignments ----------------------- */
987
/* ------------------- Chunks sizes and alignments ----------------------- */
988
 
988
 
989
#define MCHUNK_SIZE         (sizeof(mchunk))
989
#define MCHUNK_SIZE         (sizeof(mchunk))
990
 
990
 
991
#if FOOTERS
991
#if FOOTERS
992
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
992
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
993
#else /* FOOTERS */
993
#else /* FOOTERS */
994
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
994
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
995
#endif /* FOOTERS */
995
#endif /* FOOTERS */
996
 
996
 
997
/* MMapped chunks need a second word of overhead ... */
997
/* MMapped chunks need a second word of overhead ... */
998
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
998
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
999
/* ... and additional padding for fake next-chunk at foot */
999
/* ... and additional padding for fake next-chunk at foot */
1000
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1000
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1001
 
1001
 
1002
/* The smallest size we can malloc is an aligned minimal chunk */
1002
/* The smallest size we can malloc is an aligned minimal chunk */
1003
#define MIN_CHUNK_SIZE\
1003
#define MIN_CHUNK_SIZE\
1004
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1004
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1005
 
1005
 
1006
/* conversion from malloc headers to user pointers, and back */
1006
/* conversion from malloc headers to user pointers, and back */
1007
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1007
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1008
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1008
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1009
/* chunk associated with aligned address A */
1009
/* chunk associated with aligned address A */
1010
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1010
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1011
 
1011
 
1012
/* Bounds on request (not chunk) sizes. */
1012
/* Bounds on request (not chunk) sizes. */
1013
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1013
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1014
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1014
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1015
 
1015
 
1016
/* pad request bytes into a usable size */
1016
/* pad request bytes into a usable size */
1017
#define pad_request(req) \
1017
#define pad_request(req) \
1018
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1018
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1019
 
1019
 
1020
/* pad request, checking for minimum (but not maximum) */
1020
/* pad request, checking for minimum (but not maximum) */
1021
#define request2size(req) \
1021
#define request2size(req) \
1022
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
1022
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
1023
 
1023
 
1024
 
1024
 
1025
/* ------------------ Operations on head and foot fields ----------------- */
1025
/* ------------------ Operations on head and foot fields ----------------- */
1026
 
1026
 
1027
/*
1027
/*
1028
  The head field of a chunk is or'ed with PINUSE_BIT when previous
1028
  The head field of a chunk is or'ed with PINUSE_BIT when previous
1029
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1029
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1030
  use. If the chunk was obtained with mmap, the prev_foot field has
1030
  use. If the chunk was obtained with mmap, the prev_foot field has
1031
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1031
  IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1032
  mmapped region to the base of the chunk.
1032
  mmapped region to the base of the chunk.
1033
*/
1033
*/
1034
 
1034
 
1035
#define PINUSE_BIT          (SIZE_T_ONE)
1035
#define PINUSE_BIT          (SIZE_T_ONE)
1036
#define CINUSE_BIT          (SIZE_T_TWO)
1036
#define CINUSE_BIT          (SIZE_T_TWO)
1037
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1037
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1038
 
1038
 
1039
/* Head value for fenceposts */
1039
/* Head value for fenceposts */
1040
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1040
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1041
 
1041
 
1042
/* extraction of fields from head words */
1042
/* extraction of fields from head words */
1043
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1043
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1044
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1044
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1045
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1045
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1046
 
1046
 
1047
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1047
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1048
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1048
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1049
 
1049
 
1050
/* Treat space at ptr +/- offset as a chunk */
1050
/* Treat space at ptr +/- offset as a chunk */
1051
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1051
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1052
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1052
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1053
 
1053
 
1054
/* Ptr to next or previous physical malloc_chunk. */
1054
/* Ptr to next or previous physical malloc_chunk. */
1055
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1055
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1056
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1056
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1057
 
1057
 
1058
/* extract next chunk's pinuse bit */
1058
/* extract next chunk's pinuse bit */
1059
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1059
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1060
 
1060
 
1061
/* Get/set size at footer */
1061
/* Get/set size at footer */
1062
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1062
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1063
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1063
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1064
 
1064
 
1065
/* Set size, pinuse bit, and foot */
1065
/* Set size, pinuse bit, and foot */
1066
#define set_size_and_pinuse_of_free_chunk(p, s)\
1066
#define set_size_and_pinuse_of_free_chunk(p, s)\
1067
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1067
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1068
 
1068
 
1069
/* Set size, pinuse bit, foot, and clear next pinuse */
1069
/* Set size, pinuse bit, foot, and clear next pinuse */
1070
#define set_free_with_pinuse(p, s, n)\
1070
#define set_free_with_pinuse(p, s, n)\
1071
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1071
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1072
 
1072
 
1073
#define is_mmapped(p)\
1073
#define is_mmapped(p)\
1074
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1074
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1075
 
1075
 
1076
/* Get the internal overhead associated with chunk p */
1076
/* Get the internal overhead associated with chunk p */
1077
#define overhead_for(p)\
1077
#define overhead_for(p)\
1078
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1078
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1079
 
1079
 
1080
/* Return true if malloced space is not necessarily cleared */
1080
/* Return true if malloced space is not necessarily cleared */
1081
#if MMAP_CLEARS
1081
#if MMAP_CLEARS
1082
#define calloc_must_clear(p) (!is_mmapped(p))
1082
#define calloc_must_clear(p) (!is_mmapped(p))
1083
#else /* MMAP_CLEARS */
1083
#else /* MMAP_CLEARS */
1084
#define calloc_must_clear(p) (1)
1084
#define calloc_must_clear(p) (1)
1085
#endif /* MMAP_CLEARS */
1085
#endif /* MMAP_CLEARS */
1086
 
1086
 
1087
/* ---------------------- Overlaid data structures ----------------------- */
1087
/* ---------------------- Overlaid data structures ----------------------- */
1088
 
1088
 
1089
/*
1089
/*
1090
  When chunks are not in use, they are treated as nodes of either
1090
  When chunks are not in use, they are treated as nodes of either
1091
  lists or trees.
1091
  lists or trees.
1092
 
1092
 
1093
  "Small"  chunks are stored in circular doubly-linked lists, and look
1093
  "Small"  chunks are stored in circular doubly-linked lists, and look
1094
  like this:
1094
  like this:
1095
 
1095
 
1096
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1096
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1097
            |             Size of previous chunk                            |
1097
            |             Size of previous chunk                            |
1098
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1098
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1099
    `head:' |             Size of chunk, in bytes                         |P|
1099
    `head:' |             Size of chunk, in bytes                         |P|
1100
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1100
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1101
            |             Forward pointer to next chunk in list             |
1101
            |             Forward pointer to next chunk in list             |
1102
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1102
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1103
            |             Back pointer to previous chunk in list            |
1103
            |             Back pointer to previous chunk in list            |
1104
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1104
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1105
            |             Unused space (may be 0 bytes long)                .
1105
            |             Unused space (may be 0 bytes long)                .
1106
            .                                                               .
1106
            .                                                               .
1107
            .                                                               |
1107
            .                                                               |
1108
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1108
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1109
    `foot:' |             Size of chunk, in bytes                           |
1109
    `foot:' |             Size of chunk, in bytes                           |
1110
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1110
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1111
 
1111
 
1112
  Larger chunks are kept in a form of bitwise digital trees (aka
1112
  Larger chunks are kept in a form of bitwise digital trees (aka
1113
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1113
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1114
  free chunks greater than 256 bytes, their size doesn't impose any
1114
  free chunks greater than 256 bytes, their size doesn't impose any
1115
  constraints on user chunk sizes.  Each node looks like:
1115
  constraints on user chunk sizes.  Each node looks like:
1116
 
1116
 
1117
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1117
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1118
            |             Size of previous chunk                            |
1118
            |             Size of previous chunk                            |
1119
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1119
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1120
    `head:' |             Size of chunk, in bytes                         |P|
1120
    `head:' |             Size of chunk, in bytes                         |P|
1121
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1121
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1122
            |             Forward pointer to next chunk of same size        |
1122
            |             Forward pointer to next chunk of same size        |
1123
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1123
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1124
            |             Back pointer to previous chunk of same size       |
1124
            |             Back pointer to previous chunk of same size       |
1125
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1125
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1126
            |             Pointer to left child (child[0])                  |
1126
            |             Pointer to left child (child[0])                  |
1127
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1127
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1128
            |             Pointer to right child (child[1])                 |
1128
            |             Pointer to right child (child[1])                 |
1129
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1129
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1130
            |             Pointer to parent                                 |
1130
            |             Pointer to parent                                 |
1131
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1131
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1132
            |             bin index of this chunk                           |
1132
            |             bin index of this chunk                           |
1133
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1133
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1134
            |             Unused space                                      .
1134
            |             Unused space                                      .
1135
            .                                                               |
1135
            .                                                               |
1136
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1136
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1137
    `foot:' |             Size of chunk, in bytes                           |
1137
    `foot:' |             Size of chunk, in bytes                           |
1138
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1138
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1139
 
1139
 
1140
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1140
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1141
  of the same size are arranged in a circularly-linked list, with only
1141
  of the same size are arranged in a circularly-linked list, with only
1142
  the oldest chunk (the next to be used, in our FIFO ordering)
1142
  the oldest chunk (the next to be used, in our FIFO ordering)
1143
  actually in the tree.  (Tree members are distinguished by a non-null
1143
  actually in the tree.  (Tree members are distinguished by a non-null
1144
  parent pointer.)  If a chunk with the same size an an existing node
1144
  parent pointer.)  If a chunk with the same size an an existing node
1145
  is inserted, it is linked off the existing node using pointers that
1145
  is inserted, it is linked off the existing node using pointers that
1146
  work in the same way as fd/bk pointers of small chunks.
1146
  work in the same way as fd/bk pointers of small chunks.
1147
 
1147
 
1148
  Each tree contains a power of 2 sized range of chunk sizes (the
1148
  Each tree contains a power of 2 sized range of chunk sizes (the
1149
  smallest is 0x100 <= x < 0x180), which is is divided in half at each
1149
  smallest is 0x100 <= x < 0x180), which is is divided in half at each
1150
  tree level, with the chunks in the smaller half of the range (0x100
1150
  tree level, with the chunks in the smaller half of the range (0x100
1151
  <= x < 0x140 for the top nose) in the left subtree and the larger
1151
  <= x < 0x140 for the top nose) in the left subtree and the larger
1152
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1152
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1153
  done by inspecting individual bits.
1153
  done by inspecting individual bits.
1154
 
1154
 
1155
  Using these rules, each node's left subtree contains all smaller
1155
  Using these rules, each node's left subtree contains all smaller
1156
  sizes than its right subtree.  However, the node at the root of each
1156
  sizes than its right subtree.  However, the node at the root of each
1157
  subtree has no particular ordering relationship to either.  (The
1157
  subtree has no particular ordering relationship to either.  (The
1158
  dividing line between the subtree sizes is based on trie relation.)
1158
  dividing line between the subtree sizes is based on trie relation.)
1159
  If we remove the last chunk of a given size from the interior of the
1159
  If we remove the last chunk of a given size from the interior of the
1160
  tree, we need to replace it with a leaf node.  The tree ordering
1160
  tree, we need to replace it with a leaf node.  The tree ordering
1161
  rules permit a node to be replaced by any leaf below it.
1161
  rules permit a node to be replaced by any leaf below it.
1162
 
1162
 
1163
  The smallest chunk in a tree (a common operation in a best-fit
1163
  The smallest chunk in a tree (a common operation in a best-fit
1164
  allocator) can be found by walking a path to the leftmost leaf in
1164
  allocator) can be found by walking a path to the leftmost leaf in
1165
  the tree.  Unlike a usual binary tree, where we follow left child
1165
  the tree.  Unlike a usual binary tree, where we follow left child
1166
  pointers until we reach a null, here we follow the right child
1166
  pointers until we reach a null, here we follow the right child
1167
  pointer any time the left one is null, until we reach a leaf with
1167
  pointer any time the left one is null, until we reach a leaf with
1168
  both child pointers null. The smallest chunk in the tree will be
1168
  both child pointers null. The smallest chunk in the tree will be
1169
  somewhere along that path.
1169
  somewhere along that path.
1170
 
1170
 
1171
  The worst case number of steps to add, find, or remove a node is
1171
  The worst case number of steps to add, find, or remove a node is
1172
  bounded by the number of bits differentiating chunks within
1172
  bounded by the number of bits differentiating chunks within
1173
  bins. Under current bin calculations, this ranges from 6 up to 21
1173
  bins. Under current bin calculations, this ranges from 6 up to 21
1174
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1174
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1175
  is of course much better.
1175
  is of course much better.
1176
*/
1176
*/
1177
 
1177
 
1178
struct malloc_tree_chunk {
1178
struct malloc_tree_chunk {
1179
  /* The first four fields must be compatible with malloc_chunk */
1179
  /* The first four fields must be compatible with malloc_chunk */
1180
  size_t                    prev_foot;
1180
  size_t                    prev_foot;
1181
  size_t                    head;
1181
  size_t                    head;
1182
  struct malloc_tree_chunk* fd;
1182
  struct malloc_tree_chunk* fd;
1183
  struct malloc_tree_chunk* bk;
1183
  struct malloc_tree_chunk* bk;
1184
 
1184
 
1185
  struct malloc_tree_chunk* child[2];
1185
  struct malloc_tree_chunk* child[2];
1186
  struct malloc_tree_chunk* parent;
1186
  struct malloc_tree_chunk* parent;
1187
  bindex_t                  index;
1187
  bindex_t                  index;
1188
};
1188
};
1189
 
1189
 
1190
typedef struct malloc_tree_chunk  tchunk;
1190
typedef struct malloc_tree_chunk  tchunk;
1191
typedef struct malloc_tree_chunk* tchunkptr;
1191
typedef struct malloc_tree_chunk* tchunkptr;
1192
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1192
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1193
 
1193
 
1194
/* A little helper macro for trees */
1194
/* A little helper macro for trees */
1195
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1195
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1196
 
1196
 
1197
/* ----------------------------- Segments -------------------------------- */
1197
/* ----------------------------- Segments -------------------------------- */
1198
 
1198
 
1199
/*
1199
/*
1200
  Each malloc space may include non-contiguous segments, held in a
1200
  Each malloc space may include non-contiguous segments, held in a
1201
  list headed by an embedded malloc_segment record representing the
1201
  list headed by an embedded malloc_segment record representing the
1202
  top-most space. Segments also include flags holding properties of
1202
  top-most space. Segments also include flags holding properties of
1203
  the space. Large chunks that are directly allocated by mmap are not
1203
  the space. Large chunks that are directly allocated by mmap are not
1204
  included in this list. They are instead independently created and
1204
  included in this list. They are instead independently created and
1205
  destroyed without otherwise keeping track of them.
1205
  destroyed without otherwise keeping track of them.
1206
 
1206
 
1207
  Segment management mainly comes into play for spaces allocated by
1207
  Segment management mainly comes into play for spaces allocated by
1208
  MMAP.  Any call to MMAP might or might not return memory that is
1208
  MMAP.  Any call to MMAP might or might not return memory that is
1209
  adjacent to an existing segment.  MORECORE normally contiguously
1209
  adjacent to an existing segment.  MORECORE normally contiguously
1210
  extends the current space, so this space is almost always adjacent,
1210
  extends the current space, so this space is almost always adjacent,
1211
  which is simpler and faster to deal with. (This is why MORECORE is
1211
  which is simpler and faster to deal with. (This is why MORECORE is
1212
  used preferentially to MMAP when both are available -- see
1212
  used preferentially to MMAP when both are available -- see
1213
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1213
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1214
  hinting mechanisms (inconsistently) supported in various
1214
  hinting mechanisms (inconsistently) supported in various
1215
  implementations of unix mmap, or distinguish reserving from
1215
  implementations of unix mmap, or distinguish reserving from
1216
  committing memory. Instead, we just ask for space, and exploit
1216
  committing memory. Instead, we just ask for space, and exploit
1217
  contiguity when we get it.  It is probably possible to do
1217
  contiguity when we get it.  It is probably possible to do
1218
  better than this on some systems, but no general scheme seems
1218
  better than this on some systems, but no general scheme seems
1219
  to be significantly better.
1219
  to be significantly better.
1220
 
1220
 
1221
  Management entails a simpler variant of the consolidation scheme
1221
  Management entails a simpler variant of the consolidation scheme
1222
  used for chunks to reduce fragmentation -- new adjacent memory is
1222
  used for chunks to reduce fragmentation -- new adjacent memory is
1223
  normally prepended or appended to an existing segment. However,
1223
  normally prepended or appended to an existing segment. However,
1224
  there are limitations compared to chunk consolidation that mostly
1224
  there are limitations compared to chunk consolidation that mostly
1225
  reflect the fact that segment processing is relatively infrequent
1225
  reflect the fact that segment processing is relatively infrequent
1226
  (occurring only when getting memory from system) and that we
1226
  (occurring only when getting memory from system) and that we
1227
  don't expect to have huge numbers of segments:
1227
  don't expect to have huge numbers of segments:
1228
 
1228
 
1229
  * Segments are not indexed, so traversal requires linear scans.  (It
1229
  * Segments are not indexed, so traversal requires linear scans.  (It
1230
    would be possible to index these, but is not worth the extra
1230
    would be possible to index these, but is not worth the extra
1231
    overhead and complexity for most programs on most platforms.)
1231
    overhead and complexity for most programs on most platforms.)
1232
  * New segments are only appended to old ones when holding top-most
1232
  * New segments are only appended to old ones when holding top-most
1233
    memory; if they cannot be prepended to others, they are held in
1233
    memory; if they cannot be prepended to others, they are held in
1234
    different segments.
1234
    different segments.
1235
 
1235
 
1236
  Except for the top-most segment of an mstate, each segment record
1236
  Except for the top-most segment of an mstate, each segment record
1237
  is kept at the tail of its segment. Segments are added by pushing
1237
  is kept at the tail of its segment. Segments are added by pushing
1238
  segment records onto the list headed by &mstate.seg for the
1238
  segment records onto the list headed by &mstate.seg for the
1239
  containing mstate.
1239
  containing mstate.
1240
 
1240
 
1241
  Segment flags control allocation/merge/deallocation policies:
1241
  Segment flags control allocation/merge/deallocation policies:
1242
  * If EXTERN_BIT set, then we did not allocate this segment,
1242
  * If EXTERN_BIT set, then we did not allocate this segment,
1243
    and so should not try to deallocate or merge with others.
1243
    and so should not try to deallocate or merge with others.
1244
    (This currently holds only for the initial segment passed
1244
    (This currently holds only for the initial segment passed
1245
    into create_mspace_with_base.)
1245
    into create_mspace_with_base.)
1246
  * If IS_MMAPPED_BIT set, the segment may be merged with
1246
  * If IS_MMAPPED_BIT set, the segment may be merged with
1247
    other surrounding mmapped segments and trimmed/de-allocated
1247
    other surrounding mmapped segments and trimmed/de-allocated
1248
    using munmap.
1248
    using munmap.
1249
  * If neither bit is set, then the segment was obtained using
1249
  * If neither bit is set, then the segment was obtained using
1250
    MORECORE so can be merged with surrounding MORECORE'd segments
1250
    MORECORE so can be merged with surrounding MORECORE'd segments
1251
    and deallocated/trimmed using MORECORE with negative arguments.
1251
    and deallocated/trimmed using MORECORE with negative arguments.
1252
*/
1252
*/
1253
 
1253
 
1254
struct malloc_segment {
1254
struct malloc_segment {
1255
  char*        base;             /* base address */
1255
  char*        base;             /* base address */
1256
  size_t       size;             /* allocated size */
1256
  size_t       size;             /* allocated size */
1257
  struct malloc_segment* next;   /* ptr to next segment */
1257
  struct malloc_segment* next;   /* ptr to next segment */
1258
  flag_t       sflags;           /* mmap and extern flag */
1258
  flag_t       sflags;           /* mmap and extern flag */
1259
};
1259
};
1260
 
1260
 
1261
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1261
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1262
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1262
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1263
 
1263
 
1264
typedef struct malloc_segment  msegment;
1264
typedef struct malloc_segment  msegment;
1265
typedef struct malloc_segment* msegmentptr;
1265
typedef struct malloc_segment* msegmentptr;
1266
 
1266
 
1267
/* ---------------------------- malloc_state ----------------------------- */
1267
/* ---------------------------- malloc_state ----------------------------- */
1268
 
1268
 
1269
/*
1269
/*
1270
   A malloc_state holds all of the bookkeeping for a space.
1270
   A malloc_state holds all of the bookkeeping for a space.
1271
   The main fields are:
1271
   The main fields are:
1272
 
1272
 
1273
  Top
1273
  Top
1274
    The topmost chunk of the currently active segment. Its size is
1274
    The topmost chunk of the currently active segment. Its size is
1275
    cached in topsize.  The actual size of topmost space is
1275
    cached in topsize.  The actual size of topmost space is
1276
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1276
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1277
    fenceposts and segment records if necessary when getting more
1277
    fenceposts and segment records if necessary when getting more
1278
    space from the system.  The size at which to autotrim top is
1278
    space from the system.  The size at which to autotrim top is
1279
    cached from mparams in trim_check, except that it is disabled if
1279
    cached from mparams in trim_check, except that it is disabled if
1280
    an autotrim fails.
1280
    an autotrim fails.
1281
 
1281
 
1282
  Designated victim (dv)
1282
  Designated victim (dv)
1283
    This is the preferred chunk for servicing small requests that
1283
    This is the preferred chunk for servicing small requests that
1284
    don't have exact fits.  It is normally the chunk split off most
1284
    don't have exact fits.  It is normally the chunk split off most
1285
    recently to service another small request.  Its size is cached in
1285
    recently to service another small request.  Its size is cached in
1286
    dvsize. The link fields of this chunk are not maintained since it
1286
    dvsize. The link fields of this chunk are not maintained since it
1287
    is not kept in a bin.
1287
    is not kept in a bin.
1288
 
1288
 
1289
  SmallBins
1289
  SmallBins
1290
    An array of bin headers for free chunks.  These bins hold chunks
1290
    An array of bin headers for free chunks.  These bins hold chunks
1291
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1291
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1292
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1292
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1293
    use in double-linked lists, each bin header acts as a malloc_chunk
1293
    use in double-linked lists, each bin header acts as a malloc_chunk
1294
    pointing to the real first node, if it exists (else pointing to
1294
    pointing to the real first node, if it exists (else pointing to
1295
    itself).  This avoids special-casing for headers.  But to avoid
1295
    itself).  This avoids special-casing for headers.  But to avoid
1296
    waste, we allocate only the fd/bk pointers of bins, and then use
1296
    waste, we allocate only the fd/bk pointers of bins, and then use
1297
    repositioning tricks to treat these as the fields of a chunk.
1297
    repositioning tricks to treat these as the fields of a chunk.
1298
 
1298
 
1299
  TreeBins
1299
  TreeBins
1300
    Treebins are pointers to the roots of trees holding a range of
1300
    Treebins are pointers to the roots of trees holding a range of
1301
    sizes. There are 2 equally spaced treebins for each power of two
1301
    sizes. There are 2 equally spaced treebins for each power of two
1302
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1302
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1303
    larger.
1303
    larger.
1304
 
1304
 
1305
  Bin maps
1305
  Bin maps
1306
    There is one bit map for small bins ("smallmap") and one for
1306
    There is one bit map for small bins ("smallmap") and one for
1307
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1307
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1308
    clears the bit when empty.  Bit operations are then used to avoid
1308
    clears the bit when empty.  Bit operations are then used to avoid
1309
    bin-by-bin searching -- nearly all "search" is done without ever
1309
    bin-by-bin searching -- nearly all "search" is done without ever
1310
    looking at bins that won't be selected.  The bit maps
1310
    looking at bins that won't be selected.  The bit maps
1311
    conservatively use 32 bits per map word, even if on 64bit system.
1311
    conservatively use 32 bits per map word, even if on 64bit system.
1312
    For a good description of some of the bit-based techniques used
1312
    For a good description of some of the bit-based techniques used
1313
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1313
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1314
    supplement at http://hackersdelight.org/). Many of these are
1314
    supplement at http://hackersdelight.org/). Many of these are
1315
    intended to reduce the branchiness of paths through malloc etc, as
1315
    intended to reduce the branchiness of paths through malloc etc, as
1316
    well as to reduce the number of memory locations read or written.
1316
    well as to reduce the number of memory locations read or written.
1317
 
1317
 
1318
  Segments
1318
  Segments
1319
    A list of segments headed by an embedded malloc_segment record
1319
    A list of segments headed by an embedded malloc_segment record
1320
    representing the initial space.
1320
    representing the initial space.
1321
 
1321
 
1322
  Address check support
1322
  Address check support
1323
    The least_addr field is the least address ever obtained from
1323
    The least_addr field is the least address ever obtained from
1324
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1324
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1325
    than this are trapped (unless INSECURE is defined).
1325
    than this are trapped (unless INSECURE is defined).
1326
 
1326
 
1327
  Magic tag
1327
  Magic tag
1328
    A cross-check field that should always hold same value as mparams.magic.
1328
    A cross-check field that should always hold same value as mparams.magic.
1329
 
1329
 
1330
  Flags
1330
  Flags
1331
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1331
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1332
 
1332
 
1333
  Statistics
1333
  Statistics
1334
    Each space keeps track of current and maximum system memory
1334
    Each space keeps track of current and maximum system memory
1335
    obtained via MORECORE or MMAP.
1335
    obtained via MORECORE or MMAP.
1336
 
1336
 
1337
  Locking
1337
  Locking
1338
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1338
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1339
    around every public call using this mspace.
1339
    around every public call using this mspace.
1340
*/
1340
*/
1341
 
1341
 
1342
/* Bin types, widths and sizes */
1342
/* Bin types, widths and sizes */
1343
#define NSMALLBINS        (32U)
1343
#define NSMALLBINS        (32U)
1344
#define NTREEBINS         (32U)
1344
#define NTREEBINS         (32U)
1345
#define SMALLBIN_SHIFT    (3U)
1345
#define SMALLBIN_SHIFT    (3U)
1346
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1346
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1347
#define TREEBIN_SHIFT     (8U)
1347
#define TREEBIN_SHIFT     (8U)
1348
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1348
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1349
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1349
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1350
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1350
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1351
 
1351
 
1352
struct malloc_state {
1352
struct malloc_state {
1353
  binmap_t   smallmap;
1353
  binmap_t   smallmap;
1354
  binmap_t   treemap;
1354
  binmap_t   treemap;
1355
  size_t     dvsize;
1355
  size_t     dvsize;
1356
  size_t     topsize;
1356
  size_t     topsize;
1357
  char*      least_addr;
1357
  char*      least_addr;
1358
  mchunkptr  dv;
1358
  mchunkptr  dv;
1359
  mchunkptr  top;
1359
  mchunkptr  top;
1360
  size_t     trim_check;
1360
  size_t     trim_check;
1361
  size_t     magic;
1361
  size_t     magic;
1362
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1362
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1363
  tbinptr    treebins[NTREEBINS];
1363
  tbinptr    treebins[NTREEBINS];
1364
  size_t     footprint;
1364
  size_t     footprint;
1365
  size_t     max_footprint;
1365
  size_t     max_footprint;
1366
  flag_t     mflags;
1366
  flag_t     mflags;
1367
#if USE_LOCKS
1367
#if USE_LOCKS
1368
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1368
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1369
#endif /* USE_LOCKS */
1369
#endif /* USE_LOCKS */
1370
  msegment   seg;
1370
  msegment   seg;
1371
};
1371
};
1372
 
1372
 
1373
typedef struct malloc_state*    mstate;
1373
typedef struct malloc_state*    mstate;
1374
 
1374
 
1375
/* ------------- Global malloc_state and malloc_params ------------------- */
1375
/* ------------- Global malloc_state and malloc_params ------------------- */
1376
 
1376
 
1377
/*
1377
/*
1378
  malloc_params holds global properties, including those that can be
1378
  malloc_params holds global properties, including those that can be
1379
  dynamically set using mallopt. There is a single instance, mparams,
1379
  dynamically set using mallopt. There is a single instance, mparams,
1380
  initialized in init_mparams.
1380
  initialized in init_mparams.
1381
*/
1381
*/
1382
 
1382
 
1383
struct malloc_params {
1383
struct malloc_params {
1384
  size_t magic;
1384
  size_t magic;
1385
  size_t page_size;
1385
  size_t page_size;
1386
  size_t granularity;
1386
  size_t granularity;
1387
  size_t mmap_threshold;
1387
  size_t mmap_threshold;
1388
  size_t trim_threshold;
1388
  size_t trim_threshold;
1389
  flag_t default_mflags;
1389
  flag_t default_mflags;
1390
};
1390
};
1391
 
1391
 
1392
static struct malloc_params mparams;
1392
static struct malloc_params mparams;
1393
 
1393
 
1394
/* The global malloc_state used for all non-"mspace" calls */
1394
/* The global malloc_state used for all non-"mspace" calls */
1395
static struct malloc_state _gm_;
1395
static struct malloc_state _gm_;
1396
#define gm                 (&_gm_)
1396
#define gm                 (&_gm_)
1397
#define is_global(M)       ((M) == &_gm_)
1397
#define is_global(M)       ((M) == &_gm_)
1398
#define is_initialized(M)  ((M)->top != 0)
1398
#define is_initialized(M)  ((M)->top != 0)
1399
 
1399
 
1400
/* -------------------------- system alloc setup ------------------------- */
1400
/* -------------------------- system alloc setup ------------------------- */
1401
 
1401
 
1402
/* Operations on mflags */
1402
/* Operations on mflags */
1403
 
1403
 
1404
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1404
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1405
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1405
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1406
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1406
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1407
 
1407
 
1408
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1408
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1409
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1409
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1410
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1410
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1411
 
1411
 
1412
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1412
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1413
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1413
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1414
 
1414
 
1415
#define set_lock(M,L)\
1415
#define set_lock(M,L)\
1416
 ((M)->mflags = (L)?\
1416
 ((M)->mflags = (L)?\
1417
  ((M)->mflags | USE_LOCK_BIT) :\
1417
  ((M)->mflags | USE_LOCK_BIT) :\
1418
  ((M)->mflags & ~USE_LOCK_BIT))
1418
  ((M)->mflags & ~USE_LOCK_BIT))
1419
 
1419
 
1420
/* page-align a size */
1420
/* page-align a size */
1421
#define page_align(S)\
1421
#define page_align(S)\
1422
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1422
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1423
 
1423
 
1424
/* granularity-align a size */
1424
/* granularity-align a size */
1425
#define granularity_align(S)\
1425
#define granularity_align(S)\
1426
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1426
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1427
 
1427
 
1428
#define is_page_aligned(S)\
1428
#define is_page_aligned(S)\
1429
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1429
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1430
#define is_granularity_aligned(S)\
1430
#define is_granularity_aligned(S)\
1431
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
1431
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
1432
 
1432
 
1433
/*  True if segment S holds address A */
1433
/*  True if segment S holds address A */
1434
#define segment_holds(S, A)\
1434
#define segment_holds(S, A)\
1435
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1435
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1436
 
1436
 
1437
/* Return segment holding given address */
1437
/* Return segment holding given address */
1438
static msegmentptr segment_holding(mstate m, char* addr) {
1438
static msegmentptr segment_holding(mstate m, char* addr) {
1439
  msegmentptr sp = &m->seg;
1439
  msegmentptr sp = &m->seg;
1440
  for (;;) {
1440
  for (;;) {
1441
    if (addr >= sp->base && addr < sp->base + sp->size)
1441
    if (addr >= sp->base && addr < sp->base + sp->size)
1442
      return sp;
1442
      return sp;
1443
    if ((sp = sp->next) == 0)
1443
    if ((sp = sp->next) == 0)
1444
      return 0;
1444
      return 0;
1445
  }
1445
  }
1446
}
1446
}
1447
 
1447
 
1448
/* Return true if segment contains a segment link */
1448
/* Return true if segment contains a segment link */
1449
static int has_segment_link(mstate m, msegmentptr ss) {
1449
static int has_segment_link(mstate m, msegmentptr ss) {
1450
  msegmentptr sp = &m->seg;
1450
  msegmentptr sp = &m->seg;
1451
  for (;;) {
1451
  for (;;) {
1452
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1452
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1453
      return 1;
1453
      return 1;
1454
    if ((sp = sp->next) == 0)
1454
    if ((sp = sp->next) == 0)
1455
      return 0;
1455
      return 0;
1456
  }
1456
  }
1457
}
1457
}
1458
 
1458
 
1459
#ifndef MORECORE_CANNOT_TRIM
1459
#ifndef MORECORE_CANNOT_TRIM
1460
#define should_trim(M,s)  ((s) > (M)->trim_check)
1460
#define should_trim(M,s)  ((s) > (M)->trim_check)
1461
#else  /* MORECORE_CANNOT_TRIM */
1461
#else  /* MORECORE_CANNOT_TRIM */
1462
#define should_trim(M,s)  (0)
1462
#define should_trim(M,s)  (0)
1463
#endif /* MORECORE_CANNOT_TRIM */
1463
#endif /* MORECORE_CANNOT_TRIM */
1464
 
1464
 
1465
/*
1465
/*
1466
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1466
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1467
  that may be needed to place segment records and fenceposts when new
1467
  that may be needed to place segment records and fenceposts when new
1468
  noncontiguous segments are added.
1468
  noncontiguous segments are added.
1469
*/
1469
*/
1470
#define TOP_FOOT_SIZE\
1470
#define TOP_FOOT_SIZE\
1471
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1471
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1472
 
1472
 
1473
 
1473
 
1474
/* -------------------------------  Hooks -------------------------------- */
1474
/* -------------------------------  Hooks -------------------------------- */
1475
 
1475
 
1476
/*
1476
/*
1477
  PREACTION should be defined to return 0 on success, and nonzero on
1477
  PREACTION should be defined to return 0 on success, and nonzero on
1478
  failure. If you are not using locking, you can redefine these to do
1478
  failure. If you are not using locking, you can redefine these to do
1479
  anything you like.
1479
  anything you like.
1480
*/
1480
*/
1481
 
1481
 
1482
#if USE_LOCKS
1482
#if USE_LOCKS
1483
 
1483
 
1484
/* Ensure locks are initialized */
1484
/* Ensure locks are initialized */
1485
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1485
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1486
 
1486
 
1487
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1487
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1488
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1488
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1489
#else /* USE_LOCKS */
1489
#else /* USE_LOCKS */
1490
 
1490
 
1491
#ifndef PREACTION
1491
#ifndef PREACTION
1492
#define PREACTION(M) (0)
1492
#define PREACTION(M) (0)
1493
#endif  /* PREACTION */
1493
#endif  /* PREACTION */
1494
 
1494
 
1495
#ifndef POSTACTION
1495
#ifndef POSTACTION
1496
#define POSTACTION(M)
1496
#define POSTACTION(M)
1497
#endif  /* POSTACTION */
1497
#endif  /* POSTACTION */
1498
 
1498
 
1499
#endif /* USE_LOCKS */
1499
#endif /* USE_LOCKS */
1500
 
1500
 
1501
/*
1501
/*
1502
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1502
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1503
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1503
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1504
  reallocs. The argument p is an address that might have triggered the
1504
  reallocs. The argument p is an address that might have triggered the
1505
  fault. It is ignored by the two predefined actions, but might be
1505
  fault. It is ignored by the two predefined actions, but might be
1506
  useful in custom actions that try to help diagnose errors.
1506
  useful in custom actions that try to help diagnose errors.
1507
*/
1507
*/
1508
 
1508
 
1509
#if PROCEED_ON_ERROR
1509
#if PROCEED_ON_ERROR
1510
 
1510
 
1511
/* A count of the number of corruption errors causing resets */
1511
/* A count of the number of corruption errors causing resets */
1512
int malloc_corruption_error_count;
1512
int malloc_corruption_error_count;
1513
 
1513
 
1514
/* default corruption action */
1514
/* default corruption action */
1515
static void reset_on_error(mstate m);
1515
static void reset_on_error(mstate m);
1516
 
1516
 
1517
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1517
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1518
#define USAGE_ERROR_ACTION(m, p)
1518
#define USAGE_ERROR_ACTION(m, p)
1519
 
1519
 
1520
#else /* PROCEED_ON_ERROR */
1520
#else /* PROCEED_ON_ERROR */
1521
 
1521
 
1522
#ifndef CORRUPTION_ERROR_ACTION
1522
#ifndef CORRUPTION_ERROR_ACTION
1523
#define CORRUPTION_ERROR_ACTION(m) ABORT
1523
#define CORRUPTION_ERROR_ACTION(m) ABORT
1524
#endif /* CORRUPTION_ERROR_ACTION */
1524
#endif /* CORRUPTION_ERROR_ACTION */
1525
 
1525
 
1526
#ifndef USAGE_ERROR_ACTION
1526
#ifndef USAGE_ERROR_ACTION
1527
#define USAGE_ERROR_ACTION(m,p) ABORT
1527
#define USAGE_ERROR_ACTION(m,p) ABORT
1528
#endif /* USAGE_ERROR_ACTION */
1528
#endif /* USAGE_ERROR_ACTION */
1529
 
1529
 
1530
#endif /* PROCEED_ON_ERROR */
1530
#endif /* PROCEED_ON_ERROR */
1531
 
1531
 
1532
/* -------------------------- Debugging setup ---------------------------- */
1532
/* -------------------------- Debugging setup ---------------------------- */
1533
 
1533
 
1534
#if ! DEBUG
1534
#if ! DEBUG
1535
 
1535
 
1536
#define check_free_chunk(M,P)
1536
#define check_free_chunk(M,P)
1537
#define check_inuse_chunk(M,P)
1537
#define check_inuse_chunk(M,P)
1538
#define check_malloced_chunk(M,P,N)
1538
#define check_malloced_chunk(M,P,N)
1539
#define check_mmapped_chunk(M,P)
1539
#define check_mmapped_chunk(M,P)
1540
#define check_malloc_state(M)
1540
#define check_malloc_state(M)
1541
#define check_top_chunk(M,P)
1541
#define check_top_chunk(M,P)
1542
 
1542
 
1543
#else /* DEBUG */
1543
#else /* DEBUG */
1544
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1544
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1545
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1545
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1546
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1546
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1547
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1547
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1548
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1548
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1549
#define check_malloc_state(M)       do_check_malloc_state(M)
1549
#define check_malloc_state(M)       do_check_malloc_state(M)
1550
 
1550
 
1551
static void   do_check_any_chunk(mstate m, mchunkptr p);
1551
static void   do_check_any_chunk(mstate m, mchunkptr p);
1552
static void   do_check_top_chunk(mstate m, mchunkptr p);
1552
static void   do_check_top_chunk(mstate m, mchunkptr p);
1553
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1553
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1554
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1554
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1555
static void   do_check_free_chunk(mstate m, mchunkptr p);
1555
static void   do_check_free_chunk(mstate m, mchunkptr p);
1556
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1556
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1557
static void   do_check_tree(mstate m, tchunkptr t);
1557
static void   do_check_tree(mstate m, tchunkptr t);
1558
static void   do_check_treebin(mstate m, bindex_t i);
1558
static void   do_check_treebin(mstate m, bindex_t i);
1559
static void   do_check_smallbin(mstate m, bindex_t i);
1559
static void   do_check_smallbin(mstate m, bindex_t i);
1560
static void   do_check_malloc_state(mstate m);
1560
static void   do_check_malloc_state(mstate m);
1561
static int    bin_find(mstate m, mchunkptr x);
1561
static int    bin_find(mstate m, mchunkptr x);
1562
static size_t traverse_and_check(mstate m);
1562
static size_t traverse_and_check(mstate m);
1563
#endif /* DEBUG */
1563
#endif /* DEBUG */
1564
 
1564
 
1565
/* ---------------------------- Indexing Bins ---------------------------- */
1565
/* ---------------------------- Indexing Bins ---------------------------- */
1566
 
1566
 
1567
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1567
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1568
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1568
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1569
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1569
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1570
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1570
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1571
 
1571
 
1572
/* addressing by index. See above about smallbin repositioning */
1572
/* addressing by index. See above about smallbin repositioning */
1573
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1573
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1574
#define treebin_at(M,i)     (&((M)->treebins[i]))
1574
#define treebin_at(M,i)     (&((M)->treebins[i]))
1575
 
1575
 
1576
/* assign tree index for size S to variable I */
1576
/* assign tree index for size S to variable I */
1577
#if defined(__GNUC__) && defined(i386)
1577
#if defined(__GNUC__) && defined(i386)
1578
#define compute_tree_index(S, I)\
1578
#define compute_tree_index(S, I)\
1579
{\
1579
{\
1580
  size_t X = S >> TREEBIN_SHIFT;\
1580
  size_t X = S >> TREEBIN_SHIFT;\
1581
  if (X == 0)\
1581
  if (X == 0)\
1582
    I = 0;\
1582
    I = 0;\
1583
  else if (X > 0xFFFF)\
1583
  else if (X > 0xFFFF)\
1584
    I = NTREEBINS-1;\
1584
    I = NTREEBINS-1;\
1585
  else {\
1585
  else {\
1586
    unsigned int K;\
1586
    unsigned int K;\
1587
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1587
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1588
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1588
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1589
  }\
1589
  }\
1590
}
1590
}
1591
#else /* GNUC */
1591
#else /* GNUC */
1592
#define compute_tree_index(S, I)\
1592
#define compute_tree_index(S, I)\
1593
{\
1593
{\
1594
  size_t X = S >> TREEBIN_SHIFT;\
1594
  size_t X = S >> TREEBIN_SHIFT;\
1595
  if (X == 0)\
1595
  if (X == 0)\
1596
    I = 0;\
1596
    I = 0;\
1597
  else if (X > 0xFFFF)\
1597
  else if (X > 0xFFFF)\
1598
    I = NTREEBINS-1;\
1598
    I = NTREEBINS-1;\
1599
  else {\
1599
  else {\
1600
    unsigned int Y = (unsigned int)X;\
1600
    unsigned int Y = (unsigned int)X;\
1601
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1601
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1602
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1602
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1603
    N += K;\
1603
    N += K;\
1604
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1604
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1605
    K = 14 - N + ((Y <<= K) >> 15);\
1605
    K = 14 - N + ((Y <<= K) >> 15);\
1606
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1606
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1607
  }\
1607
  }\
1608
}
1608
}
1609
#endif /* GNUC */
1609
#endif /* GNUC */
1610
 
1610
 
1611
/* Bit representing maximum resolved size in a treebin at i */
1611
/* Bit representing maximum resolved size in a treebin at i */
1612
#define bit_for_tree_index(i) \
1612
#define bit_for_tree_index(i) \
1613
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1613
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1614
 
1614
 
1615
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1615
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1616
#define leftshift_for_tree_index(i) \
1616
#define leftshift_for_tree_index(i) \
1617
   ((i == NTREEBINS-1)? 0 : \
1617
   ((i == NTREEBINS-1)? 0 : \
1618
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1618
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1619
 
1619
 
1620
/* The size of the smallest chunk held in bin with index i */
1620
/* The size of the smallest chunk held in bin with index i */
1621
#define minsize_for_tree_index(i) \
1621
#define minsize_for_tree_index(i) \
1622
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1622
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1623
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1623
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1624
 
1624
 
1625
 
1625
 
1626
/* ------------------------ Operations on bin maps ----------------------- */
1626
/* ------------------------ Operations on bin maps ----------------------- */
1627
 
1627
 
1628
/* bit corresponding to given index */
1628
/* bit corresponding to given index */
1629
#define idx2bit(i)              ((binmap_t)(1) << (i))
1629
#define idx2bit(i)              ((binmap_t)(1) << (i))
1630
 
1630
 
1631
/* Mark/Clear bits with given index */
1631
/* Mark/Clear bits with given index */
1632
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1632
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1633
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1633
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1634
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1634
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1635
 
1635
 
1636
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1636
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1637
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1637
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1638
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1638
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1639
 
1639
 
1640
/* index corresponding to given bit */
1640
/* index corresponding to given bit */
1641
 
1641
 
1642
#if defined(__GNUC__) && defined(i386)
1642
#if defined(__GNUC__) && defined(i386)
1643
#define compute_bit2idx(X, I)\
1643
#define compute_bit2idx(X, I)\
1644
{\
1644
{\
1645
  unsigned int J;\
1645
  unsigned int J;\
1646
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1646
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1647
  I = (bindex_t)J;\
1647
  I = (bindex_t)J;\
1648
}
1648
}
1649
 
1649
 
1650
#else /* GNUC */
1650
#else /* GNUC */
1651
#if  USE_BUILTIN_FFS
1651
#if  USE_BUILTIN_FFS
1652
#define compute_bit2idx(X, I) I = ffs(X)-1
1652
#define compute_bit2idx(X, I) I = ffs(X)-1
1653
 
1653
 
1654
#else /* USE_BUILTIN_FFS */
1654
#else /* USE_BUILTIN_FFS */
1655
#define compute_bit2idx(X, I)\
1655
#define compute_bit2idx(X, I)\
1656
{\
1656
{\
1657
  unsigned int Y = X - 1;\
1657
  unsigned int Y = X - 1;\
1658
  unsigned int K = Y >> (16-4) & 16;\
1658
  unsigned int K = Y >> (16-4) & 16;\
1659
  unsigned int N = K;        Y >>= K;\
1659
  unsigned int N = K;        Y >>= K;\
1660
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1660
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1661
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1661
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1662
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1662
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1663
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1663
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1664
  I = (bindex_t)(N + Y);\
1664
  I = (bindex_t)(N + Y);\
1665
}
1665
}
1666
#endif /* USE_BUILTIN_FFS */
1666
#endif /* USE_BUILTIN_FFS */
1667
#endif /* GNUC */
1667
#endif /* GNUC */
1668
 
1668
 
1669
/* isolate the least set bit of a bitmap */
1669
/* isolate the least set bit of a bitmap */
1670
#define least_bit(x)         ((x) & -(x))
1670
#define least_bit(x)         ((x) & -(x))
1671
 
1671
 
1672
/* mask with all bits to left of least bit of x on */
1672
/* mask with all bits to left of least bit of x on */
1673
#define left_bits(x)         ((x<<1) | -(x<<1))
1673
#define left_bits(x)         ((x<<1) | -(x<<1))
1674
 
1674
 
1675
/* mask with all bits to left of or equal to least bit of x on */
1675
/* mask with all bits to left of or equal to least bit of x on */
1676
#define same_or_left_bits(x) ((x) | -(x))
1676
#define same_or_left_bits(x) ((x) | -(x))
1677
 
1677
 
1678
 
1678
 
1679
/* ----------------------- Runtime Check Support ------------------------- */
1679
/* ----------------------- Runtime Check Support ------------------------- */
1680
 
1680
 
1681
/*
1681
/*
1682
  For security, the main invariant is that malloc/free/etc never
1682
  For security, the main invariant is that malloc/free/etc never
1683
  writes to a static address other than malloc_state, unless static
1683
  writes to a static address other than malloc_state, unless static
1684
  malloc_state itself has been corrupted, which cannot occur via
1684
  malloc_state itself has been corrupted, which cannot occur via
1685
  malloc (because of these checks). In essence this means that we
1685
  malloc (because of these checks). In essence this means that we
1686
  believe all pointers, sizes, maps etc held in malloc_state, but
1686
  believe all pointers, sizes, maps etc held in malloc_state, but
1687
  check all of those linked or offsetted from other embedded data
1687
  check all of those linked or offsetted from other embedded data
1688
  structures.  These checks are interspersed with main code in a way
1688
  structures.  These checks are interspersed with main code in a way
1689
  that tends to minimize their run-time cost.
1689
  that tends to minimize their run-time cost.
1690
 
1690
 
1691
  When FOOTERS is defined, in addition to range checking, we also
1691
  When FOOTERS is defined, in addition to range checking, we also
1692
  verify footer fields of inuse chunks, which can be used guarantee
1692
  verify footer fields of inuse chunks, which can be used guarantee
1693
  that the mstate controlling malloc/free is intact.  This is a
1693
  that the mstate controlling malloc/free is intact.  This is a
1694
  streamlined version of the approach described by William Robertson
1694
  streamlined version of the approach described by William Robertson
1695
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1695
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1696
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1696
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1697
  of an inuse chunk holds the xor of its mstate and a random seed,
1697
  of an inuse chunk holds the xor of its mstate and a random seed,
1698
  that is checked upon calls to free() and realloc().  This is
1698
  that is checked upon calls to free() and realloc().  This is
1699
  (probablistically) unguessable from outside the program, but can be
1699
  (probablistically) unguessable from outside the program, but can be
1700
  computed by any code successfully malloc'ing any chunk, so does not
1700
  computed by any code successfully malloc'ing any chunk, so does not
1701
  itself provide protection against code that has already broken
1701
  itself provide protection against code that has already broken
1702
  security through some other means.  Unlike Robertson et al, we
1702
  security through some other means.  Unlike Robertson et al, we
1703
  always dynamically check addresses of all offset chunks (previous,
1703
  always dynamically check addresses of all offset chunks (previous,
1704
  next, etc). This turns out to be cheaper than relying on hashes.
1704
  next, etc). This turns out to be cheaper than relying on hashes.
1705
*/
1705
*/
1706
 
1706
 
1707
#if !INSECURE
1707
#if !INSECURE
1708
/* Check if address a is at least as high as any from MORECORE or MMAP */
1708
/* Check if address a is at least as high as any from MORECORE or MMAP */
1709
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1709
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1710
/* Check if address of next chunk n is higher than base chunk p */
1710
/* Check if address of next chunk n is higher than base chunk p */
1711
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1711
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1712
/* Check if p has its cinuse bit on */
1712
/* Check if p has its cinuse bit on */
1713
#define ok_cinuse(p)     cinuse(p)
1713
#define ok_cinuse(p)     cinuse(p)
1714
/* Check if p has its pinuse bit on */
1714
/* Check if p has its pinuse bit on */
1715
#define ok_pinuse(p)     pinuse(p)
1715
#define ok_pinuse(p)     pinuse(p)
1716
 
1716
 
1717
#else /* !INSECURE */
1717
#else /* !INSECURE */
1718
#define ok_address(M, a) (1)
1718
#define ok_address(M, a) (1)
1719
#define ok_next(b, n)    (1)
1719
#define ok_next(b, n)    (1)
1720
#define ok_cinuse(p)     (1)
1720
#define ok_cinuse(p)     (1)
1721
#define ok_pinuse(p)     (1)
1721
#define ok_pinuse(p)     (1)
1722
#endif /* !INSECURE */
1722
#endif /* !INSECURE */
1723
 
1723
 
1724
#if (FOOTERS && !INSECURE)
1724
#if (FOOTERS && !INSECURE)
1725
/* Check if (alleged) mstate m has expected magic field */
1725
/* Check if (alleged) mstate m has expected magic field */
1726
#define ok_magic(M)      ((M)->magic == mparams.magic)
1726
#define ok_magic(M)      ((M)->magic == mparams.magic)
1727
#else  /* (FOOTERS && !INSECURE) */
1727
#else  /* (FOOTERS && !INSECURE) */
1728
#define ok_magic(M)      (1)
1728
#define ok_magic(M)      (1)
1729
#endif /* (FOOTERS && !INSECURE) */
1729
#endif /* (FOOTERS && !INSECURE) */
1730
 
1730
 
1731
 
1731
 
1732
/* In gcc, use __builtin_expect to minimize impact of checks */
1732
/* In gcc, use __builtin_expect to minimize impact of checks */
1733
#if !INSECURE
1733
#if !INSECURE
1734
#if defined(__GNUC__) && __GNUC__ >= 3
1734
#if defined(__GNUC__) && __GNUC__ >= 3
1735
#define RTCHECK(e)  __builtin_expect(e, 1)
1735
#define RTCHECK(e)  __builtin_expect(e, 1)
1736
#else /* GNUC */
1736
#else /* GNUC */
1737
#define RTCHECK(e)  (e)
1737
#define RTCHECK(e)  (e)
1738
#endif /* GNUC */
1738
#endif /* GNUC */
1739
#else /* !INSECURE */
1739
#else /* !INSECURE */
1740
#define RTCHECK(e)  (1)
1740
#define RTCHECK(e)  (1)
1741
#endif /* !INSECURE */
1741
#endif /* !INSECURE */
1742
 
1742
 
1743
/* macros to set up inuse chunks with or without footers */
1743
/* macros to set up inuse chunks with or without footers */
1744
 
1744
 
1745
#if !FOOTERS
1745
#if !FOOTERS
1746
 
1746
 
1747
#define mark_inuse_foot(M,p,s)
1747
#define mark_inuse_foot(M,p,s)
1748
 
1748
 
1749
/* Set cinuse bit and pinuse bit of next chunk */
1749
/* Set cinuse bit and pinuse bit of next chunk */
1750
#define set_inuse(M,p,s)\
1750
#define set_inuse(M,p,s)\
1751
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1751
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1752
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1752
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1753
 
1753
 
1754
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
1754
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
1755
#define set_inuse_and_pinuse(M,p,s)\
1755
#define set_inuse_and_pinuse(M,p,s)\
1756
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1756
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1757
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1757
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
1758
 
1758
 
1759
/* Set size, cinuse and pinuse bit of this chunk */
1759
/* Set size, cinuse and pinuse bit of this chunk */
1760
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1760
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1761
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
1761
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
1762
 
1762
 
1763
#else /* FOOTERS */
1763
#else /* FOOTERS */
1764
 
1764
 
1765
/* Set foot of inuse chunk to be xor of mstate and seed */
1765
/* Set foot of inuse chunk to be xor of mstate and seed */
1766
#define mark_inuse_foot(M,p,s)\
1766
#define mark_inuse_foot(M,p,s)\
1767
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
1767
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
1768
 
1768
 
1769
#define get_mstate_for(p)\
1769
#define get_mstate_for(p)\
1770
  ((mstate)(((mchunkptr)((char*)(p) +\
1770
  ((mstate)(((mchunkptr)((char*)(p) +\
1771
    (chunksize(p))))->prev_foot ^ mparams.magic))
1771
    (chunksize(p))))->prev_foot ^ mparams.magic))
1772
 
1772
 
1773
#define set_inuse(M,p,s)\
1773
#define set_inuse(M,p,s)\
1774
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1774
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
1775
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
1775
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
1776
  mark_inuse_foot(M,p,s))
1776
  mark_inuse_foot(M,p,s))
1777
 
1777
 
1778
#define set_inuse_and_pinuse(M,p,s)\
1778
#define set_inuse_and_pinuse(M,p,s)\
1779
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1779
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1780
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
1780
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
1781
 mark_inuse_foot(M,p,s))
1781
 mark_inuse_foot(M,p,s))
1782
 
1782
 
1783
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1783
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
1784
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1784
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
1785
  mark_inuse_foot(M, p, s))
1785
  mark_inuse_foot(M, p, s))
1786
 
1786
 
1787
#endif /* !FOOTERS */
1787
#endif /* !FOOTERS */
1788
 
1788
 
1789
/* ---------------------------- setting mparams -------------------------- */
1789
/* ---------------------------- setting mparams -------------------------- */
1790
 
1790
 
1791
/* Initialize mparams */
1791
/* Initialize mparams */
1792
static int init_mparams(void) {
1792
static int init_mparams(void) {
1793
  if (mparams.page_size == 0) {
1793
  if (mparams.page_size == 0) {
1794
    size_t s;
1794
    size_t s;
1795
 
1795
 
1796
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1796
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1797
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1797
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1798
#if MORECORE_CONTIGUOUS
1798
#if MORECORE_CONTIGUOUS
1799
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1799
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1800
#else  /* MORECORE_CONTIGUOUS */
1800
#else  /* MORECORE_CONTIGUOUS */
1801
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1801
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1802
#endif /* MORECORE_CONTIGUOUS */
1802
#endif /* MORECORE_CONTIGUOUS */
1803
 
1803
 
1804
#if (FOOTERS && !INSECURE)
1804
#if (FOOTERS && !INSECURE)
1805
    {
1805
    {
1806
#if USE_DEV_RANDOM
1806
#if USE_DEV_RANDOM
1807
      int fd;
1807
      int fd;
1808
      unsigned char buf[sizeof(size_t)];
1808
      unsigned char buf[sizeof(size_t)];
1809
      /* Try to use /dev/urandom, else fall back on using time */
1809
      /* Try to use /dev/urandom, else fall back on using time */
1810
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1810
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1811
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1811
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1812
        s = *((size_t *) buf);
1812
        s = *((size_t *) buf);
1813
        close(fd);
1813
        close(fd);
1814
      }
1814
      }
1815
      else
1815
      else
1816
#endif /* USE_DEV_RANDOM */
1816
#endif /* USE_DEV_RANDOM */
1817
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1817
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1818
 
1818
 
1819
      s |= (size_t)8U;    /* ensure nonzero */
1819
      s |= (size_t)8U;    /* ensure nonzero */
1820
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1820
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1821
 
1821
 
1822
    }
1822
    }
1823
#else /* (FOOTERS && !INSECURE) */
1823
#else /* (FOOTERS && !INSECURE) */
1824
    s = (size_t)0x58585858U;
1824
    s = (size_t)0x58585858U;
1825
#endif /* (FOOTERS && !INSECURE) */
1825
#endif /* (FOOTERS && !INSECURE) */
1826
    ACQUIRE_MAGIC_INIT_LOCK();
1826
    ACQUIRE_MAGIC_INIT_LOCK();
1827
    if (mparams.magic == 0) {
1827
    if (mparams.magic == 0) {
1828
      mparams.magic = s;
1828
      mparams.magic = s;
1829
      /* Set up lock for main malloc area */
1829
      /* Set up lock for main malloc area */
1830
      INITIAL_LOCK(&gm->mutex);
1830
      INITIAL_LOCK(&gm->mutex);
1831
      gm->mflags = mparams.default_mflags;
1831
      gm->mflags = mparams.default_mflags;
1832
    }
1832
    }
1833
    RELEASE_MAGIC_INIT_LOCK();
1833
    RELEASE_MAGIC_INIT_LOCK();
1834
 
1834
 
1835
#ifndef WIN32
1835
#ifndef WIN32
1836
    mparams.page_size = malloc_getpagesize;
1836
    mparams.page_size = malloc_getpagesize;
1837
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1837
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1838
                           DEFAULT_GRANULARITY : mparams.page_size);
1838
                           DEFAULT_GRANULARITY : mparams.page_size);
1839
#else /* WIN32 */
1839
#else /* WIN32 */
1840
    {
1840
    {
1841
      SYSTEM_INFO system_info;
1841
      SYSTEM_INFO system_info;
1842
      GetSystemInfo(&system_info);
1842
      GetSystemInfo(&system_info);
1843
      mparams.page_size = system_info.dwPageSize;
1843
      mparams.page_size = system_info.dwPageSize;
1844
      mparams.granularity = system_info.dwAllocationGranularity;
1844
      mparams.granularity = system_info.dwAllocationGranularity;
1845
    }
1845
    }
1846
#endif /* WIN32 */
1846
#endif /* WIN32 */
1847
 
1847
 
1848
    /* Sanity-check configuration:
1848
    /* Sanity-check configuration:
1849
       size_t must be unsigned and as wide as pointer type.
1849
       size_t must be unsigned and as wide as pointer type.
1850
       ints must be at least 4 bytes.
1850
       ints must be at least 4 bytes.
1851
       alignment must be at least 8.
1851
       alignment must be at least 8.
1852
       Alignment, min chunk size, and page size must all be powers of 2.
1852
       Alignment, min chunk size, and page size must all be powers of 2.
1853
    */
1853
    */
1854
    if ((sizeof(size_t) != sizeof(char*)) ||
1854
    if ((sizeof(size_t) != sizeof(char*)) ||
1855
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1855
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1856
        (sizeof(int) < 4)  ||
1856
        (sizeof(int) < 4)  ||
1857
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1857
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1858
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1858
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1859
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1859
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1860
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1860
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1861
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1861
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1862
      ABORT;
1862
      ABORT;
1863
  }
1863
  }
1864
  return 0;
1864
  return 0;
1865
}
1865
}
1866
 
1866
 
1867
/* support for mallopt */
1867
/* support for mallopt */
1868
static int change_mparam(int param_number, int value) {
1868
static int change_mparam(int param_number, int value) {
1869
  size_t val = (size_t)value;
1869
  size_t val = (size_t)value;
1870
  init_mparams();
1870
  init_mparams();
1871
  switch(param_number) {
1871
  switch(param_number) {
1872
  case M_TRIM_THRESHOLD:
1872
  case M_TRIM_THRESHOLD:
1873
    mparams.trim_threshold = val;
1873
    mparams.trim_threshold = val;
1874
    return 1;
1874
    return 1;
1875
  case M_GRANULARITY:
1875
  case M_GRANULARITY:
1876
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
1876
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
1877
      mparams.granularity = val;
1877
      mparams.granularity = val;
1878
      return 1;
1878
      return 1;
1879
    }
1879
    }
1880
    else
1880
    else
1881
      return 0;
1881
      return 0;
1882
  case M_MMAP_THRESHOLD:
1882
  case M_MMAP_THRESHOLD:
1883
    mparams.mmap_threshold = val;
1883
    mparams.mmap_threshold = val;
1884
    return 1;
1884
    return 1;
1885
  default:
1885
  default:
1886
    return 0;
1886
    return 0;
1887
  }
1887
  }
1888
}
1888
}
1889
 
1889
 
1890
#if DEBUG
1890
#if DEBUG
1891
/* ------------------------- Debugging Support --------------------------- */
1891
/* ------------------------- Debugging Support --------------------------- */
1892
 
1892
 
1893
/* Check properties of any chunk, whether free, inuse, mmapped etc  */
1893
/* Check properties of any chunk, whether free, inuse, mmapped etc  */
1894
static void do_check_any_chunk(mstate m, mchunkptr p) {
1894
static void do_check_any_chunk(mstate m, mchunkptr p) {
1895
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1895
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1896
  assert(ok_address(m, p));
1896
  assert(ok_address(m, p));
1897
}
1897
}
1898
 
1898
 
1899
/* Check properties of top chunk */
1899
/* Check properties of top chunk */
1900
static void do_check_top_chunk(mstate m, mchunkptr p) {
1900
static void do_check_top_chunk(mstate m, mchunkptr p) {
1901
  msegmentptr sp = segment_holding(m, (char*)p);
1901
  msegmentptr sp = segment_holding(m, (char*)p);
1902
  size_t  sz = chunksize(p);
1902
  size_t  sz = chunksize(p);
1903
  assert(sp != 0);
1903
  assert(sp != 0);
1904
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1904
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1905
  assert(ok_address(m, p));
1905
  assert(ok_address(m, p));
1906
  assert(sz == m->topsize);
1906
  assert(sz == m->topsize);
1907
  assert(sz > 0);
1907
  assert(sz > 0);
1908
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
1908
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
1909
  assert(pinuse(p));
1909
  assert(pinuse(p));
1910
  assert(!next_pinuse(p));
1910
  assert(!next_pinuse(p));
1911
}
1911
}
1912
 
1912
 
1913
/* Check properties of (inuse) mmapped chunks */
1913
/* Check properties of (inuse) mmapped chunks */
1914
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
1914
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
1915
  size_t  sz = chunksize(p);
1915
  size_t  sz = chunksize(p);
1916
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
1916
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
1917
  assert(is_mmapped(p));
1917
  assert(is_mmapped(p));
1918
  assert(use_mmap(m));
1918
  assert(use_mmap(m));
1919
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1919
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
1920
  assert(ok_address(m, p));
1920
  assert(ok_address(m, p));
1921
  assert(!is_small(sz));
1921
  assert(!is_small(sz));
1922
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
1922
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
1923
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
1923
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
1924
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
1924
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
1925
}
1925
}
1926
 
1926
 
1927
/* Check properties of inuse chunks */
1927
/* Check properties of inuse chunks */
1928
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
1928
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
1929
  do_check_any_chunk(m, p);
1929
  do_check_any_chunk(m, p);
1930
  assert(cinuse(p));
1930
  assert(cinuse(p));
1931
  assert(next_pinuse(p));
1931
  assert(next_pinuse(p));
1932
  /* If not pinuse and not mmapped, previous chunk has OK offset */
1932
  /* If not pinuse and not mmapped, previous chunk has OK offset */
1933
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
1933
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
1934
  if (is_mmapped(p))
1934
  if (is_mmapped(p))
1935
    do_check_mmapped_chunk(m, p);
1935
    do_check_mmapped_chunk(m, p);
1936
}
1936
}
1937
 
1937
 
1938
/* Check properties of free chunks */
1938
/* Check properties of free chunks */
1939
static void do_check_free_chunk(mstate m, mchunkptr p) {
1939
static void do_check_free_chunk(mstate m, mchunkptr p) {
1940
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1940
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1941
  mchunkptr next = chunk_plus_offset(p, sz);
1941
  mchunkptr next = chunk_plus_offset(p, sz);
1942
  do_check_any_chunk(m, p);
1942
  do_check_any_chunk(m, p);
1943
  assert(!cinuse(p));
1943
  assert(!cinuse(p));
1944
  assert(!next_pinuse(p));
1944
  assert(!next_pinuse(p));
1945
  assert (!is_mmapped(p));
1945
  assert (!is_mmapped(p));
1946
  if (p != m->dv && p != m->top) {
1946
  if (p != m->dv && p != m->top) {
1947
    if (sz >= MIN_CHUNK_SIZE) {
1947
    if (sz >= MIN_CHUNK_SIZE) {
1948
      assert((sz & CHUNK_ALIGN_MASK) == 0);
1948
      assert((sz & CHUNK_ALIGN_MASK) == 0);
1949
      assert(is_aligned(chunk2mem(p)));
1949
      assert(is_aligned(chunk2mem(p)));
1950
      assert(next->prev_foot == sz);
1950
      assert(next->prev_foot == sz);
1951
      assert(pinuse(p));
1951
      assert(pinuse(p));
1952
      assert (next == m->top || cinuse(next));
1952
      assert (next == m->top || cinuse(next));
1953
      assert(p->fd->bk == p);
1953
      assert(p->fd->bk == p);
1954
      assert(p->bk->fd == p);
1954
      assert(p->bk->fd == p);
1955
    }
1955
    }
1956
    else  /* markers are always of size SIZE_T_SIZE */
1956
    else  /* markers are always of size SIZE_T_SIZE */
1957
      assert(sz == SIZE_T_SIZE);
1957
      assert(sz == SIZE_T_SIZE);
1958
  }
1958
  }
1959
}
1959
}
1960
 
1960
 
1961
/* Check properties of malloced chunks at the point they are malloced */
1961
/* Check properties of malloced chunks at the point they are malloced */
1962
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
1962
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
1963
  if (mem != 0) {
1963
  if (mem != 0) {
1964
    mchunkptr p = mem2chunk(mem);
1964
    mchunkptr p = mem2chunk(mem);
1965
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1965
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
1966
    do_check_inuse_chunk(m, p);
1966
    do_check_inuse_chunk(m, p);
1967
    assert((sz & CHUNK_ALIGN_MASK) == 0);
1967
    assert((sz & CHUNK_ALIGN_MASK) == 0);
1968
    assert(sz >= MIN_CHUNK_SIZE);
1968
    assert(sz >= MIN_CHUNK_SIZE);
1969
    assert(sz >= s);
1969
    assert(sz >= s);
1970
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
1970
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
1971
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
1971
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
1972
  }
1972
  }
1973
}
1973
}
1974
 
1974
 
1975
/* Check a tree and its subtrees.  */
1975
/* Check a tree and its subtrees.  */
1976
static void do_check_tree(mstate m, tchunkptr t) {
1976
static void do_check_tree(mstate m, tchunkptr t) {
1977
  tchunkptr head = 0;
1977
  tchunkptr head = 0;
1978
  tchunkptr u = t;
1978
  tchunkptr u = t;
1979
  bindex_t tindex = t->index;
1979
  bindex_t tindex = t->index;
1980
  size_t tsize = chunksize(t);
1980
  size_t tsize = chunksize(t);
1981
  bindex_t idx;
1981
  bindex_t idx;
1982
  compute_tree_index(tsize, idx);
1982
  compute_tree_index(tsize, idx);
1983
  assert(tindex == idx);
1983
  assert(tindex == idx);
1984
  assert(tsize >= MIN_LARGE_SIZE);
1984
  assert(tsize >= MIN_LARGE_SIZE);
1985
  assert(tsize >= minsize_for_tree_index(idx));
1985
  assert(tsize >= minsize_for_tree_index(idx));
1986
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
1986
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
1987
 
1987
 
1988
  do { /* traverse through chain of same-sized nodes */
1988
  do { /* traverse through chain of same-sized nodes */
1989
    do_check_any_chunk(m, ((mchunkptr)u));
1989
    do_check_any_chunk(m, ((mchunkptr)u));
1990
    assert(u->index == tindex);
1990
    assert(u->index == tindex);
1991
    assert(chunksize(u) == tsize);
1991
    assert(chunksize(u) == tsize);
1992
    assert(!cinuse(u));
1992
    assert(!cinuse(u));
1993
    assert(!next_pinuse(u));
1993
    assert(!next_pinuse(u));
1994
    assert(u->fd->bk == u);
1994
    assert(u->fd->bk == u);
1995
    assert(u->bk->fd == u);
1995
    assert(u->bk->fd == u);
1996
    if (u->parent == 0) {
1996
    if (u->parent == 0) {
1997
      assert(u->child[0] == 0);
1997
      assert(u->child[0] == 0);
1998
      assert(u->child[1] == 0);
1998
      assert(u->child[1] == 0);
1999
    }
1999
    }
2000
    else {
2000
    else {
2001
      assert(head == 0); /* only one node on chain has parent */
2001
      assert(head == 0); /* only one node on chain has parent */
2002
      head = u;
2002
      head = u;
2003
      assert(u->parent != u);
2003
      assert(u->parent != u);
2004
      assert (u->parent->child[0] == u ||
2004
      assert (u->parent->child[0] == u ||
2005
              u->parent->child[1] == u ||
2005
              u->parent->child[1] == u ||
2006
              *((tbinptr*)(u->parent)) == u);
2006
              *((tbinptr*)(u->parent)) == u);
2007
      if (u->child[0] != 0) {
2007
      if (u->child[0] != 0) {
2008
        assert(u->child[0]->parent == u);
2008
        assert(u->child[0]->parent == u);
2009
        assert(u->child[0] != u);
2009
        assert(u->child[0] != u);
2010
        do_check_tree(m, u->child[0]);
2010
        do_check_tree(m, u->child[0]);
2011
      }
2011
      }
2012
      if (u->child[1] != 0) {
2012
      if (u->child[1] != 0) {
2013
        assert(u->child[1]->parent == u);
2013
        assert(u->child[1]->parent == u);
2014
        assert(u->child[1] != u);
2014
        assert(u->child[1] != u);
2015
        do_check_tree(m, u->child[1]);
2015
        do_check_tree(m, u->child[1]);
2016
      }
2016
      }
2017
      if (u->child[0] != 0 && u->child[1] != 0) {
2017
      if (u->child[0] != 0 && u->child[1] != 0) {
2018
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2018
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2019
      }
2019
      }
2020
    }
2020
    }
2021
    u = u->fd;
2021
    u = u->fd;
2022
  } while (u != t);
2022
  } while (u != t);
2023
  assert(head != 0);
2023
  assert(head != 0);
2024
}
2024
}
2025
 
2025
 
2026
/*  Check all the chunks in a treebin.  */
2026
/*  Check all the chunks in a treebin.  */
2027
static void do_check_treebin(mstate m, bindex_t i) {
2027
static void do_check_treebin(mstate m, bindex_t i) {
2028
  tbinptr* tb = treebin_at(m, i);
2028
  tbinptr* tb = treebin_at(m, i);
2029
  tchunkptr t = *tb;
2029
  tchunkptr t = *tb;
2030
  int empty = (m->treemap & (1U << i)) == 0;
2030
  int empty = (m->treemap & (1U << i)) == 0;
2031
  if (t == 0)
2031
  if (t == 0)
2032
    assert(empty);
2032
    assert(empty);
2033
  if (!empty)
2033
  if (!empty)
2034
    do_check_tree(m, t);
2034
    do_check_tree(m, t);
2035
}
2035
}
2036
 
2036
 
2037
/*  Check all the chunks in a smallbin.  */
2037
/*  Check all the chunks in a smallbin.  */
2038
static void do_check_smallbin(mstate m, bindex_t i) {
2038
static void do_check_smallbin(mstate m, bindex_t i) {
2039
  sbinptr b = smallbin_at(m, i);
2039
  sbinptr b = smallbin_at(m, i);
2040
  mchunkptr p = b->bk;
2040
  mchunkptr p = b->bk;
2041
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2041
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
2042
  if (p == b)
2042
  if (p == b)
2043
    assert(empty);
2043
    assert(empty);
2044
  if (!empty) {
2044
  if (!empty) {
2045
    for (; p != b; p = p->bk) {
2045
    for (; p != b; p = p->bk) {
2046
      size_t size = chunksize(p);
2046
      size_t size = chunksize(p);
2047
      mchunkptr q;
2047
      mchunkptr q;
2048
      /* each chunk claims to be free */
2048
      /* each chunk claims to be free */
2049
      do_check_free_chunk(m, p);
2049
      do_check_free_chunk(m, p);
2050
      /* chunk belongs in bin */
2050
      /* chunk belongs in bin */
2051
      assert(small_index(size) == i);
2051
      assert(small_index(size) == i);
2052
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2052
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2053
      /* chunk is followed by an inuse chunk */
2053
      /* chunk is followed by an inuse chunk */
2054
      q = next_chunk(p);
2054
      q = next_chunk(p);
2055
      if (q->head != FENCEPOST_HEAD)
2055
      if (q->head != FENCEPOST_HEAD)
2056
        do_check_inuse_chunk(m, q);
2056
        do_check_inuse_chunk(m, q);
2057
    }
2057
    }
2058
  }
2058
  }
2059
}
2059
}
2060
 
2060
 
2061
/* Find x in a bin. Used in other check functions. */
2061
/* Find x in a bin. Used in other check functions. */
2062
static int bin_find(mstate m, mchunkptr x) {
2062
static int bin_find(mstate m, mchunkptr x) {
2063
  size_t size = chunksize(x);
2063
  size_t size = chunksize(x);
2064
  if (is_small(size)) {
2064
  if (is_small(size)) {
2065
    bindex_t sidx = small_index(size);
2065
    bindex_t sidx = small_index(size);
2066
    sbinptr b = smallbin_at(m, sidx);
2066
    sbinptr b = smallbin_at(m, sidx);
2067
    if (smallmap_is_marked(m, sidx)) {
2067
    if (smallmap_is_marked(m, sidx)) {
2068
      mchunkptr p = b;
2068
      mchunkptr p = b;
2069
      do {
2069
      do {
2070
        if (p == x)
2070
        if (p == x)
2071
          return 1;
2071
          return 1;
2072
      } while ((p = p->fd) != b);
2072
      } while ((p = p->fd) != b);
2073
    }
2073
    }
2074
  }
2074
  }
2075
  else {
2075
  else {
2076
    bindex_t tidx;
2076
    bindex_t tidx;
2077
    compute_tree_index(size, tidx);
2077
    compute_tree_index(size, tidx);
2078
    if (treemap_is_marked(m, tidx)) {
2078
    if (treemap_is_marked(m, tidx)) {
2079
      tchunkptr t = *treebin_at(m, tidx);
2079
      tchunkptr t = *treebin_at(m, tidx);
2080
      size_t sizebits = size << leftshift_for_tree_index(tidx);
2080
      size_t sizebits = size << leftshift_for_tree_index(tidx);
2081
      while (t != 0 && chunksize(t) != size) {
2081
      while (t != 0 && chunksize(t) != size) {
2082
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2082
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2083
        sizebits <<= 1;
2083
        sizebits <<= 1;
2084
      }
2084
      }
2085
      if (t != 0) {
2085
      if (t != 0) {
2086
        tchunkptr u = t;
2086
        tchunkptr u = t;
2087
        do {
2087
        do {
2088
          if (u == (tchunkptr)x)
2088
          if (u == (tchunkptr)x)
2089
            return 1;
2089
            return 1;
2090
        } while ((u = u->fd) != t);
2090
        } while ((u = u->fd) != t);
2091
      }
2091
      }
2092
    }
2092
    }
2093
  }
2093
  }
2094
  return 0;
2094
  return 0;
2095
}
2095
}
2096
 
2096
 
2097
/* Traverse each chunk and check it; return total */
2097
/* Traverse each chunk and check it; return total */
2098
static size_t traverse_and_check(mstate m) {
2098
static size_t traverse_and_check(mstate m) {
2099
  size_t sum = 0;
2099
  size_t sum = 0;
2100
  if (is_initialized(m)) {
2100
  if (is_initialized(m)) {
2101
    msegmentptr s = &m->seg;
2101
    msegmentptr s = &m->seg;
2102
    sum += m->topsize + TOP_FOOT_SIZE;
2102
    sum += m->topsize + TOP_FOOT_SIZE;
2103
    while (s != 0) {
2103
    while (s != 0) {
2104
      mchunkptr q = align_as_chunk(s->base);
2104
      mchunkptr q = align_as_chunk(s->base);
2105
      mchunkptr lastq = 0;
2105
      mchunkptr lastq = 0;
2106
      assert(pinuse(q));
2106
      assert(pinuse(q));
2107
      while (segment_holds(s, q) &&
2107
      while (segment_holds(s, q) &&
2108
             q != m->top && q->head != FENCEPOST_HEAD) {
2108
             q != m->top && q->head != FENCEPOST_HEAD) {
2109
        sum += chunksize(q);
2109
        sum += chunksize(q);
2110
        if (cinuse(q)) {
2110
        if (cinuse(q)) {
2111
          assert(!bin_find(m, q));
2111
          assert(!bin_find(m, q));
2112
          do_check_inuse_chunk(m, q);
2112
          do_check_inuse_chunk(m, q);
2113
        }
2113
        }
2114
        else {
2114
        else {
2115
          assert(q == m->dv || bin_find(m, q));
2115
          assert(q == m->dv || bin_find(m, q));
2116
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2116
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2117
          do_check_free_chunk(m, q);
2117
          do_check_free_chunk(m, q);
2118
        }
2118
        }
2119
        lastq = q;
2119
        lastq = q;
2120
        q = next_chunk(q);
2120
        q = next_chunk(q);
2121
      }
2121
      }
2122
      s = s->next;
2122
      s = s->next;
2123
    }
2123
    }
2124
  }
2124
  }
2125
  return sum;
2125
  return sum;
2126
}
2126
}
2127
 
2127
 
2128
/* Check all properties of malloc_state. */
2128
/* Check all properties of malloc_state. */
2129
static void do_check_malloc_state(mstate m) {
2129
static void do_check_malloc_state(mstate m) {
2130
  bindex_t i;
2130
  bindex_t i;
2131
  size_t total;
2131
  size_t total;
2132
  /* check bins */
2132
  /* check bins */
2133
  for (i = 0; i < NSMALLBINS; ++i)
2133
  for (i = 0; i < NSMALLBINS; ++i)
2134
    do_check_smallbin(m, i);
2134
    do_check_smallbin(m, i);
2135
  for (i = 0; i < NTREEBINS; ++i)
2135
  for (i = 0; i < NTREEBINS; ++i)
2136
    do_check_treebin(m, i);
2136
    do_check_treebin(m, i);
2137
 
2137
 
2138
  if (m->dvsize != 0) { /* check dv chunk */
2138
  if (m->dvsize != 0) { /* check dv chunk */
2139
    do_check_any_chunk(m, m->dv);
2139
    do_check_any_chunk(m, m->dv);
2140
    assert(m->dvsize == chunksize(m->dv));
2140
    assert(m->dvsize == chunksize(m->dv));
2141
    assert(m->dvsize >= MIN_CHUNK_SIZE);
2141
    assert(m->dvsize >= MIN_CHUNK_SIZE);
2142
    assert(bin_find(m, m->dv) == 0);
2142
    assert(bin_find(m, m->dv) == 0);
2143
  }
2143
  }
2144
 
2144
 
2145
  if (m->top != 0) {   /* check top chunk */
2145
  if (m->top != 0) {   /* check top chunk */
2146
    do_check_top_chunk(m, m->top);
2146
    do_check_top_chunk(m, m->top);
2147
    assert(m->topsize == chunksize(m->top));
2147
    assert(m->topsize == chunksize(m->top));
2148
    assert(m->topsize > 0);
2148
    assert(m->topsize > 0);
2149
    assert(bin_find(m, m->top) == 0);
2149
    assert(bin_find(m, m->top) == 0);
2150
  }
2150
  }
2151
 
2151
 
2152
  total = traverse_and_check(m);
2152
  total = traverse_and_check(m);
2153
  assert(total <= m->footprint);
2153
  assert(total <= m->footprint);
2154
  assert(m->footprint <= m->max_footprint);
2154
  assert(m->footprint <= m->max_footprint);
2155
}
2155
}
2156
#endif /* DEBUG */
2156
#endif /* DEBUG */
2157
 
2157
 
2158
/* ----------------------------- statistics ------------------------------ */
2158
/* ----------------------------- statistics ------------------------------ */
2159
 
2159
 
2160
#if !NO_MALLINFO
2160
#if !NO_MALLINFO
2161
static struct mallinfo internal_mallinfo(mstate m) {
2161
static struct mallinfo internal_mallinfo(mstate m) {
2162
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2162
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2163
  if (!PREACTION(m)) {
2163
  if (!PREACTION(m)) {
2164
    check_malloc_state(m);
2164
    check_malloc_state(m);
2165
    if (is_initialized(m)) {
2165
    if (is_initialized(m)) {
2166
      size_t nfree = SIZE_T_ONE; /* top always free */
2166
      size_t nfree = SIZE_T_ONE; /* top always free */
2167
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
2167
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
2168
      size_t sum = mfree;
2168
      size_t sum = mfree;
2169
      msegmentptr s = &m->seg;
2169
      msegmentptr s = &m->seg;
2170
      while (s != 0) {
2170
      while (s != 0) {
2171
        mchunkptr q = align_as_chunk(s->base);
2171
        mchunkptr q = align_as_chunk(s->base);
2172
        while (segment_holds(s, q) &&
2172
        while (segment_holds(s, q) &&
2173
               q != m->top && q->head != FENCEPOST_HEAD) {
2173
               q != m->top && q->head != FENCEPOST_HEAD) {
2174
          size_t sz = chunksize(q);
2174
          size_t sz = chunksize(q);
2175
          sum += sz;
2175
          sum += sz;
2176
          if (!cinuse(q)) {
2176
          if (!cinuse(q)) {
2177
            mfree += sz;
2177
            mfree += sz;
2178
            ++nfree;
2178
            ++nfree;
2179
          }
2179
          }
2180
          q = next_chunk(q);
2180
          q = next_chunk(q);
2181
        }
2181
        }
2182
        s = s->next;
2182
        s = s->next;
2183
      }
2183
      }
2184
 
2184
 
2185
      nm.arena    = sum;
2185
      nm.arena    = sum;
2186
      nm.ordblks  = nfree;
2186
      nm.ordblks  = nfree;
2187
      nm.hblkhd   = m->footprint - sum;
2187
      nm.hblkhd   = m->footprint - sum;
2188
      nm.usmblks  = m->max_footprint;
2188
      nm.usmblks  = m->max_footprint;
2189
      nm.uordblks = m->footprint - mfree;
2189
      nm.uordblks = m->footprint - mfree;
2190
      nm.fordblks = mfree;
2190
      nm.fordblks = mfree;
2191
      nm.keepcost = m->topsize;
2191
      nm.keepcost = m->topsize;
2192
    }
2192
    }
2193
 
2193
 
2194
    POSTACTION(m);
2194
    POSTACTION(m);
2195
  }
2195
  }
2196
  return nm;
2196
  return nm;
2197
}
2197
}
2198
#endif /* !NO_MALLINFO */
2198
#endif /* !NO_MALLINFO */
2199
 
2199
 
2200
static void internal_malloc_stats(mstate m) {
2200
static void internal_malloc_stats(mstate m) {
2201
  if (!PREACTION(m)) {
2201
  if (!PREACTION(m)) {
2202
    size_t maxfp = 0;
2202
    size_t maxfp = 0;
2203
    size_t fp = 0;
2203
    size_t fp = 0;
2204
    size_t used = 0;
2204
    size_t used = 0;
2205
    check_malloc_state(m);
2205
    check_malloc_state(m);
2206
    if (is_initialized(m)) {
2206
    if (is_initialized(m)) {
2207
      msegmentptr s = &m->seg;
2207
      msegmentptr s = &m->seg;
2208
      maxfp = m->max_footprint;
2208
      maxfp = m->max_footprint;
2209
      fp = m->footprint;
2209
      fp = m->footprint;
2210
      used = fp - (m->topsize + TOP_FOOT_SIZE);
2210
      used = fp - (m->topsize + TOP_FOOT_SIZE);
2211
 
2211
 
2212
      while (s != 0) {
2212
      while (s != 0) {
2213
        mchunkptr q = align_as_chunk(s->base);
2213
        mchunkptr q = align_as_chunk(s->base);
2214
        while (segment_holds(s, q) &&
2214
        while (segment_holds(s, q) &&
2215
               q != m->top && q->head != FENCEPOST_HEAD) {
2215
               q != m->top && q->head != FENCEPOST_HEAD) {
2216
          if (!cinuse(q))
2216
          if (!cinuse(q))
2217
            used -= chunksize(q);
2217
            used -= chunksize(q);
2218
          q = next_chunk(q);
2218
          q = next_chunk(q);
2219
        }
2219
        }
2220
        s = s->next;
2220
        s = s->next;
2221
      }
2221
      }
2222
    }
2222
    }
2223
 
2223
 
2224
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2224
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2225
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2225
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2226
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2226
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2227
 
2227
 
2228
    POSTACTION(m);
2228
    POSTACTION(m);
2229
  }
2229
  }
2230
}
2230
}
2231
 
2231
 
2232
/* ----------------------- Operations on smallbins ----------------------- */
2232
/* ----------------------- Operations on smallbins ----------------------- */
2233
 
2233
 
2234
/*
2234
/*
2235
  Various forms of linking and unlinking are defined as macros.  Even
2235
  Various forms of linking and unlinking are defined as macros.  Even
2236
  the ones for trees, which are very long but have very short typical
2236
  the ones for trees, which are very long but have very short typical
2237
  paths.  This is ugly but reduces reliance on inlining support of
2237
  paths.  This is ugly but reduces reliance on inlining support of
2238
  compilers.
2238
  compilers.
2239
*/
2239
*/
2240
 
2240
 
2241
/* Link a free chunk into a smallbin  */
2241
/* Link a free chunk into a smallbin  */
2242
#define insert_small_chunk(M, P, S) {\
2242
#define insert_small_chunk(M, P, S) {\
2243
  bindex_t I  = small_index(S);\
2243
  bindex_t I  = small_index(S);\
2244
  mchunkptr B = smallbin_at(M, I);\
2244
  mchunkptr B = smallbin_at(M, I);\
2245
  mchunkptr F = B;\
2245
  mchunkptr F = B;\
2246
  assert(S >= MIN_CHUNK_SIZE);\
2246
  assert(S >= MIN_CHUNK_SIZE);\
2247
  if (!smallmap_is_marked(M, I))\
2247
  if (!smallmap_is_marked(M, I))\
2248
    mark_smallmap(M, I);\
2248
    mark_smallmap(M, I);\
2249
  else if (RTCHECK(ok_address(M, B->fd)))\
2249
  else if (RTCHECK(ok_address(M, B->fd)))\
2250
    F = B->fd;\
2250
    F = B->fd;\
2251
  else {\
2251
  else {\
2252
    CORRUPTION_ERROR_ACTION(M);\
2252
    CORRUPTION_ERROR_ACTION(M);\
2253
  }\
2253
  }\
2254
  B->fd = P;\
2254
  B->fd = P;\
2255
  F->bk = P;\
2255
  F->bk = P;\
2256
  P->fd = F;\
2256
  P->fd = F;\
2257
  P->bk = B;\
2257
  P->bk = B;\
2258
}
2258
}
2259
 
2259
 
2260
/* Unlink a chunk from a smallbin  */
2260
/* Unlink a chunk from a smallbin  */
2261
#define unlink_small_chunk(M, P, S) {\
2261
#define unlink_small_chunk(M, P, S) {\
2262
  mchunkptr F = P->fd;\
2262
  mchunkptr F = P->fd;\
2263
  mchunkptr B = P->bk;\
2263
  mchunkptr B = P->bk;\
2264
  bindex_t I = small_index(S);\
2264
  bindex_t I = small_index(S);\
2265
  assert(P != B);\
2265
  assert(P != B);\
2266
  assert(P != F);\
2266
  assert(P != F);\
2267
  assert(chunksize(P) == small_index2size(I));\
2267
  assert(chunksize(P) == small_index2size(I));\
2268
  if (F == B)\
2268
  if (F == B)\
2269
    clear_smallmap(M, I);\
2269
    clear_smallmap(M, I);\
2270
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2270
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2271
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2271
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2272
    F->bk = B;\
2272
    F->bk = B;\
2273
    B->fd = F;\
2273
    B->fd = F;\
2274
  }\
2274
  }\
2275
  else {\
2275
  else {\
2276
    CORRUPTION_ERROR_ACTION(M);\
2276
    CORRUPTION_ERROR_ACTION(M);\
2277
  }\
2277
  }\
2278
}
2278
}
2279
 
2279
 
2280
/* Unlink the first chunk from a smallbin */
2280
/* Unlink the first chunk from a smallbin */
2281
#define unlink_first_small_chunk(M, B, P, I) {\
2281
#define unlink_first_small_chunk(M, B, P, I) {\
2282
  mchunkptr F = P->fd;\
2282
  mchunkptr F = P->fd;\
2283
  assert(P != B);\
2283
  assert(P != B);\
2284
  assert(P != F);\
2284
  assert(P != F);\
2285
  assert(chunksize(P) == small_index2size(I));\
2285
  assert(chunksize(P) == small_index2size(I));\
2286
  if (B == F)\
2286
  if (B == F)\
2287
    clear_smallmap(M, I);\
2287
    clear_smallmap(M, I);\
2288
  else if (RTCHECK(ok_address(M, F))) {\
2288
  else if (RTCHECK(ok_address(M, F))) {\
2289
    B->fd = F;\
2289
    B->fd = F;\
2290
    F->bk = B;\
2290
    F->bk = B;\
2291
  }\
2291
  }\
2292
  else {\
2292
  else {\
2293
    CORRUPTION_ERROR_ACTION(M);\
2293
    CORRUPTION_ERROR_ACTION(M);\
2294
  }\
2294
  }\
2295
}
2295
}
2296
 
2296
 
2297
/* Replace dv node, binning the old one */
2297
/* Replace dv node, binning the old one */
2298
/* Used only when dvsize known to be small */
2298
/* Used only when dvsize known to be small */
2299
#define replace_dv(M, P, S) {\
2299
#define replace_dv(M, P, S) {\
2300
  size_t DVS = M->dvsize;\
2300
  size_t DVS = M->dvsize;\
2301
  if (DVS != 0) {\
2301
  if (DVS != 0) {\
2302
    mchunkptr DV = M->dv;\
2302
    mchunkptr DV = M->dv;\
2303
    assert(is_small(DVS));\
2303
    assert(is_small(DVS));\
2304
    insert_small_chunk(M, DV, DVS);\
2304
    insert_small_chunk(M, DV, DVS);\
2305
  }\
2305
  }\
2306
  M->dvsize = S;\
2306
  M->dvsize = S;\
2307
  M->dv = P;\
2307
  M->dv = P;\
2308
}
2308
}
2309
 
2309
 
2310
/* ------------------------- Operations on trees ------------------------- */
2310
/* ------------------------- Operations on trees ------------------------- */
2311
 
2311
 
2312
/* Insert chunk into tree */
2312
/* Insert chunk into tree */
2313
#define insert_large_chunk(M, X, S) {\
2313
#define insert_large_chunk(M, X, S) {\
2314
  tbinptr* H;\
2314
  tbinptr* H;\
2315
  bindex_t I;\
2315
  bindex_t I;\
2316
  compute_tree_index(S, I);\
2316
  compute_tree_index(S, I);\
2317
  H = treebin_at(M, I);\
2317
  H = treebin_at(M, I);\
2318
  X->index = I;\
2318
  X->index = I;\
2319
  X->child[0] = X->child[1] = 0;\
2319
  X->child[0] = X->child[1] = 0;\
2320
  if (!treemap_is_marked(M, I)) {\
2320
  if (!treemap_is_marked(M, I)) {\
2321
    mark_treemap(M, I);\
2321
    mark_treemap(M, I);\
2322
    *H = X;\
2322
    *H = X;\
2323
    X->parent = (tchunkptr)H;\
2323
    X->parent = (tchunkptr)H;\
2324
    X->fd = X->bk = X;\
2324
    X->fd = X->bk = X;\
2325
  }\
2325
  }\
2326
  else {\
2326
  else {\
2327
    tchunkptr T = *H;\
2327
    tchunkptr T = *H;\
2328
    size_t K = S << leftshift_for_tree_index(I);\
2328
    size_t K = S << leftshift_for_tree_index(I);\
2329
    for (;;) {\
2329
    for (;;) {\
2330
      if (chunksize(T) != S) {\
2330
      if (chunksize(T) != S) {\
2331
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2331
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2332
        K <<= 1;\
2332
        K <<= 1;\
2333
        if (*C != 0)\
2333
        if (*C != 0)\
2334
          T = *C;\
2334
          T = *C;\
2335
        else if (RTCHECK(ok_address(M, C))) {\
2335
        else if (RTCHECK(ok_address(M, C))) {\
2336
          *C = X;\
2336
          *C = X;\
2337
          X->parent = T;\
2337
          X->parent = T;\
2338
          X->fd = X->bk = X;\
2338
          X->fd = X->bk = X;\
2339
          break;\
2339
          break;\
2340
        }\
2340
        }\
2341
        else {\
2341
        else {\
2342
          CORRUPTION_ERROR_ACTION(M);\
2342
          CORRUPTION_ERROR_ACTION(M);\
2343
          break;\
2343
          break;\
2344
        }\
2344
        }\
2345
      }\
2345
      }\
2346
      else {\
2346
      else {\
2347
        tchunkptr F = T->fd;\
2347
        tchunkptr F = T->fd;\
2348
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2348
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2349
          T->fd = F->bk = X;\
2349
          T->fd = F->bk = X;\
2350
          X->fd = F;\
2350
          X->fd = F;\
2351
          X->bk = T;\
2351
          X->bk = T;\
2352
          X->parent = 0;\
2352
          X->parent = 0;\
2353
          break;\
2353
          break;\
2354
        }\
2354
        }\
2355
        else {\
2355
        else {\
2356
          CORRUPTION_ERROR_ACTION(M);\
2356
          CORRUPTION_ERROR_ACTION(M);\
2357
          break;\
2357
          break;\
2358
        }\
2358
        }\
2359
      }\
2359
      }\
2360
    }\
2360
    }\
2361
  }\
2361
  }\
2362
}
2362
}
2363
 
2363
 
2364
/*
2364
/*
2365
  Unlink steps:
2365
  Unlink steps:
2366
 
2366
 
2367
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2367
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2368
     and choose its bk node as its replacement.
2368
     and choose its bk node as its replacement.
2369
  2. If x was the last node of its size, but not a leaf node, it must
2369
  2. If x was the last node of its size, but not a leaf node, it must
2370
     be replaced with a leaf node (not merely one with an open left or
2370
     be replaced with a leaf node (not merely one with an open left or
2371
     right), to make sure that lefts and rights of descendents
2371
     right), to make sure that lefts and rights of descendents
2372
     correspond properly to bit masks.  We use the rightmost descendent
2372
     correspond properly to bit masks.  We use the rightmost descendent
2373
     of x.  We could use any other leaf, but this is easy to locate and
2373
     of x.  We could use any other leaf, but this is easy to locate and
2374
     tends to counteract removal of leftmosts elsewhere, and so keeps
2374
     tends to counteract removal of leftmosts elsewhere, and so keeps
2375
     paths shorter than minimally guaranteed.  This doesn't loop much
2375
     paths shorter than minimally guaranteed.  This doesn't loop much
2376
     because on average a node in a tree is near the bottom.
2376
     because on average a node in a tree is near the bottom.
2377
  3. If x is the base of a chain (i.e., has parent links) relink
2377
  3. If x is the base of a chain (i.e., has parent links) relink
2378
     x's parent and children to x's replacement (or null if none).
2378
     x's parent and children to x's replacement (or null if none).
2379
*/
2379
*/
2380
 
2380
 
2381
#define unlink_large_chunk(M, X) {\
2381
#define unlink_large_chunk(M, X) {\
2382
  tchunkptr XP = X->parent;\
2382
  tchunkptr XP = X->parent;\
2383
  tchunkptr R;\
2383
  tchunkptr R;\
2384
  if (X->bk != X) {\
2384
  if (X->bk != X) {\
2385
    tchunkptr F = X->fd;\
2385
    tchunkptr F = X->fd;\
2386
    R = X->bk;\
2386
    R = X->bk;\
2387
    if (RTCHECK(ok_address(M, F))) {\
2387
    if (RTCHECK(ok_address(M, F))) {\
2388
      F->bk = R;\
2388
      F->bk = R;\
2389
      R->fd = F;\
2389
      R->fd = F;\
2390
    }\
2390
    }\
2391
    else {\
2391
    else {\
2392
      CORRUPTION_ERROR_ACTION(M);\
2392
      CORRUPTION_ERROR_ACTION(M);\
2393
    }\
2393
    }\
2394
  }\
2394
  }\
2395
  else {\
2395
  else {\
2396
    tchunkptr* RP;\
2396
    tchunkptr* RP;\
2397
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2397
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2398
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2398
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2399
      tchunkptr* CP;\
2399
      tchunkptr* CP;\
2400
      while ((*(CP = &(R->child[1])) != 0) ||\
2400
      while ((*(CP = &(R->child[1])) != 0) ||\
2401
             (*(CP = &(R->child[0])) != 0)) {\
2401
             (*(CP = &(R->child[0])) != 0)) {\
2402
        R = *(RP = CP);\
2402
        R = *(RP = CP);\
2403
      }\
2403
      }\
2404
      if (RTCHECK(ok_address(M, RP)))\
2404
      if (RTCHECK(ok_address(M, RP)))\
2405
        *RP = 0;\
2405
        *RP = 0;\
2406
      else {\
2406
      else {\
2407
        CORRUPTION_ERROR_ACTION(M);\
2407
        CORRUPTION_ERROR_ACTION(M);\
2408
      }\
2408
      }\
2409
    }\
2409
    }\
2410
  }\
2410
  }\
2411
  if (XP != 0) {\
2411
  if (XP != 0) {\
2412
    tbinptr* H = treebin_at(M, X->index);\
2412
    tbinptr* H = treebin_at(M, X->index);\
2413
    if (X == *H) {\
2413
    if (X == *H) {\
2414
      if ((*H = R) == 0) \
2414
      if ((*H = R) == 0) \
2415
        clear_treemap(M, X->index);\
2415
        clear_treemap(M, X->index);\
2416
    }\
2416
    }\
2417
    else if (RTCHECK(ok_address(M, XP))) {\
2417
    else if (RTCHECK(ok_address(M, XP))) {\
2418
      if (XP->child[0] == X) \
2418
      if (XP->child[0] == X) \
2419
        XP->child[0] = R;\
2419
        XP->child[0] = R;\
2420
      else \
2420
      else \
2421
        XP->child[1] = R;\
2421
        XP->child[1] = R;\
2422
    }\
2422
    }\
2423
    else\
2423
    else\
2424
      CORRUPTION_ERROR_ACTION(M);\
2424
      CORRUPTION_ERROR_ACTION(M);\
2425
    if (R != 0) {\
2425
    if (R != 0) {\
2426
      if (RTCHECK(ok_address(M, R))) {\
2426
      if (RTCHECK(ok_address(M, R))) {\
2427
        tchunkptr C0, C1;\
2427
        tchunkptr C0, C1;\
2428
        R->parent = XP;\
2428
        R->parent = XP;\
2429
        if ((C0 = X->child[0]) != 0) {\
2429
        if ((C0 = X->child[0]) != 0) {\
2430
          if (RTCHECK(ok_address(M, C0))) {\
2430
          if (RTCHECK(ok_address(M, C0))) {\
2431
            R->child[0] = C0;\
2431
            R->child[0] = C0;\
2432
            C0->parent = R;\
2432
            C0->parent = R;\
2433
          }\
2433
          }\
2434
          else\
2434
          else\
2435
            CORRUPTION_ERROR_ACTION(M);\
2435
            CORRUPTION_ERROR_ACTION(M);\
2436
        }\
2436
        }\
2437
        if ((C1 = X->child[1]) != 0) {\
2437
        if ((C1 = X->child[1]) != 0) {\
2438
          if (RTCHECK(ok_address(M, C1))) {\
2438
          if (RTCHECK(ok_address(M, C1))) {\
2439
            R->child[1] = C1;\
2439
            R->child[1] = C1;\
2440
            C1->parent = R;\
2440
            C1->parent = R;\
2441
          }\
2441
          }\
2442
          else\
2442
          else\
2443
            CORRUPTION_ERROR_ACTION(M);\
2443
            CORRUPTION_ERROR_ACTION(M);\
2444
        }\
2444
        }\
2445
      }\
2445
      }\
2446
      else\
2446
      else\
2447
        CORRUPTION_ERROR_ACTION(M);\
2447
        CORRUPTION_ERROR_ACTION(M);\
2448
    }\
2448
    }\
2449
  }\
2449
  }\
2450
}
2450
}
2451
 
2451
 
2452
/* Relays to large vs small bin operations */
2452
/* Relays to large vs small bin operations */
2453
 
2453
 
2454
#define insert_chunk(M, P, S)\
2454
#define insert_chunk(M, P, S)\
2455
  if (is_small(S)) insert_small_chunk(M, P, S)\
2455
  if (is_small(S)) insert_small_chunk(M, P, S)\
2456
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2456
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2457
 
2457
 
2458
#define unlink_chunk(M, P, S)\
2458
#define unlink_chunk(M, P, S)\
2459
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2459
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2460
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2460
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2461
 
2461
 
2462
 
2462
 
2463
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2463
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2464
 
2464
 
2465
#if ONLY_MSPACES
2465
#if ONLY_MSPACES
2466
#define internal_malloc(m, b) mspace_malloc(m, b)
2466
#define internal_malloc(m, b) mspace_malloc(m, b)
2467
#define internal_free(m, mem) mspace_free(m,mem);
2467
#define internal_free(m, mem) mspace_free(m,mem);
2468
#else /* ONLY_MSPACES */
2468
#else /* ONLY_MSPACES */
2469
#if MSPACES
2469
#if MSPACES
2470
#define internal_malloc(m, b)\
2470
#define internal_malloc(m, b)\
2471
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2471
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2472
#define internal_free(m, mem)\
2472
#define internal_free(m, mem)\
2473
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2473
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2474
#else /* MSPACES */
2474
#else /* MSPACES */
2475
#define internal_malloc(m, b) dlmalloc(b)
2475
#define internal_malloc(m, b) dlmalloc(b)
2476
#define internal_free(m, mem) dlfree(mem)
2476
#define internal_free(m, mem) dlfree(mem)
2477
#endif /* MSPACES */
2477
#endif /* MSPACES */
2478
#endif /* ONLY_MSPACES */
2478
#endif /* ONLY_MSPACES */
2479
 
2479
 
2480
/* -----------------------  Direct-mmapping chunks ----------------------- */
2480
/* -----------------------  Direct-mmapping chunks ----------------------- */
2481
 
2481
 
2482
/*
2482
/*
2483
  Directly mmapped chunks are set up with an offset to the start of
2483
  Directly mmapped chunks are set up with an offset to the start of
2484
  the mmapped region stored in the prev_foot field of the chunk. This
2484
  the mmapped region stored in the prev_foot field of the chunk. This
2485
  allows reconstruction of the required argument to MUNMAP when freed,
2485
  allows reconstruction of the required argument to MUNMAP when freed,
2486
  and also allows adjustment of the returned chunk to meet alignment
2486
  and also allows adjustment of the returned chunk to meet alignment
2487
  requirements (especially in memalign).  There is also enough space
2487
  requirements (especially in memalign).  There is also enough space
2488
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2488
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2489
  the PINUSE bit so frees can be checked.
2489
  the PINUSE bit so frees can be checked.
2490
*/
2490
*/
2491
 
2491
 
2492
/* Malloc using mmap */
2492
/* Malloc using mmap */
2493
static void* mmap_alloc(mstate m, size_t nb) {
2493
static void* mmap_alloc(mstate m, size_t nb) {
2494
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2494
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2495
  if (mmsize > nb) {     /* Check for wrap around 0 */
2495
  if (mmsize > nb) {     /* Check for wrap around 0 */
2496
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2496
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2497
    if (mm != CMFAIL) {
2497
    if (mm != CMFAIL) {
2498
      size_t offset = align_offset(chunk2mem(mm));
2498
      size_t offset = align_offset(chunk2mem(mm));
2499
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2499
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2500
      mchunkptr p = (mchunkptr)(mm + offset);
2500
      mchunkptr p = (mchunkptr)(mm + offset);
2501
      p->prev_foot = offset | IS_MMAPPED_BIT;
2501
      p->prev_foot = offset | IS_MMAPPED_BIT;
2502
      (p)->head = (psize|CINUSE_BIT);
2502
      (p)->head = (psize|CINUSE_BIT);
2503
      mark_inuse_foot(m, p, psize);
2503
      mark_inuse_foot(m, p, psize);
2504
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2504
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2505
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2505
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2506
 
2506
 
2507
      if (mm < m->least_addr)
2507
      if (mm < m->least_addr)
2508
        m->least_addr = mm;
2508
        m->least_addr = mm;
2509
      if ((m->footprint += mmsize) > m->max_footprint)
2509
      if ((m->footprint += mmsize) > m->max_footprint)
2510
        m->max_footprint = m->footprint;
2510
        m->max_footprint = m->footprint;
2511
      assert(is_aligned(chunk2mem(p)));
2511
      assert(is_aligned(chunk2mem(p)));
2512
      check_mmapped_chunk(m, p);
2512
      check_mmapped_chunk(m, p);
2513
      return chunk2mem(p);
2513
      return chunk2mem(p);
2514
    }
2514
    }
2515
  }
2515
  }
2516
  return 0;
2516
  return 0;
2517
}
2517
}
2518
 
2518
 
2519
/* Realloc using mmap */
2519
/* Realloc using mmap */
2520
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2520
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2521
  size_t oldsize = chunksize(oldp);
2521
  size_t oldsize = chunksize(oldp);
2522
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2522
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2523
    return 0;
2523
    return 0;
2524
  /* Keep old chunk if big enough but not too big */
2524
  /* Keep old chunk if big enough but not too big */
2525
  if (oldsize >= nb + SIZE_T_SIZE &&
2525
  if (oldsize >= nb + SIZE_T_SIZE &&
2526
      (oldsize - nb) <= (mparams.granularity << 1))
2526
      (oldsize - nb) <= (mparams.granularity << 1))
2527
    return oldp;
2527
    return oldp;
2528
  else {
2528
  else {
2529
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2529
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2530
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2530
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2531
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2531
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2532
                                         CHUNK_ALIGN_MASK);
2532
                                         CHUNK_ALIGN_MASK);
2533
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2533
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2534
                                  oldmmsize, newmmsize, 1);
2534
                                  oldmmsize, newmmsize, 1);
2535
    if (cp != CMFAIL) {
2535
    if (cp != CMFAIL) {
2536
      mchunkptr newp = (mchunkptr)(cp + offset);
2536
      mchunkptr newp = (mchunkptr)(cp + offset);
2537
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2537
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2538
      newp->head = (psize|CINUSE_BIT);
2538
      newp->head = (psize|CINUSE_BIT);
2539
      mark_inuse_foot(m, newp, psize);
2539
      mark_inuse_foot(m, newp, psize);
2540
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2540
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2541
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2541
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2542
 
2542
 
2543
      if (cp < m->least_addr)
2543
      if (cp < m->least_addr)
2544
        m->least_addr = cp;
2544
        m->least_addr = cp;
2545
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2545
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2546
        m->max_footprint = m->footprint;
2546
        m->max_footprint = m->footprint;
2547
      check_mmapped_chunk(m, newp);
2547
      check_mmapped_chunk(m, newp);
2548
      return newp;
2548
      return newp;
2549
    }
2549
    }
2550
  }
2550
  }
2551
  return 0;
2551
  return 0;
2552
}
2552
}
2553
 
2553
 
2554
/* -------------------------- mspace management -------------------------- */
2554
/* -------------------------- mspace management -------------------------- */
2555
 
2555
 
2556
/* Initialize top chunk and its size */
2556
/* Initialize top chunk and its size */
2557
static void init_top(mstate m, mchunkptr p, size_t psize) {
2557
static void init_top(mstate m, mchunkptr p, size_t psize) {
2558
  /* Ensure alignment */
2558
  /* Ensure alignment */
2559
  size_t offset = align_offset(chunk2mem(p));
2559
  size_t offset = align_offset(chunk2mem(p));
2560
  p = (mchunkptr)((char*)p + offset);
2560
  p = (mchunkptr)((char*)p + offset);
2561
  psize -= offset;
2561
  psize -= offset;
2562
 
2562
 
2563
  m->top = p;
2563
  m->top = p;
2564
  m->topsize = psize;
2564
  m->topsize = psize;
2565
  p->head = psize | PINUSE_BIT;
2565
  p->head = psize | PINUSE_BIT;
2566
  /* set size of fake trailing chunk holding overhead space only once */
2566
  /* set size of fake trailing chunk holding overhead space only once */
2567
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2567
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2568
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2568
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2569
}
2569
}
2570
 
2570
 
2571
/* Initialize bins for a new mstate that is otherwise zeroed out */
2571
/* Initialize bins for a new mstate that is otherwise zeroed out */
2572
static void init_bins(mstate m) {
2572
static void init_bins(mstate m) {
2573
  /* Establish circular links for smallbins */
2573
  /* Establish circular links for smallbins */
2574
  bindex_t i;
2574
  bindex_t i;
2575
  for (i = 0; i < NSMALLBINS; ++i) {
2575
  for (i = 0; i < NSMALLBINS; ++i) {
2576
    sbinptr bin = smallbin_at(m,i);
2576
    sbinptr bin = smallbin_at(m,i);
2577
    bin->fd = bin->bk = bin;
2577
    bin->fd = bin->bk = bin;
2578
  }
2578
  }
2579
}
2579
}
2580
 
2580
 
2581
#if PROCEED_ON_ERROR
2581
#if PROCEED_ON_ERROR
2582
 
2582
 
2583
/* default corruption action */
2583
/* default corruption action */
2584
static void reset_on_error(mstate m) {
2584
static void reset_on_error(mstate m) {
2585
  int i;
2585
  int i;
2586
  ++malloc_corruption_error_count;
2586
  ++malloc_corruption_error_count;
2587
  /* Reinitialize fields to forget about all memory */
2587
  /* Reinitialize fields to forget about all memory */
2588
  m->smallbins = m->treebins = 0;
2588
  m->smallbins = m->treebins = 0;
2589
  m->dvsize = m->topsize = 0;
2589
  m->dvsize = m->topsize = 0;
2590
  m->seg.base = 0;
2590
  m->seg.base = 0;
2591
  m->seg.size = 0;
2591
  m->seg.size = 0;
2592
  m->seg.next = 0;
2592
  m->seg.next = 0;
2593
  m->top = m->dv = 0;
2593
  m->top = m->dv = 0;
2594
  for (i = 0; i < NTREEBINS; ++i)
2594
  for (i = 0; i < NTREEBINS; ++i)
2595
    *treebin_at(m, i) = 0;
2595
    *treebin_at(m, i) = 0;
2596
  init_bins(m);
2596
  init_bins(m);
2597
}
2597
}
2598
#endif /* PROCEED_ON_ERROR */
2598
#endif /* PROCEED_ON_ERROR */
2599
 
2599
 
2600
/* Allocate chunk and prepend remainder with chunk in successor base. */
2600
/* Allocate chunk and prepend remainder with chunk in successor base. */
2601
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2601
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2602
                           size_t nb) {
2602
                           size_t nb) {
2603
  mchunkptr p = align_as_chunk(newbase);
2603
  mchunkptr p = align_as_chunk(newbase);
2604
  mchunkptr oldfirst = align_as_chunk(oldbase);
2604
  mchunkptr oldfirst = align_as_chunk(oldbase);
2605
  size_t psize = (char*)oldfirst - (char*)p;
2605
  size_t psize = (char*)oldfirst - (char*)p;
2606
  mchunkptr q = chunk_plus_offset(p, nb);
2606
  mchunkptr q = chunk_plus_offset(p, nb);
2607
  size_t qsize = psize - nb;
2607
  size_t qsize = psize - nb;
2608
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2608
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2609
 
2609
 
2610
  assert((char*)oldfirst > (char*)q);
2610
  assert((char*)oldfirst > (char*)q);
2611
  assert(pinuse(oldfirst));
2611
  assert(pinuse(oldfirst));
2612
  assert(qsize >= MIN_CHUNK_SIZE);
2612
  assert(qsize >= MIN_CHUNK_SIZE);
2613
 
2613
 
2614
  /* consolidate remainder with first chunk of old base */
2614
  /* consolidate remainder with first chunk of old base */
2615
  if (oldfirst == m->top) {
2615
  if (oldfirst == m->top) {
2616
    size_t tsize = m->topsize += qsize;
2616
    size_t tsize = m->topsize += qsize;
2617
    m->top = q;
2617
    m->top = q;
2618
    q->head = tsize | PINUSE_BIT;
2618
    q->head = tsize | PINUSE_BIT;
2619
    check_top_chunk(m, q);
2619
    check_top_chunk(m, q);
2620
  }
2620
  }
2621
  else if (oldfirst == m->dv) {
2621
  else if (oldfirst == m->dv) {
2622
    size_t dsize = m->dvsize += qsize;
2622
    size_t dsize = m->dvsize += qsize;
2623
    m->dv = q;
2623
    m->dv = q;
2624
    set_size_and_pinuse_of_free_chunk(q, dsize);
2624
    set_size_and_pinuse_of_free_chunk(q, dsize);
2625
  }
2625
  }
2626
  else {
2626
  else {
2627
    if (!cinuse(oldfirst)) {
2627
    if (!cinuse(oldfirst)) {
2628
      size_t nsize = chunksize(oldfirst);
2628
      size_t nsize = chunksize(oldfirst);
2629
      unlink_chunk(m, oldfirst, nsize);
2629
      unlink_chunk(m, oldfirst, nsize);
2630
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2630
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2631
      qsize += nsize;
2631
      qsize += nsize;
2632
    }
2632
    }
2633
    set_free_with_pinuse(q, qsize, oldfirst);
2633
    set_free_with_pinuse(q, qsize, oldfirst);
2634
    insert_chunk(m, q, qsize);
2634
    insert_chunk(m, q, qsize);
2635
    check_free_chunk(m, q);
2635
    check_free_chunk(m, q);
2636
  }
2636
  }
2637
 
2637
 
2638
  check_malloced_chunk(m, chunk2mem(p), nb);
2638
  check_malloced_chunk(m, chunk2mem(p), nb);
2639
  return chunk2mem(p);
2639
  return chunk2mem(p);
2640
}
2640
}
2641
 
2641
 
2642
 
2642
 
2643
/* Add a segment to hold a new noncontiguous region */
2643
/* Add a segment to hold a new noncontiguous region */
2644
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2644
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2645
  /* Determine locations and sizes of segment, fenceposts, old top */
2645
  /* Determine locations and sizes of segment, fenceposts, old top */
2646
  char* old_top = (char*)m->top;
2646
  char* old_top = (char*)m->top;
2647
  msegmentptr oldsp = segment_holding(m, old_top);
2647
  msegmentptr oldsp = segment_holding(m, old_top);
2648
  char* old_end = oldsp->base + oldsp->size;
2648
  char* old_end = oldsp->base + oldsp->size;
2649
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2649
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2650
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2650
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2651
  size_t offset = align_offset(chunk2mem(rawsp));
2651
  size_t offset = align_offset(chunk2mem(rawsp));
2652
  char* asp = rawsp + offset;
2652
  char* asp = rawsp + offset;
2653
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2653
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2654
  mchunkptr sp = (mchunkptr)csp;
2654
  mchunkptr sp = (mchunkptr)csp;
2655
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2655
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2656
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2656
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2657
  mchunkptr p = tnext;
2657
  mchunkptr p = tnext;
2658
  int nfences = 0;
2658
  int nfences = 0;
2659
 
2659
 
2660
  /* reset top to new space */
2660
  /* reset top to new space */
2661
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2661
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2662
 
2662
 
2663
  /* Set up segment record */
2663
  /* Set up segment record */
2664
  assert(is_aligned(ss));
2664
  assert(is_aligned(ss));
2665
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2665
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2666
  *ss = m->seg; /* Push current record */
2666
  *ss = m->seg; /* Push current record */
2667
  m->seg.base = tbase;
2667
  m->seg.base = tbase;
2668
  m->seg.size = tsize;
2668
  m->seg.size = tsize;
2669
  m->seg.sflags = mmapped;
2669
  m->seg.sflags = mmapped;
2670
  m->seg.next = ss;
2670
  m->seg.next = ss;
2671
 
2671
 
2672
  /* Insert trailing fenceposts */
2672
  /* Insert trailing fenceposts */
2673
  for (;;) {
2673
  for (;;) {
2674
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2674
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2675
    p->head = FENCEPOST_HEAD;
2675
    p->head = FENCEPOST_HEAD;
2676
    ++nfences;
2676
    ++nfences;
2677
    if ((char*)(&(nextp->head)) < old_end)
2677
    if ((char*)(&(nextp->head)) < old_end)
2678
      p = nextp;
2678
      p = nextp;
2679
    else
2679
    else
2680
      break;
2680
      break;
2681
  }
2681
  }
2682
  assert(nfences >= 2);
2682
  assert(nfences >= 2);
2683
 
2683
 
2684
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2684
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2685
  if (csp != old_top) {
2685
  if (csp != old_top) {
2686
    mchunkptr q = (mchunkptr)old_top;
2686
    mchunkptr q = (mchunkptr)old_top;
2687
    size_t psize = csp - old_top;
2687
    size_t psize = csp - old_top;
2688
    mchunkptr tn = chunk_plus_offset(q, psize);
2688
    mchunkptr tn = chunk_plus_offset(q, psize);
2689
    set_free_with_pinuse(q, psize, tn);
2689
    set_free_with_pinuse(q, psize, tn);
2690
    insert_chunk(m, q, psize);
2690
    insert_chunk(m, q, psize);
2691
  }
2691
  }
2692
 
2692
 
2693
  check_top_chunk(m, m->top);
2693
  check_top_chunk(m, m->top);
2694
}
2694
}
2695
 
2695
 
2696
/* -------------------------- System allocation -------------------------- */
2696
/* -------------------------- System allocation -------------------------- */
2697
 
2697
 
2698
/* Get memory from system using MORECORE or MMAP */
2698
/* Get memory from system using MORECORE or MMAP */
2699
static void* sys_alloc(mstate m, size_t nb) {
2699
static void* sys_alloc(mstate m, size_t nb) {
2700
  char* tbase = CMFAIL;
2700
  char* tbase = CMFAIL;
2701
  size_t tsize = 0;
2701
  size_t tsize = 0;
2702
  flag_t mmap_flag = 0;
2702
  flag_t mmap_flag = 0;
2703
 
2703
 
2704
  init_mparams();
2704
  init_mparams();
2705
 
2705
 
2706
  /* Directly map large chunks */
2706
  /* Directly map large chunks */
2707
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2707
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2708
    void* mem = mmap_alloc(m, nb);
2708
    void* mem = mmap_alloc(m, nb);
2709
    if (mem != 0)
2709
    if (mem != 0)
2710
      return mem;
2710
      return mem;
2711
  }
2711
  }
2712
 
2712
 
2713
  /*
2713
  /*
2714
    Try getting memory in any of three ways (in most-preferred to
2714
    Try getting memory in any of three ways (in most-preferred to
2715
    least-preferred order):
2715
    least-preferred order):
2716
    1. A call to MORECORE that can normally contiguously extend memory.
2716
    1. A call to MORECORE that can normally contiguously extend memory.
2717
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2717
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2718
       or main space is mmapped or a previous contiguous call failed)
2718
       or main space is mmapped or a previous contiguous call failed)
2719
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2719
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2720
       Note that under the default settings, if MORECORE is unable to
2720
       Note that under the default settings, if MORECORE is unable to
2721
       fulfill a request, and HAVE_MMAP is true, then mmap is
2721
       fulfill a request, and HAVE_MMAP is true, then mmap is
2722
       used as a noncontiguous system allocator. This is a useful backup
2722
       used as a noncontiguous system allocator. This is a useful backup
2723
       strategy for systems with holes in address spaces -- in this case
2723
       strategy for systems with holes in address spaces -- in this case
2724
       sbrk cannot contiguously expand the heap, but mmap may be able to
2724
       sbrk cannot contiguously expand the heap, but mmap may be able to
2725
       find space.
2725
       find space.
2726
    3. A call to MORECORE that cannot usually contiguously extend memory.
2726
    3. A call to MORECORE that cannot usually contiguously extend memory.
2727
       (disabled if not HAVE_MORECORE)
2727
       (disabled if not HAVE_MORECORE)
2728
  */
2728
  */
2729
 
2729
 
2730
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2730
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2731
    char* br = CMFAIL;
2731
    char* br = CMFAIL;
2732
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2732
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2733
    size_t asize = 0;
2733
    size_t asize = 0;
2734
    ACQUIRE_MORECORE_LOCK();
2734
    ACQUIRE_MORECORE_LOCK();
2735
 
2735
 
2736
    if (ss == 0) {  /* First time through or recovery */
2736
    if (ss == 0) {  /* First time through or recovery */
2737
      char* base = (char*)CALL_MORECORE(0);
2737
      char* base = (char*)CALL_MORECORE(0);
2738
      if (base != CMFAIL) {
2738
      if (base != CMFAIL) {
2739
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2739
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2740
        /* Adjust to end on a page boundary */
2740
        /* Adjust to end on a page boundary */
2741
        if (!is_page_aligned(base))
2741
        if (!is_page_aligned(base))
2742
          asize += (page_align((size_t)base) - (size_t)base);
2742
          asize += (page_align((size_t)base) - (size_t)base);
2743
        /* Can't call MORECORE if size is negative when treated as signed */
2743
        /* Can't call MORECORE if size is negative when treated as signed */
2744
        if (asize < HALF_MAX_SIZE_T &&
2744
        if (asize < HALF_MAX_SIZE_T &&
2745
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2745
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2746
          tbase = base;
2746
          tbase = base;
2747
          tsize = asize;
2747
          tsize = asize;
2748
        }
2748
        }
2749
      }
2749
      }
2750
    }
2750
    }
2751
    else {
2751
    else {
2752
      /* Subtract out existing available top space from MORECORE request. */
2752
      /* Subtract out existing available top space from MORECORE request. */
2753
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2753
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2754
      /* Use mem here only if it did continuously extend old space */
2754
      /* Use mem here only if it did continuously extend old space */
2755
      if (asize < HALF_MAX_SIZE_T &&
2755
      if (asize < HALF_MAX_SIZE_T &&
2756
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2756
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2757
        tbase = br;
2757
        tbase = br;
2758
        tsize = asize;
2758
        tsize = asize;
2759
      }
2759
      }
2760
    }
2760
    }
2761
 
2761
 
2762
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2762
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2763
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2763
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2764
        if (asize < HALF_MAX_SIZE_T &&
2764
        if (asize < HALF_MAX_SIZE_T &&
2765
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2765
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2766
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2766
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2767
          if (esize < HALF_MAX_SIZE_T) {
2767
          if (esize < HALF_MAX_SIZE_T) {
2768
            char* end = (char*)CALL_MORECORE(esize);
2768
            char* end = (char*)CALL_MORECORE(esize);
2769
            if (end != CMFAIL)
2769
            if (end != CMFAIL)
2770
              asize += esize;
2770
              asize += esize;
2771
            else {            /* Can't use; try to release */
2771
            else {            /* Can't use; try to release */
2772
              CALL_MORECORE(-asize);
2772
              CALL_MORECORE(-asize);
2773
              br = CMFAIL;
2773
              br = CMFAIL;
2774
            }
2774
            }
2775
          }
2775
          }
2776
        }
2776
        }
2777
      }
2777
      }
2778
      if (br != CMFAIL) {    /* Use the space we did get */
2778
      if (br != CMFAIL) {    /* Use the space we did get */
2779
        tbase = br;
2779
        tbase = br;
2780
        tsize = asize;
2780
        tsize = asize;
2781
      }
2781
      }
2782
      else
2782
      else
2783
        disable_contiguous(m); /* Don't try contiguous path in the future */
2783
        disable_contiguous(m); /* Don't try contiguous path in the future */
2784
    }
2784
    }
2785
 
2785
 
2786
    RELEASE_MORECORE_LOCK();
2786
    RELEASE_MORECORE_LOCK();
2787
  }
2787
  }
2788
 
2788
 
2789
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2789
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2790
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2790
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2791
    size_t rsize = granularity_align(req);
2791
    size_t rsize = granularity_align(req);
2792
    if (rsize > nb) { /* Fail if wraps around zero */
2792
    if (rsize > nb) { /* Fail if wraps around zero */
2793
      char* mp = (char*)(CALL_MMAP(rsize));
2793
      char* mp = (char*)(CALL_MMAP(rsize));
2794
      if (mp != CMFAIL) {
2794
      if (mp != CMFAIL) {
2795
        tbase = mp;
2795
        tbase = mp;
2796
        tsize = rsize;
2796
        tsize = rsize;
2797
        mmap_flag = IS_MMAPPED_BIT;
2797
        mmap_flag = IS_MMAPPED_BIT;
2798
      }
2798
      }
2799
    }
2799
    }
2800
  }
2800
  }
2801
 
2801
 
2802
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2802
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2803
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2803
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2804
    if (asize < HALF_MAX_SIZE_T) {
2804
    if (asize < HALF_MAX_SIZE_T) {
2805
      char* br = CMFAIL;
2805
      char* br = CMFAIL;
2806
      char* end = CMFAIL;
2806
      char* end = CMFAIL;
2807
      ACQUIRE_MORECORE_LOCK();
2807
      ACQUIRE_MORECORE_LOCK();
2808
      br = (char*)(CALL_MORECORE(asize));
2808
      br = (char*)(CALL_MORECORE(asize));
2809
      end = (char*)(CALL_MORECORE(0));
2809
      end = (char*)(CALL_MORECORE(0));
2810
      RELEASE_MORECORE_LOCK();
2810
      RELEASE_MORECORE_LOCK();
2811
      if (br != CMFAIL && end != CMFAIL && br < end) {
2811
      if (br != CMFAIL && end != CMFAIL && br < end) {
2812
        size_t ssize = end - br;
2812
        size_t ssize = end - br;
2813
        if (ssize > nb + TOP_FOOT_SIZE) {
2813
        if (ssize > nb + TOP_FOOT_SIZE) {
2814
          tbase = br;
2814
          tbase = br;
2815
          tsize = ssize;
2815
          tsize = ssize;
2816
        }
2816
        }
2817
      }
2817
      }
2818
    }
2818
    }
2819
  }
2819
  }
2820
 
2820
 
2821
  if (tbase != CMFAIL) {
2821
  if (tbase != CMFAIL) {
2822
 
2822
 
2823
    if ((m->footprint += tsize) > m->max_footprint)
2823
    if ((m->footprint += tsize) > m->max_footprint)
2824
      m->max_footprint = m->footprint;
2824
      m->max_footprint = m->footprint;
2825
 
2825
 
2826
    if (!is_initialized(m)) { /* first-time initialization */
2826
    if (!is_initialized(m)) { /* first-time initialization */
2827
      m->seg.base = m->least_addr = tbase;
2827
      m->seg.base = m->least_addr = tbase;
2828
      m->seg.size = tsize;
2828
      m->seg.size = tsize;
2829
      m->seg.sflags = mmap_flag;
2829
      m->seg.sflags = mmap_flag;
2830
      m->magic = mparams.magic;
2830
      m->magic = mparams.magic;
2831
      init_bins(m);
2831
      init_bins(m);
2832
      if (is_global(m))
2832
      if (is_global(m))
2833
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2833
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2834
      else {
2834
      else {
2835
        /* Offset top by embedded malloc_state */
2835
        /* Offset top by embedded malloc_state */
2836
        mchunkptr mn = next_chunk(mem2chunk(m));
2836
        mchunkptr mn = next_chunk(mem2chunk(m));
2837
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2837
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2838
      }
2838
      }
2839
    }
2839
    }
2840
 
2840
 
2841
    else {
2841
    else {
2842
      /* Try to merge with an existing segment */
2842
      /* Try to merge with an existing segment */
2843
      msegmentptr sp = &m->seg;
2843
      msegmentptr sp = &m->seg;
2844
      while (sp != 0 && tbase != sp->base + sp->size)
2844
      while (sp != 0 && tbase != sp->base + sp->size)
2845
        sp = sp->next;
2845
        sp = sp->next;
2846
      if (sp != 0 &&
2846
      if (sp != 0 &&
2847
          !is_extern_segment(sp) &&
2847
          !is_extern_segment(sp) &&
2848
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2848
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2849
          segment_holds(sp, m->top)) { /* append */
2849
          segment_holds(sp, m->top)) { /* append */
2850
        sp->size += tsize;
2850
        sp->size += tsize;
2851
        init_top(m, m->top, m->topsize + tsize);
2851
        init_top(m, m->top, m->topsize + tsize);
2852
      }
2852
      }
2853
      else {
2853
      else {
2854
        if (tbase < m->least_addr)
2854
        if (tbase < m->least_addr)
2855
          m->least_addr = tbase;
2855
          m->least_addr = tbase;
2856
        sp = &m->seg;
2856
        sp = &m->seg;
2857
        while (sp != 0 && sp->base != tbase + tsize)
2857
        while (sp != 0 && sp->base != tbase + tsize)
2858
          sp = sp->next;
2858
          sp = sp->next;
2859
        if (sp != 0 &&
2859
        if (sp != 0 &&
2860
            !is_extern_segment(sp) &&
2860
            !is_extern_segment(sp) &&
2861
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2861
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2862
          char* oldbase = sp->base;
2862
          char* oldbase = sp->base;
2863
          sp->base = tbase;
2863
          sp->base = tbase;
2864
          sp->size += tsize;
2864
          sp->size += tsize;
2865
          return prepend_alloc(m, tbase, oldbase, nb);
2865
          return prepend_alloc(m, tbase, oldbase, nb);
2866
        }
2866
        }
2867
        else
2867
        else
2868
          add_segment(m, tbase, tsize, mmap_flag);
2868
          add_segment(m, tbase, tsize, mmap_flag);
2869
      }
2869
      }
2870
    }
2870
    }
2871
 
2871
 
2872
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2872
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2873
      size_t rsize = m->topsize -= nb;
2873
      size_t rsize = m->topsize -= nb;
2874
      mchunkptr p = m->top;
2874
      mchunkptr p = m->top;
2875
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2875
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2876
      r->head = rsize | PINUSE_BIT;
2876
      r->head = rsize | PINUSE_BIT;
2877
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2877
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2878
      check_top_chunk(m, m->top);
2878
      check_top_chunk(m, m->top);
2879
      check_malloced_chunk(m, chunk2mem(p), nb);
2879
      check_malloced_chunk(m, chunk2mem(p), nb);
2880
      return chunk2mem(p);
2880
      return chunk2mem(p);
2881
    }
2881
    }
2882
  }
2882
  }
2883
 
2883
 
2884
  MALLOC_FAILURE_ACTION;
2884
  MALLOC_FAILURE_ACTION;
2885
  return 0;
2885
  return 0;
2886
}
2886
}
2887
 
2887
 
2888
/* -----------------------  system deallocation -------------------------- */
2888
/* -----------------------  system deallocation -------------------------- */
2889
 
2889
 
2890
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2890
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2891
static size_t release_unused_segments(mstate m) {
2891
static size_t release_unused_segments(mstate m) {
2892
  size_t released = 0;
2892
  size_t released = 0;
2893
  msegmentptr pred = &m->seg;
2893
  msegmentptr pred = &m->seg;
2894
  msegmentptr sp = pred->next;
2894
  msegmentptr sp = pred->next;
2895
  while (sp != 0) {
2895
  while (sp != 0) {
2896
    char* base = sp->base;
2896
    char* base = sp->base;
2897
    size_t size = sp->size;
2897
    size_t size = sp->size;
2898
    msegmentptr next = sp->next;
2898
    msegmentptr next = sp->next;
2899
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2899
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2900
      mchunkptr p = align_as_chunk(base);
2900
      mchunkptr p = align_as_chunk(base);
2901
      size_t psize = chunksize(p);
2901
      size_t psize = chunksize(p);
2902
      /* Can unmap if first chunk holds entire segment and not pinned */
2902
      /* Can unmap if first chunk holds entire segment and not pinned */
2903
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2903
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2904
        tchunkptr tp = (tchunkptr)p;
2904
        tchunkptr tp = (tchunkptr)p;
2905
        assert(segment_holds(sp, (char*)sp));
2905
        assert(segment_holds(sp, (char*)sp));
2906
        if (p == m->dv) {
2906
        if (p == m->dv) {
2907
          m->dv = 0;
2907
          m->dv = 0;
2908
          m->dvsize = 0;
2908
          m->dvsize = 0;
2909
        }
2909
        }
2910
        else {
2910
        else {
2911
          unlink_large_chunk(m, tp);
2911
          unlink_large_chunk(m, tp);
2912
        }
2912
        }
2913
        if (CALL_MUNMAP(base, size) == 0) {
2913
        if (CALL_MUNMAP(base, size) == 0) {
2914
          released += size;
2914
          released += size;
2915
          m->footprint -= size;
2915
          m->footprint -= size;
2916
          /* unlink obsoleted record */
2916
          /* unlink obsoleted record */
2917
          sp = pred;
2917
          sp = pred;
2918
          sp->next = next;
2918
          sp->next = next;
2919
        }
2919
        }
2920
        else { /* back out if cannot unmap */
2920
        else { /* back out if cannot unmap */
2921
          insert_large_chunk(m, tp, psize);
2921
          insert_large_chunk(m, tp, psize);
2922
        }
2922
        }
2923
      }
2923
      }
2924
    }
2924
    }
2925
    pred = sp;
2925
    pred = sp;
2926
    sp = next;
2926
    sp = next;
2927
  }
2927
  }
2928
  return released;
2928
  return released;
2929
}
2929
}
2930
 
2930
 
2931
static int sys_trim(mstate m, size_t pad) {
2931
static int sys_trim(mstate m, size_t pad) {
2932
  size_t released = 0;
2932
  size_t released = 0;
2933
  if (pad < MAX_REQUEST && is_initialized(m)) {
2933
  if (pad < MAX_REQUEST && is_initialized(m)) {
2934
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2934
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2935
 
2935
 
2936
    if (m->topsize > pad) {
2936
    if (m->topsize > pad) {
2937
      /* Shrink top space in granularity-size units, keeping at least one */
2937
      /* Shrink top space in granularity-size units, keeping at least one */
2938
      size_t unit = mparams.granularity;
2938
      size_t unit = mparams.granularity;
2939
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2939
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2940
                      SIZE_T_ONE) * unit;
2940
                      SIZE_T_ONE) * unit;
2941
      msegmentptr sp = segment_holding(m, (char*)m->top);
2941
      msegmentptr sp = segment_holding(m, (char*)m->top);
2942
 
2942
 
2943
      if (!is_extern_segment(sp)) {
2943
      if (!is_extern_segment(sp)) {
2944
        if (is_mmapped_segment(sp)) {
2944
        if (is_mmapped_segment(sp)) {
2945
          if (HAVE_MMAP &&
2945
          if (HAVE_MMAP &&
2946
              sp->size >= extra &&
2946
              sp->size >= extra &&
2947
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2947
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2948
            size_t newsize = sp->size - extra;
2948
            size_t newsize = sp->size - extra;
2949
            /* Prefer mremap, fall back to munmap */
2949
            /* Prefer mremap, fall back to munmap */
2950
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2950
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2951
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
2951
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
2952
              released = extra;
2952
              released = extra;
2953
            }
2953
            }
2954
          }
2954
          }
2955
        }
2955
        }
2956
        else if (HAVE_MORECORE) {
2956
        else if (HAVE_MORECORE) {
2957
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2957
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2958
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2958
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2959
          ACQUIRE_MORECORE_LOCK();
2959
          ACQUIRE_MORECORE_LOCK();
2960
          {
2960
          {
2961
            /* Make sure end of memory is where we last set it. */
2961
            /* Make sure end of memory is where we last set it. */
2962
            char* old_br = (char*)(CALL_MORECORE(0));
2962
            char* old_br = (char*)(CALL_MORECORE(0));
2963
            if (old_br == sp->base + sp->size) {
2963
            if (old_br == sp->base + sp->size) {
2964
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2964
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2965
              char* new_br = (char*)(CALL_MORECORE(0));
2965
              char* new_br = (char*)(CALL_MORECORE(0));
2966
              if (rel_br != CMFAIL && new_br < old_br)
2966
              if (rel_br != CMFAIL && new_br < old_br)
2967
                released = old_br - new_br;
2967
                released = old_br - new_br;
2968
            }
2968
            }
2969
          }
2969
          }
2970
          RELEASE_MORECORE_LOCK();
2970
          RELEASE_MORECORE_LOCK();
2971
        }
2971
        }
2972
      }
2972
      }
2973
 
2973
 
2974
      if (released != 0) {
2974
      if (released != 0) {
2975
        sp->size -= released;
2975
        sp->size -= released;
2976
        m->footprint -= released;
2976
        m->footprint -= released;
2977
        init_top(m, m->top, m->topsize - released);
2977
        init_top(m, m->top, m->topsize - released);
2978
        check_top_chunk(m, m->top);
2978
        check_top_chunk(m, m->top);
2979
      }
2979
      }
2980
    }
2980
    }
2981
 
2981
 
2982
    /* Unmap any unused mmapped segments */
2982
    /* Unmap any unused mmapped segments */
2983
    if (HAVE_MMAP)
2983
    if (HAVE_MMAP)
2984
      released += release_unused_segments(m);
2984
      released += release_unused_segments(m);
2985
 
2985
 
2986
    /* On failure, disable autotrim to avoid repeated failed future calls */
2986
    /* On failure, disable autotrim to avoid repeated failed future calls */
2987
    if (released == 0)
2987
    if (released == 0)
2988
      m->trim_check = MAX_SIZE_T;
2988
      m->trim_check = MAX_SIZE_T;
2989
  }
2989
  }
2990
 
2990
 
2991
  return (released != 0)? 1 : 0;
2991
  return (released != 0)? 1 : 0;
2992
}
2992
}
2993
 
2993
 
2994
/* ---------------------------- malloc support --------------------------- */
2994
/* ---------------------------- malloc support --------------------------- */
2995
 
2995
 
2996
/* allocate a large request from the best fitting chunk in a treebin */
2996
/* allocate a large request from the best fitting chunk in a treebin */
2997
static void* tmalloc_large(mstate m, size_t nb) {
2997
static void* tmalloc_large(mstate m, size_t nb) {
2998
  tchunkptr v = 0;
2998
  tchunkptr v = 0;
2999
  size_t rsize = -nb; /* Unsigned negation */
2999
  size_t rsize = -nb; /* Unsigned negation */
3000
  tchunkptr t;
3000
  tchunkptr t;
3001
  bindex_t idx;
3001
  bindex_t idx;
3002
  compute_tree_index(nb, idx);
3002
  compute_tree_index(nb, idx);
3003
 
3003
 
3004
  if ((t = *treebin_at(m, idx)) != 0) {
3004
  if ((t = *treebin_at(m, idx)) != 0) {
3005
    /* Traverse tree for this bin looking for node with size == nb */
3005
    /* Traverse tree for this bin looking for node with size == nb */
3006
    size_t sizebits = nb << leftshift_for_tree_index(idx);
3006
    size_t sizebits = nb << leftshift_for_tree_index(idx);
3007
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
3007
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
3008
    for (;;) {
3008
    for (;;) {
3009
      tchunkptr rt;
3009
      tchunkptr rt;
3010
      size_t trem = chunksize(t) - nb;
3010
      size_t trem = chunksize(t) - nb;
3011
      if (trem < rsize) {
3011
      if (trem < rsize) {
3012
        v = t;
3012
        v = t;
3013
        if ((rsize = trem) == 0)
3013
        if ((rsize = trem) == 0)
3014
          break;
3014
          break;
3015
      }
3015
      }
3016
      rt = t->child[1];
3016
      rt = t->child[1];
3017
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3017
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3018
      if (rt != 0 && rt != t)
3018
      if (rt != 0 && rt != t)
3019
        rst = rt;
3019
        rst = rt;
3020
      if (t == 0) {
3020
      if (t == 0) {
3021
        t = rst; /* set t to least subtree holding sizes > nb */
3021
        t = rst; /* set t to least subtree holding sizes > nb */
3022
        break;
3022
        break;
3023
      }
3023
      }
3024
      sizebits <<= 1;
3024
      sizebits <<= 1;
3025
    }
3025
    }
3026
  }
3026
  }
3027
 
3027
 
3028
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3028
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3029
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3029
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3030
    if (leftbits != 0) {
3030
    if (leftbits != 0) {
3031
      bindex_t i;
3031
      bindex_t i;
3032
      binmap_t leastbit = least_bit(leftbits);
3032
      binmap_t leastbit = least_bit(leftbits);
3033
      compute_bit2idx(leastbit, i);
3033
      compute_bit2idx(leastbit, i);
3034
      t = *treebin_at(m, i);
3034
      t = *treebin_at(m, i);
3035
    }
3035
    }
3036
  }
3036
  }
3037
 
3037
 
3038
  while (t != 0) { /* find smallest of tree or subtree */
3038
  while (t != 0) { /* find smallest of tree or subtree */
3039
    size_t trem = chunksize(t) - nb;
3039
    size_t trem = chunksize(t) - nb;
3040
    if (trem < rsize) {
3040
    if (trem < rsize) {
3041
      rsize = trem;
3041
      rsize = trem;
3042
      v = t;
3042
      v = t;
3043
    }
3043
    }
3044
    t = leftmost_child(t);
3044
    t = leftmost_child(t);
3045
  }
3045
  }
3046
 
3046
 
3047
  /*  If dv is a better fit, return 0 so malloc will use it */
3047
  /*  If dv is a better fit, return 0 so malloc will use it */
3048
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3048
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3049
    if (RTCHECK(ok_address(m, v))) { /* split */
3049
    if (RTCHECK(ok_address(m, v))) { /* split */
3050
      mchunkptr r = chunk_plus_offset(v, nb);
3050
      mchunkptr r = chunk_plus_offset(v, nb);
3051
      assert(chunksize(v) == rsize + nb);
3051
      assert(chunksize(v) == rsize + nb);
3052
      if (RTCHECK(ok_next(v, r))) {
3052
      if (RTCHECK(ok_next(v, r))) {
3053
        unlink_large_chunk(m, v);
3053
        unlink_large_chunk(m, v);
3054
        if (rsize < MIN_CHUNK_SIZE)
3054
        if (rsize < MIN_CHUNK_SIZE)
3055
          set_inuse_and_pinuse(m, v, (rsize + nb));
3055
          set_inuse_and_pinuse(m, v, (rsize + nb));
3056
        else {
3056
        else {
3057
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3057
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3058
          set_size_and_pinuse_of_free_chunk(r, rsize);
3058
          set_size_and_pinuse_of_free_chunk(r, rsize);
3059
          insert_chunk(m, r, rsize);
3059
          insert_chunk(m, r, rsize);
3060
        }
3060
        }
3061
        return chunk2mem(v);
3061
        return chunk2mem(v);
3062
      }
3062
      }
3063
    }
3063
    }
3064
    CORRUPTION_ERROR_ACTION(m);
3064
    CORRUPTION_ERROR_ACTION(m);
3065
  }
3065
  }
3066
  return 0;
3066
  return 0;
3067
}
3067
}
3068
 
3068
 
3069
/* allocate a small request from the best fitting chunk in a treebin */
3069
/* allocate a small request from the best fitting chunk in a treebin */
3070
static void* tmalloc_small(mstate m, size_t nb) {
3070
static void* tmalloc_small(mstate m, size_t nb) {
3071
  tchunkptr t, v;
3071
  tchunkptr t, v;
3072
  size_t rsize;
3072
  size_t rsize;
3073
  bindex_t i;
3073
  bindex_t i;
3074
  binmap_t leastbit = least_bit(m->treemap);
3074
  binmap_t leastbit = least_bit(m->treemap);
3075
  compute_bit2idx(leastbit, i);
3075
  compute_bit2idx(leastbit, i);
3076
 
3076
 
3077
  v = t = *treebin_at(m, i);
3077
  v = t = *treebin_at(m, i);
3078
  rsize = chunksize(t) - nb;
3078
  rsize = chunksize(t) - nb;
3079
 
3079
 
3080
  while ((t = leftmost_child(t)) != 0) {
3080
  while ((t = leftmost_child(t)) != 0) {
3081
    size_t trem = chunksize(t) - nb;
3081
    size_t trem = chunksize(t) - nb;
3082
    if (trem < rsize) {
3082
    if (trem < rsize) {
3083
      rsize = trem;
3083
      rsize = trem;
3084
      v = t;
3084
      v = t;
3085
    }
3085
    }
3086
  }
3086
  }
3087
 
3087
 
3088
  if (RTCHECK(ok_address(m, v))) {
3088
  if (RTCHECK(ok_address(m, v))) {
3089
    mchunkptr r = chunk_plus_offset(v, nb);
3089
    mchunkptr r = chunk_plus_offset(v, nb);
3090
    assert(chunksize(v) == rsize + nb);
3090
    assert(chunksize(v) == rsize + nb);
3091
    if (RTCHECK(ok_next(v, r))) {
3091
    if (RTCHECK(ok_next(v, r))) {
3092
      unlink_large_chunk(m, v);
3092
      unlink_large_chunk(m, v);
3093
      if (rsize < MIN_CHUNK_SIZE)
3093
      if (rsize < MIN_CHUNK_SIZE)
3094
        set_inuse_and_pinuse(m, v, (rsize + nb));
3094
        set_inuse_and_pinuse(m, v, (rsize + nb));
3095
      else {
3095
      else {
3096
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3096
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3097
        set_size_and_pinuse_of_free_chunk(r, rsize);
3097
        set_size_and_pinuse_of_free_chunk(r, rsize);
3098
        replace_dv(m, r, rsize);
3098
        replace_dv(m, r, rsize);
3099
      }
3099
      }
3100
      return chunk2mem(v);
3100
      return chunk2mem(v);
3101
    }
3101
    }
3102
  }
3102
  }
3103
 
3103
 
3104
  CORRUPTION_ERROR_ACTION(m);
3104
  CORRUPTION_ERROR_ACTION(m);
3105
  return 0;
3105
  return 0;
3106
}
3106
}
3107
 
3107
 
3108
/* --------------------------- realloc support --------------------------- */
3108
/* --------------------------- realloc support --------------------------- */
3109
 
3109
 
3110
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3110
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3111
  if (bytes >= MAX_REQUEST) {
3111
  if (bytes >= MAX_REQUEST) {
3112
    MALLOC_FAILURE_ACTION;
3112
    MALLOC_FAILURE_ACTION;
3113
    return 0;
3113
    return 0;
3114
  }
3114
  }
3115
  if (!PREACTION(m)) {
3115
  if (!PREACTION(m)) {
3116
    mchunkptr oldp = mem2chunk(oldmem);
3116
    mchunkptr oldp = mem2chunk(oldmem);
3117
    size_t oldsize = chunksize(oldp);
3117
    size_t oldsize = chunksize(oldp);
3118
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3118
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3119
    mchunkptr newp = 0;
3119
    mchunkptr newp = 0;
3120
    void* extra = 0;
3120
    void* extra = 0;
3121
 
3121
 
3122
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3122
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3123
 
3123
 
3124
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3124
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3125
                ok_next(oldp, next) && ok_pinuse(next))) {
3125
                ok_next(oldp, next) && ok_pinuse(next))) {
3126
      size_t nb = request2size(bytes);
3126
      size_t nb = request2size(bytes);
3127
      if (is_mmapped(oldp))
3127
      if (is_mmapped(oldp))
3128
        newp = mmap_resize(m, oldp, nb);
3128
        newp = mmap_resize(m, oldp, nb);
3129
      else if (oldsize >= nb) { /* already big enough */
3129
      else if (oldsize >= nb) { /* already big enough */
3130
        size_t rsize = oldsize - nb;
3130
        size_t rsize = oldsize - nb;
3131
        newp = oldp;
3131
        newp = oldp;
3132
        if (rsize >= MIN_CHUNK_SIZE) {
3132
        if (rsize >= MIN_CHUNK_SIZE) {
3133
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3133
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3134
          set_inuse(m, newp, nb);
3134
          set_inuse(m, newp, nb);
3135
          set_inuse(m, remainder, rsize);
3135
          set_inuse(m, remainder, rsize);
3136
          extra = chunk2mem(remainder);
3136
          extra = chunk2mem(remainder);
3137
        }
3137
        }
3138
      }
3138
      }
3139
      else if (next == m->top && oldsize + m->topsize > nb) {
3139
      else if (next == m->top && oldsize + m->topsize > nb) {
3140
        /* Expand into top */
3140
        /* Expand into top */
3141
        size_t newsize = oldsize + m->topsize;
3141
        size_t newsize = oldsize + m->topsize;
3142
        size_t newtopsize = newsize - nb;
3142
        size_t newtopsize = newsize - nb;
3143
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3143
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3144
        set_inuse(m, oldp, nb);
3144
        set_inuse(m, oldp, nb);
3145
        newtop->head = newtopsize |PINUSE_BIT;
3145
        newtop->head = newtopsize |PINUSE_BIT;
3146
        m->top = newtop;
3146
        m->top = newtop;
3147
        m->topsize = newtopsize;
3147
        m->topsize = newtopsize;
3148
        newp = oldp;
3148
        newp = oldp;
3149
      }
3149
      }
3150
    }
3150
    }
3151
    else {
3151
    else {
3152
      USAGE_ERROR_ACTION(m, oldmem);
3152
      USAGE_ERROR_ACTION(m, oldmem);
3153
      POSTACTION(m);
3153
      POSTACTION(m);
3154
      return 0;
3154
      return 0;
3155
    }
3155
    }
3156
 
3156
 
3157
    POSTACTION(m);
3157
    POSTACTION(m);
3158
 
3158
 
3159
    if (newp != 0) {
3159
    if (newp != 0) {
3160
      if (extra != 0) {
3160
      if (extra != 0) {
3161
        internal_free(m, extra);
3161
        internal_free(m, extra);
3162
      }
3162
      }
3163
      check_inuse_chunk(m, newp);
3163
      check_inuse_chunk(m, newp);
3164
      return chunk2mem(newp);
3164
      return chunk2mem(newp);
3165
    }
3165
    }
3166
    else {
3166
    else {
3167
      void* newmem = internal_malloc(m, bytes);
3167
      void* newmem = internal_malloc(m, bytes);
3168
      if (newmem != 0) {
3168
      if (newmem != 0) {
3169
        size_t oc = oldsize - overhead_for(oldp);
3169
        size_t oc = oldsize - overhead_for(oldp);
3170
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3170
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3171
        internal_free(m, oldmem);
3171
        internal_free(m, oldmem);
3172
      }
3172
      }
3173
      return newmem;
3173
      return newmem;
3174
    }
3174
    }
3175
  }
3175
  }
3176
  return 0;
3176
  return 0;
3177
}
3177
}
3178
 
3178
 
3179
/* --------------------------- memalign support -------------------------- */
3179
/* --------------------------- memalign support -------------------------- */
3180
 
3180
 
3181
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3181
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3182
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3182
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3183
    return internal_malloc(m, bytes);
3183
    return internal_malloc(m, bytes);
3184
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3184
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3185
    alignment = MIN_CHUNK_SIZE;
3185
    alignment = MIN_CHUNK_SIZE;
3186
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3186
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3187
    size_t a = MALLOC_ALIGNMENT << 1;
3187
    size_t a = MALLOC_ALIGNMENT << 1;
3188
    while (a < alignment) a <<= 1;
3188
    while (a < alignment) a <<= 1;
3189
    alignment = a;
3189
    alignment = a;
3190
  }
3190
  }
3191
 
3191
 
3192
  if (bytes >= MAX_REQUEST - alignment) {
3192
  if (bytes >= MAX_REQUEST - alignment) {
3193
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3193
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3194
      MALLOC_FAILURE_ACTION;
3194
      MALLOC_FAILURE_ACTION;
3195
    }
3195
    }
3196
  }
3196
  }
3197
  else {
3197
  else {
3198
    size_t nb = request2size(bytes);
3198
    size_t nb = request2size(bytes);
3199
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3199
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3200
    char* mem = (char*)internal_malloc(m, req);
3200
    char* mem = (char*)internal_malloc(m, req);
3201
    if (mem != 0) {
3201
    if (mem != 0) {
3202
      void* leader = 0;
3202
      void* leader = 0;
3203
      void* trailer = 0;
3203
      void* trailer = 0;
3204
      mchunkptr p = mem2chunk(mem);
3204
      mchunkptr p = mem2chunk(mem);
3205
 
3205
 
3206
      if (PREACTION(m)) return 0;
3206
      if (PREACTION(m)) return 0;
3207
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3207
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3208
        /*
3208
        /*
3209
          Find an aligned spot inside chunk.  Since we need to give
3209
          Find an aligned spot inside chunk.  Since we need to give
3210
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3210
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3211
          the first calculation places us at a spot with less than
3211
          the first calculation places us at a spot with less than
3212
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3212
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3213
          We've allocated enough total room so that this is always
3213
          We've allocated enough total room so that this is always
3214
          possible.
3214
          possible.
3215
        */
3215
        */
3216
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3216
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3217
                                                       alignment -
3217
                                                       alignment -
3218
                                                       SIZE_T_ONE)) &
3218
                                                       SIZE_T_ONE)) &
3219
                                             -alignment));
3219
                                             -alignment));
3220
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3220
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3221
          br : br+alignment;
3221
          br : br+alignment;
3222
        mchunkptr newp = (mchunkptr)pos;
3222
        mchunkptr newp = (mchunkptr)pos;
3223
        size_t leadsize = pos - (char*)(p);
3223
        size_t leadsize = pos - (char*)(p);
3224
        size_t newsize = chunksize(p) - leadsize;
3224
        size_t newsize = chunksize(p) - leadsize;
3225
 
3225
 
3226
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3226
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3227
          newp->prev_foot = p->prev_foot + leadsize;
3227
          newp->prev_foot = p->prev_foot + leadsize;
3228
          newp->head = (newsize|CINUSE_BIT);
3228
          newp->head = (newsize|CINUSE_BIT);
3229
        }
3229
        }
3230
        else { /* Otherwise, give back leader, use the rest */
3230
        else { /* Otherwise, give back leader, use the rest */
3231
          set_inuse(m, newp, newsize);
3231
          set_inuse(m, newp, newsize);
3232
          set_inuse(m, p, leadsize);
3232
          set_inuse(m, p, leadsize);
3233
          leader = chunk2mem(p);
3233
          leader = chunk2mem(p);
3234
        }
3234
        }
3235
        p = newp;
3235
        p = newp;
3236
      }
3236
      }
3237
 
3237
 
3238
      /* Give back spare room at the end */
3238
      /* Give back spare room at the end */
3239
      if (!is_mmapped(p)) {
3239
      if (!is_mmapped(p)) {
3240
        size_t size = chunksize(p);
3240
        size_t size = chunksize(p);
3241
        if (size > nb + MIN_CHUNK_SIZE) {
3241
        if (size > nb + MIN_CHUNK_SIZE) {
3242
          size_t remainder_size = size - nb;
3242
          size_t remainder_size = size - nb;
3243
          mchunkptr remainder = chunk_plus_offset(p, nb);
3243
          mchunkptr remainder = chunk_plus_offset(p, nb);
3244
          set_inuse(m, p, nb);
3244
          set_inuse(m, p, nb);
3245
          set_inuse(m, remainder, remainder_size);
3245
          set_inuse(m, remainder, remainder_size);
3246
          trailer = chunk2mem(remainder);
3246
          trailer = chunk2mem(remainder);
3247
        }
3247
        }
3248
      }
3248
      }
3249
 
3249
 
3250
      assert (chunksize(p) >= nb);
3250
      assert (chunksize(p) >= nb);
3251
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3251
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3252
      check_inuse_chunk(m, p);
3252
      check_inuse_chunk(m, p);
3253
      POSTACTION(m);
3253
      POSTACTION(m);
3254
      if (leader != 0) {
3254
      if (leader != 0) {
3255
        internal_free(m, leader);
3255
        internal_free(m, leader);
3256
      }
3256
      }
3257
      if (trailer != 0) {
3257
      if (trailer != 0) {
3258
        internal_free(m, trailer);
3258
        internal_free(m, trailer);
3259
      }
3259
      }
3260
      return chunk2mem(p);
3260
      return chunk2mem(p);
3261
    }
3261
    }
3262
  }
3262
  }
3263
  return 0;
3263
  return 0;
3264
}
3264
}
3265
 
3265
 
3266
/* ------------------------ comalloc/coalloc support --------------------- */
3266
/* ------------------------ comalloc/coalloc support --------------------- */
3267
 
3267
 
3268
static void** ialloc(mstate m,
3268
static void** ialloc(mstate m,
3269
                     size_t n_elements,
3269
                     size_t n_elements,
3270
                     size_t* sizes,
3270
                     size_t* sizes,
3271
                     int opts,
3271
                     int opts,
3272
                     void* chunks[]) {
3272
                     void* chunks[]) {
3273
  /*
3273
  /*
3274
    This provides common support for independent_X routines, handling
3274
    This provides common support for independent_X routines, handling
3275
    all of the combinations that can result.
3275
    all of the combinations that can result.
3276
 
3276
 
3277
    The opts arg has:
3277
    The opts arg has:
3278
    bit 0 set if all elements are same size (using sizes[0])
3278
    bit 0 set if all elements are same size (using sizes[0])
3279
    bit 1 set if elements should be zeroed
3279
    bit 1 set if elements should be zeroed
3280
  */
3280
  */
3281
 
3281
 
3282
  size_t    element_size;   /* chunksize of each element, if all same */
3282
  size_t    element_size;   /* chunksize of each element, if all same */
3283
  size_t    contents_size;  /* total size of elements */
3283
  size_t    contents_size;  /* total size of elements */
3284
  size_t    array_size;     /* request size of pointer array */
3284
  size_t    array_size;     /* request size of pointer array */
3285
  void*     mem;            /* malloced aggregate space */
3285
  void*     mem;            /* malloced aggregate space */
3286
  mchunkptr p;              /* corresponding chunk */
3286
  mchunkptr p;              /* corresponding chunk */
3287
  size_t    remainder_size; /* remaining bytes while splitting */
3287
  size_t    remainder_size; /* remaining bytes while splitting */
3288
  void**    marray;         /* either "chunks" or malloced ptr array */
3288
  void**    marray;         /* either "chunks" or malloced ptr array */
3289
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3289
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3290
  flag_t    was_enabled;    /* to disable mmap */
3290
  flag_t    was_enabled;    /* to disable mmap */
3291
  size_t    size;
3291
  size_t    size;
3292
  size_t    i;
3292
  size_t    i;
3293
 
3293
 
3294
  /* compute array length, if needed */
3294
  /* compute array length, if needed */
3295
  if (chunks != 0) {
3295
  if (chunks != 0) {
3296
    if (n_elements == 0)
3296
    if (n_elements == 0)
3297
      return chunks; /* nothing to do */
3297
      return chunks; /* nothing to do */
3298
    marray = chunks;
3298
    marray = chunks;
3299
    array_size = 0;
3299
    array_size = 0;
3300
  }
3300
  }
3301
  else {
3301
  else {
3302
    /* if empty req, must still return chunk representing empty array */
3302
    /* if empty req, must still return chunk representing empty array */
3303
    if (n_elements == 0)
3303
    if (n_elements == 0)
3304
      return (void**)internal_malloc(m, 0);
3304
      return (void**)internal_malloc(m, 0);
3305
    marray = 0;
3305
    marray = 0;
3306
    array_size = request2size(n_elements * (sizeof(void*)));
3306
    array_size = request2size(n_elements * (sizeof(void*)));
3307
  }
3307
  }
3308
 
3308
 
3309
  /* compute total element size */
3309
  /* compute total element size */
3310
  if (opts & 0x1) { /* all-same-size */
3310
  if (opts & 0x1) { /* all-same-size */
3311
    element_size = request2size(*sizes);
3311
    element_size = request2size(*sizes);
3312
    contents_size = n_elements * element_size;
3312
    contents_size = n_elements * element_size;
3313
  }
3313
  }
3314
  else { /* add up all the sizes */
3314
  else { /* add up all the sizes */
3315
    element_size = 0;
3315
    element_size = 0;
3316
    contents_size = 0;
3316
    contents_size = 0;
3317
    for (i = 0; i != n_elements; ++i)
3317
    for (i = 0; i != n_elements; ++i)
3318
      contents_size += request2size(sizes[i]);
3318
      contents_size += request2size(sizes[i]);
3319
  }
3319
  }
3320
 
3320
 
3321
  size = contents_size + array_size;
3321
  size = contents_size + array_size;
3322
 
3322
 
3323
  /*
3323
  /*
3324
     Allocate the aggregate chunk.  First disable direct-mmapping so
3324
     Allocate the aggregate chunk.  First disable direct-mmapping so
3325
     malloc won't use it, since we would not be able to later
3325
     malloc won't use it, since we would not be able to later
3326
     free/realloc space internal to a segregated mmap region.
3326
     free/realloc space internal to a segregated mmap region.
3327
  */
3327
  */
3328
  was_enabled = use_mmap(m);
3328
  was_enabled = use_mmap(m);
3329
  disable_mmap(m);
3329
  disable_mmap(m);
3330
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3330
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3331
  if (was_enabled)
3331
  if (was_enabled)
3332
    enable_mmap(m);
3332
    enable_mmap(m);
3333
  if (mem == 0)
3333
  if (mem == 0)
3334
    return 0;
3334
    return 0;
3335
 
3335
 
3336
  if (PREACTION(m)) return 0;
3336
  if (PREACTION(m)) return 0;
3337
  p = mem2chunk(mem);
3337
  p = mem2chunk(mem);
3338
  remainder_size = chunksize(p);
3338
  remainder_size = chunksize(p);
3339
 
3339
 
3340
  assert(!is_mmapped(p));
3340
  assert(!is_mmapped(p));
3341
 
3341
 
3342
  if (opts & 0x2) {       /* optionally clear the elements */
3342
  if (opts & 0x2) {       /* optionally clear the elements */
3343
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3343
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3344
  }
3344
  }
3345
 
3345
 
3346
  /* If not provided, allocate the pointer array as final part of chunk */
3346
  /* If not provided, allocate the pointer array as final part of chunk */
3347
  if (marray == 0) {
3347
  if (marray == 0) {
3348
    size_t  array_chunk_size;
3348
    size_t  array_chunk_size;
3349
    array_chunk = chunk_plus_offset(p, contents_size);
3349
    array_chunk = chunk_plus_offset(p, contents_size);
3350
    array_chunk_size = remainder_size - contents_size;
3350
    array_chunk_size = remainder_size - contents_size;
3351
    marray = (void**) (chunk2mem(array_chunk));
3351
    marray = (void**) (chunk2mem(array_chunk));
3352
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3352
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3353
    remainder_size = contents_size;
3353
    remainder_size = contents_size;
3354
  }
3354
  }
3355
 
3355
 
3356
  /* split out elements */
3356
  /* split out elements */
3357
  for (i = 0; ; ++i) {
3357
  for (i = 0; ; ++i) {
3358
    marray[i] = chunk2mem(p);
3358
    marray[i] = chunk2mem(p);
3359
    if (i != n_elements-1) {
3359
    if (i != n_elements-1) {
3360
      if (element_size != 0)
3360
      if (element_size != 0)
3361
        size = element_size;
3361
        size = element_size;
3362
      else
3362
      else
3363
        size = request2size(sizes[i]);
3363
        size = request2size(sizes[i]);
3364
      remainder_size -= size;
3364
      remainder_size -= size;
3365
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3365
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3366
      p = chunk_plus_offset(p, size);
3366
      p = chunk_plus_offset(p, size);
3367
    }
3367
    }
3368
    else { /* the final element absorbs any overallocation slop */
3368
    else { /* the final element absorbs any overallocation slop */
3369
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3369
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3370
      break;
3370
      break;
3371
    }
3371
    }
3372
  }
3372
  }
3373
 
3373
 
3374
#if DEBUG
3374
#if DEBUG
3375
  if (marray != chunks) {
3375
  if (marray != chunks) {
3376
    /* final element must have exactly exhausted chunk */
3376
    /* final element must have exactly exhausted chunk */
3377
    if (element_size != 0) {
3377
    if (element_size != 0) {
3378
      assert(remainder_size == element_size);
3378
      assert(remainder_size == element_size);
3379
    }
3379
    }
3380
    else {
3380
    else {
3381
      assert(remainder_size == request2size(sizes[i]));
3381
      assert(remainder_size == request2size(sizes[i]));
3382
    }
3382
    }
3383
    check_inuse_chunk(m, mem2chunk(marray));
3383
    check_inuse_chunk(m, mem2chunk(marray));
3384
  }
3384
  }
3385
  for (i = 0; i != n_elements; ++i)
3385
  for (i = 0; i != n_elements; ++i)
3386
    check_inuse_chunk(m, mem2chunk(marray[i]));
3386
    check_inuse_chunk(m, mem2chunk(marray[i]));
3387
 
3387
 
3388
#endif /* DEBUG */
3388
#endif /* DEBUG */
3389
 
3389
 
3390
  POSTACTION(m);
3390
  POSTACTION(m);
3391
  return marray;
3391
  return marray;
3392
}
3392
}
3393
 
3393
 
3394
 
3394
 
3395
/* -------------------------- public routines ---------------------------- */
3395
/* -------------------------- public routines ---------------------------- */
3396
 
3396
 
3397
#if !ONLY_MSPACES
3397
#if !ONLY_MSPACES
3398
 
3398
 
3399
void* dlmalloc(size_t bytes) {
3399
void* dlmalloc(size_t bytes) {
3400
  /*
3400
  /*
3401
     Basic algorithm:
3401
     Basic algorithm:
3402
     If a small request (< 256 bytes minus per-chunk overhead):
3402
     If a small request (< 256 bytes minus per-chunk overhead):
3403
       1. If one exists, use a remainderless chunk in associated smallbin.
3403
       1. If one exists, use a remainderless chunk in associated smallbin.
3404
          (Remainderless means that there are too few excess bytes to
3404
          (Remainderless means that there are too few excess bytes to
3405
          represent as a chunk.)
3405
          represent as a chunk.)
3406
       2. If it is big enough, use the dv chunk, which is normally the
3406
       2. If it is big enough, use the dv chunk, which is normally the
3407
          chunk adjacent to the one used for the most recent small request.
3407
          chunk adjacent to the one used for the most recent small request.
3408
       3. If one exists, split the smallest available chunk in a bin,
3408
       3. If one exists, split the smallest available chunk in a bin,
3409
          saving remainder in dv.
3409
          saving remainder in dv.
3410
       4. If it is big enough, use the top chunk.
3410
       4. If it is big enough, use the top chunk.
3411
       5. If available, get memory from system and use it
3411
       5. If available, get memory from system and use it
3412
     Otherwise, for a large request:
3412
     Otherwise, for a large request:
3413
       1. Find the smallest available binned chunk that fits, and use it
3413
       1. Find the smallest available binned chunk that fits, and use it
3414
          if it is better fitting than dv chunk, splitting if necessary.
3414
          if it is better fitting than dv chunk, splitting if necessary.
3415
       2. If better fitting than any binned chunk, use the dv chunk.
3415
       2. If better fitting than any binned chunk, use the dv chunk.
3416
       3. If it is big enough, use the top chunk.
3416
       3. If it is big enough, use the top chunk.
3417
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3417
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3418
       5. If available, get memory from system and use it
3418
       5. If available, get memory from system and use it
3419
 
3419
 
3420
     The ugly goto's here ensure that postaction occurs along all paths.
3420
     The ugly goto's here ensure that postaction occurs along all paths.
3421
  */
3421
  */
3422
 
3422
 
3423
  if (!PREACTION(gm)) {
3423
  if (!PREACTION(gm)) {
3424
    void* mem;
3424
    void* mem;
3425
    size_t nb;
3425
    size_t nb;
3426
    if (bytes <= MAX_SMALL_REQUEST) {
3426
    if (bytes <= MAX_SMALL_REQUEST) {
3427
      bindex_t idx;
3427
      bindex_t idx;
3428
      binmap_t smallbits;
3428
      binmap_t smallbits;
3429
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3429
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3430
      idx = small_index(nb);
3430
      idx = small_index(nb);
3431
      smallbits = gm->smallmap >> idx;
3431
      smallbits = gm->smallmap >> idx;
3432
 
3432
 
3433
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3433
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3434
        mchunkptr b, p;
3434
        mchunkptr b, p;
3435
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3435
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3436
        b = smallbin_at(gm, idx);
3436
        b = smallbin_at(gm, idx);
3437
        p = b->fd;
3437
        p = b->fd;
3438
        assert(chunksize(p) == small_index2size(idx));
3438
        assert(chunksize(p) == small_index2size(idx));
3439
        unlink_first_small_chunk(gm, b, p, idx);
3439
        unlink_first_small_chunk(gm, b, p, idx);
3440
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3440
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3441
        mem = chunk2mem(p);
3441
        mem = chunk2mem(p);
3442
        check_malloced_chunk(gm, mem, nb);
3442
        check_malloced_chunk(gm, mem, nb);
3443
        goto postaction;
3443
        goto postaction;
3444
      }
3444
      }
3445
 
3445
 
3446
      else if (nb > gm->dvsize) {
3446
      else if (nb > gm->dvsize) {
3447
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3447
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3448
          mchunkptr b, p, r;
3448
          mchunkptr b, p, r;
3449
          size_t rsize;
3449
          size_t rsize;
3450
          bindex_t i;
3450
          bindex_t i;
3451
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3451
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3452
          binmap_t leastbit = least_bit(leftbits);
3452
          binmap_t leastbit = least_bit(leftbits);
3453
          compute_bit2idx(leastbit, i);
3453
          compute_bit2idx(leastbit, i);
3454
          b = smallbin_at(gm, i);
3454
          b = smallbin_at(gm, i);
3455
          p = b->fd;
3455
          p = b->fd;
3456
          assert(chunksize(p) == small_index2size(i));
3456
          assert(chunksize(p) == small_index2size(i));
3457
          unlink_first_small_chunk(gm, b, p, i);
3457
          unlink_first_small_chunk(gm, b, p, i);
3458
          rsize = small_index2size(i) - nb;
3458
          rsize = small_index2size(i) - nb;
3459
          /* Fit here cannot be remainderless if 4byte sizes */
3459
          /* Fit here cannot be remainderless if 4byte sizes */
3460
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3460
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3461
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3461
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3462
          else {
3462
          else {
3463
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3463
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3464
            r = chunk_plus_offset(p, nb);
3464
            r = chunk_plus_offset(p, nb);
3465
            set_size_and_pinuse_of_free_chunk(r, rsize);
3465
            set_size_and_pinuse_of_free_chunk(r, rsize);
3466
            replace_dv(gm, r, rsize);
3466
            replace_dv(gm, r, rsize);
3467
          }
3467
          }
3468
          mem = chunk2mem(p);
3468
          mem = chunk2mem(p);
3469
          check_malloced_chunk(gm, mem, nb);
3469
          check_malloced_chunk(gm, mem, nb);
3470
          goto postaction;
3470
          goto postaction;
3471
        }
3471
        }
3472
 
3472
 
3473
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3473
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3474
          check_malloced_chunk(gm, mem, nb);
3474
          check_malloced_chunk(gm, mem, nb);
3475
          goto postaction;
3475
          goto postaction;
3476
        }
3476
        }
3477
      }
3477
      }
3478
    }
3478
    }
3479
    else if (bytes >= MAX_REQUEST)
3479
    else if (bytes >= MAX_REQUEST)
3480
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3480
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3481
    else {
3481
    else {
3482
      nb = pad_request(bytes);
3482
      nb = pad_request(bytes);
3483
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3483
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3484
        check_malloced_chunk(gm, mem, nb);
3484
        check_malloced_chunk(gm, mem, nb);
3485
        goto postaction;
3485
        goto postaction;
3486
      }
3486
      }
3487
    }
3487
    }
3488
 
3488
 
3489
    if (nb <= gm->dvsize) {
3489
    if (nb <= gm->dvsize) {
3490
      size_t rsize = gm->dvsize - nb;
3490
      size_t rsize = gm->dvsize - nb;
3491
      mchunkptr p = gm->dv;
3491
      mchunkptr p = gm->dv;
3492
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3492
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3493
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3493
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3494
        gm->dvsize = rsize;
3494
        gm->dvsize = rsize;
3495
        set_size_and_pinuse_of_free_chunk(r, rsize);
3495
        set_size_and_pinuse_of_free_chunk(r, rsize);
3496
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3496
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3497
      }
3497
      }
3498
      else { /* exhaust dv */
3498
      else { /* exhaust dv */
3499
        size_t dvs = gm->dvsize;
3499
        size_t dvs = gm->dvsize;
3500
        gm->dvsize = 0;
3500
        gm->dvsize = 0;
3501
        gm->dv = 0;
3501
        gm->dv = 0;
3502
        set_inuse_and_pinuse(gm, p, dvs);
3502
        set_inuse_and_pinuse(gm, p, dvs);
3503
      }
3503
      }
3504
      mem = chunk2mem(p);
3504
      mem = chunk2mem(p);
3505
      check_malloced_chunk(gm, mem, nb);
3505
      check_malloced_chunk(gm, mem, nb);
3506
      goto postaction;
3506
      goto postaction;
3507
    }
3507
    }
3508
 
3508
 
3509
    else if (nb < gm->topsize) { /* Split top */
3509
    else if (nb < gm->topsize) { /* Split top */
3510
      size_t rsize = gm->topsize -= nb;
3510
      size_t rsize = gm->topsize -= nb;
3511
      mchunkptr p = gm->top;
3511
      mchunkptr p = gm->top;
3512
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3512
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3513
      r->head = rsize | PINUSE_BIT;
3513
      r->head = rsize | PINUSE_BIT;
3514
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3514
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3515
      mem = chunk2mem(p);
3515
      mem = chunk2mem(p);
3516
      check_top_chunk(gm, gm->top);
3516
      check_top_chunk(gm, gm->top);
3517
      check_malloced_chunk(gm, mem, nb);
3517
      check_malloced_chunk(gm, mem, nb);
3518
      goto postaction;
3518
      goto postaction;
3519
    }
3519
    }
3520
 
3520
 
3521
    mem = sys_alloc(gm, nb);
3521
    mem = sys_alloc(gm, nb);
3522
 
3522
 
3523
  postaction:
3523
  postaction:
3524
    POSTACTION(gm);
3524
    POSTACTION(gm);
3525
    return mem;
3525
    return mem;
3526
  }
3526
  }
3527
 
3527
 
3528
  return 0;
3528
  return 0;
3529
}
3529
}
3530
 
3530
 
3531
void dlfree(void* mem) {
3531
void dlfree(void* mem) {
3532
  /*
3532
  /*
3533
     Consolidate freed chunks with preceeding or succeeding bordering
3533
     Consolidate freed chunks with preceeding or succeeding bordering
3534
     free chunks, if they exist, and then place in a bin.  Intermixed
3534
     free chunks, if they exist, and then place in a bin.  Intermixed
3535
     with special cases for top, dv, mmapped chunks, and usage errors.
3535
     with special cases for top, dv, mmapped chunks, and usage errors.
3536
  */
3536
  */
3537
 
3537
 
3538
  if (mem != 0) {
3538
  if (mem != 0) {
3539
    mchunkptr p  = mem2chunk(mem);
3539
    mchunkptr p  = mem2chunk(mem);
3540
#if FOOTERS
3540
#if FOOTERS
3541
    mstate fm = get_mstate_for(p);
3541
    mstate fm = get_mstate_for(p);
3542
    if (!ok_magic(fm)) {
3542
    if (!ok_magic(fm)) {
3543
      USAGE_ERROR_ACTION(fm, p);
3543
      USAGE_ERROR_ACTION(fm, p);
3544
      return;
3544
      return;
3545
    }
3545
    }
3546
#else /* FOOTERS */
3546
#else /* FOOTERS */
3547
#define fm gm
3547
#define fm gm
3548
#endif /* FOOTERS */
3548
#endif /* FOOTERS */
3549
    if (!PREACTION(fm)) {
3549
    if (!PREACTION(fm)) {
3550
      check_inuse_chunk(fm, p);
3550
      check_inuse_chunk(fm, p);
3551
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3551
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3552
        size_t psize = chunksize(p);
3552
        size_t psize = chunksize(p);
3553
        mchunkptr next = chunk_plus_offset(p, psize);
3553
        mchunkptr next = chunk_plus_offset(p, psize);
3554
        if (!pinuse(p)) {
3554
        if (!pinuse(p)) {
3555
          size_t prevsize = p->prev_foot;
3555
          size_t prevsize = p->prev_foot;
3556
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3556
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3557
            prevsize &= ~IS_MMAPPED_BIT;
3557
            prevsize &= ~IS_MMAPPED_BIT;
3558
            psize += prevsize + MMAP_FOOT_PAD;
3558
            psize += prevsize + MMAP_FOOT_PAD;
3559
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3559
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3560
              fm->footprint -= psize;
3560
              fm->footprint -= psize;
3561
            goto postaction;
3561
            goto postaction;
3562
          }
3562
          }
3563
          else {
3563
          else {
3564
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3564
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3565
            psize += prevsize;
3565
            psize += prevsize;
3566
            p = prev;
3566
            p = prev;
3567
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3567
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3568
              if (p != fm->dv) {
3568
              if (p != fm->dv) {
3569
                unlink_chunk(fm, p, prevsize);
3569
                unlink_chunk(fm, p, prevsize);
3570
              }
3570
              }
3571
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3571
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3572
                fm->dvsize = psize;
3572
                fm->dvsize = psize;
3573
                set_free_with_pinuse(p, psize, next);
3573
                set_free_with_pinuse(p, psize, next);
3574
                goto postaction;
3574
                goto postaction;
3575
              }
3575
              }
3576
            }
3576
            }
3577
            else
3577
            else
3578
              goto erroraction;
3578
              goto erroraction;
3579
          }
3579
          }
3580
        }
3580
        }
3581
 
3581
 
3582
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3582
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3583
          if (!cinuse(next)) {  /* consolidate forward */
3583
          if (!cinuse(next)) {  /* consolidate forward */
3584
            if (next == fm->top) {
3584
            if (next == fm->top) {
3585
              size_t tsize = fm->topsize += psize;
3585
              size_t tsize = fm->topsize += psize;
3586
              fm->top = p;
3586
              fm->top = p;
3587
              p->head = tsize | PINUSE_BIT;
3587
              p->head = tsize | PINUSE_BIT;
3588
              if (p == fm->dv) {
3588
              if (p == fm->dv) {
3589
                fm->dv = 0;
3589
                fm->dv = 0;
3590
                fm->dvsize = 0;
3590
                fm->dvsize = 0;
3591
              }
3591
              }
3592
              if (should_trim(fm, tsize))
3592
              if (should_trim(fm, tsize))
3593
                sys_trim(fm, 0);
3593
                sys_trim(fm, 0);
3594
              goto postaction;
3594
              goto postaction;
3595
            }
3595
            }
3596
            else if (next == fm->dv) {
3596
            else if (next == fm->dv) {
3597
              size_t dsize = fm->dvsize += psize;
3597
              size_t dsize = fm->dvsize += psize;
3598
              fm->dv = p;
3598
              fm->dv = p;
3599
              set_size_and_pinuse_of_free_chunk(p, dsize);
3599
              set_size_and_pinuse_of_free_chunk(p, dsize);
3600
              goto postaction;
3600
              goto postaction;
3601
            }
3601
            }
3602
            else {
3602
            else {
3603
              size_t nsize = chunksize(next);
3603
              size_t nsize = chunksize(next);
3604
              psize += nsize;
3604
              psize += nsize;
3605
              unlink_chunk(fm, next, nsize);
3605
              unlink_chunk(fm, next, nsize);
3606
              set_size_and_pinuse_of_free_chunk(p, psize);
3606
              set_size_and_pinuse_of_free_chunk(p, psize);
3607
              if (p == fm->dv) {
3607
              if (p == fm->dv) {
3608
                fm->dvsize = psize;
3608
                fm->dvsize = psize;
3609
                goto postaction;
3609
                goto postaction;
3610
              }
3610
              }
3611
            }
3611
            }
3612
          }
3612
          }
3613
          else
3613
          else
3614
            set_free_with_pinuse(p, psize, next);
3614
            set_free_with_pinuse(p, psize, next);
3615
          insert_chunk(fm, p, psize);
3615
          insert_chunk(fm, p, psize);
3616
          check_free_chunk(fm, p);
3616
          check_free_chunk(fm, p);
3617
          goto postaction;
3617
          goto postaction;
3618
        }
3618
        }
3619
      }
3619
      }
3620
    erroraction:
3620
    erroraction:
3621
      USAGE_ERROR_ACTION(fm, p);
3621
      USAGE_ERROR_ACTION(fm, p);
3622
    postaction:
3622
    postaction:
3623
      POSTACTION(fm);
3623
      POSTACTION(fm);
3624
    }
3624
    }
3625
  }
3625
  }
3626
#if !FOOTERS
3626
#if !FOOTERS
3627
#undef fm
3627
#undef fm
3628
#endif /* FOOTERS */
3628
#endif /* FOOTERS */
3629
}
3629
}
3630
 
3630
 
3631
void* dlcalloc(size_t n_elements, size_t elem_size) {
3631
void* dlcalloc(size_t n_elements, size_t elem_size) {
3632
  void* mem;
3632
  void* mem;
3633
  size_t req = 0;
3633
  size_t req = 0;
3634
  if (n_elements != 0) {
3634
  if (n_elements != 0) {
3635
    req = n_elements * elem_size;
3635
    req = n_elements * elem_size;
3636
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3636
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3637
        (req / n_elements != elem_size))
3637
        (req / n_elements != elem_size))
3638
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3638
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3639
  }
3639
  }
3640
  mem = dlmalloc(req);
3640
  mem = dlmalloc(req);
3641
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3641
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3642
    memset(mem, 0, req);
3642
    memset(mem, 0, req);
3643
  return mem;
3643
  return mem;
3644
}
3644
}
3645
 
3645
 
3646
void* dlrealloc(void* oldmem, size_t bytes) {
3646
void* dlrealloc(void* oldmem, size_t bytes) {
3647
  if (oldmem == 0)
3647
  if (oldmem == 0)
3648
    return dlmalloc(bytes);
3648
    return dlmalloc(bytes);
3649
#ifdef REALLOC_ZERO_BYTES_FREES
3649
#ifdef REALLOC_ZERO_BYTES_FREES
3650
  if (bytes == 0) {
3650
  if (bytes == 0) {
3651
    dlfree(oldmem);
3651
    dlfree(oldmem);
3652
    return 0;
3652
    return 0;
3653
  }
3653
  }
3654
#endif /* REALLOC_ZERO_BYTES_FREES */
3654
#endif /* REALLOC_ZERO_BYTES_FREES */
3655
  else {
3655
  else {
3656
#if ! FOOTERS
3656
#if ! FOOTERS
3657
    mstate m = gm;
3657
    mstate m = gm;
3658
#else /* FOOTERS */
3658
#else /* FOOTERS */
3659
    mstate m = get_mstate_for(mem2chunk(oldmem));
3659
    mstate m = get_mstate_for(mem2chunk(oldmem));
3660
    if (!ok_magic(m)) {
3660
    if (!ok_magic(m)) {
3661
      USAGE_ERROR_ACTION(m, oldmem);
3661
      USAGE_ERROR_ACTION(m, oldmem);
3662
      return 0;
3662
      return 0;
3663
    }
3663
    }
3664
#endif /* FOOTERS */
3664
#endif /* FOOTERS */
3665
    return internal_realloc(m, oldmem, bytes);
3665
    return internal_realloc(m, oldmem, bytes);
3666
  }
3666
  }
3667
}
3667
}
3668
 
3668
 
3669
void* dlmemalign(size_t alignment, size_t bytes) {
3669
void* dlmemalign(size_t alignment, size_t bytes) {
3670
  return internal_memalign(gm, alignment, bytes);
3670
  return internal_memalign(gm, alignment, bytes);
3671
}
3671
}
3672
 
3672
 
3673
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3673
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3674
                                 void* chunks[]) {
3674
                                 void* chunks[]) {
3675
  size_t sz = elem_size; /* serves as 1-element array */
3675
  size_t sz = elem_size; /* serves as 1-element array */
3676
  return ialloc(gm, n_elements, &sz, 3, chunks);
3676
  return ialloc(gm, n_elements, &sz, 3, chunks);
3677
}
3677
}
3678
 
3678
 
3679
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3679
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3680
                                   void* chunks[]) {
3680
                                   void* chunks[]) {
3681
  return ialloc(gm, n_elements, sizes, 0, chunks);
3681
  return ialloc(gm, n_elements, sizes, 0, chunks);
3682
}
3682
}
3683
 
3683
 
3684
void* dlvalloc(size_t bytes) {
3684
void* dlvalloc(size_t bytes) {
3685
  size_t pagesz;
3685
  size_t pagesz;
3686
  init_mparams();
3686
  init_mparams();
3687
  pagesz = mparams.page_size;
3687
  pagesz = mparams.page_size;
3688
  return dlmemalign(pagesz, bytes);
3688
  return dlmemalign(pagesz, bytes);
3689
}
3689
}
3690
 
3690
 
3691
void* dlpvalloc(size_t bytes) {
3691
void* dlpvalloc(size_t bytes) {
3692
  size_t pagesz;
3692
  size_t pagesz;
3693
  init_mparams();
3693
  init_mparams();
3694
  pagesz = mparams.page_size;
3694
  pagesz = mparams.page_size;
3695
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3695
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3696
}
3696
}
3697
 
3697
 
3698
int dlmalloc_trim(size_t pad) {
3698
int dlmalloc_trim(size_t pad) {
3699
  int result = 0;
3699
  int result = 0;
3700
  if (!PREACTION(gm)) {
3700
  if (!PREACTION(gm)) {
3701
    result = sys_trim(gm, pad);
3701
    result = sys_trim(gm, pad);
3702
    POSTACTION(gm);
3702
    POSTACTION(gm);
3703
  }
3703
  }
3704
  return result;
3704
  return result;
3705
}
3705
}
3706
 
3706
 
3707
size_t dlmalloc_footprint(void) {
3707
size_t dlmalloc_footprint(void) {
3708
  return gm->footprint;
3708
  return gm->footprint;
3709
}
3709
}
3710
 
3710
 
3711
size_t dlmalloc_max_footprint(void) {
3711
size_t dlmalloc_max_footprint(void) {
3712
  return gm->max_footprint;
3712
  return gm->max_footprint;
3713
}
3713
}
3714
 
3714
 
3715
#if !NO_MALLINFO
3715
#if !NO_MALLINFO
3716
struct mallinfo dlmallinfo(void) {
3716
struct mallinfo dlmallinfo(void) {
3717
  return internal_mallinfo(gm);
3717
  return internal_mallinfo(gm);
3718
}
3718
}
3719
#endif /* NO_MALLINFO */
3719
#endif /* NO_MALLINFO */
3720
 
3720
 
3721
void dlmalloc_stats() {
3721
void dlmalloc_stats() {
3722
  internal_malloc_stats(gm);
3722
  internal_malloc_stats(gm);
3723
}
3723
}
3724
 
3724
 
3725
size_t dlmalloc_usable_size(void* mem) {
3725
size_t dlmalloc_usable_size(void* mem) {
3726
  if (mem != 0) {
3726
  if (mem != 0) {
3727
    mchunkptr p = mem2chunk(mem);
3727
    mchunkptr p = mem2chunk(mem);
3728
    if (cinuse(p))
3728
    if (cinuse(p))
3729
      return chunksize(p) - overhead_for(p);
3729
      return chunksize(p) - overhead_for(p);
3730
  }
3730
  }
3731
  return 0;
3731
  return 0;
3732
}
3732
}
3733
 
3733
 
3734
int dlmallopt(int param_number, int value) {
3734
int dlmallopt(int param_number, int value) {
3735
  return change_mparam(param_number, value);
3735
  return change_mparam(param_number, value);
3736
}
3736
}
3737
 
3737
 
3738
#endif /* !ONLY_MSPACES */
3738
#endif /* !ONLY_MSPACES */
3739
 
3739
 
3740
/* ----------------------------- user mspaces ---------------------------- */
3740
/* ----------------------------- user mspaces ---------------------------- */
3741
 
3741
 
3742
#if MSPACES
3742
#if MSPACES
3743
 
3743
 
3744
static mstate init_user_mstate(char* tbase, size_t tsize) {
3744
static mstate init_user_mstate(char* tbase, size_t tsize) {
3745
  size_t msize = pad_request(sizeof(struct malloc_state));
3745
  size_t msize = pad_request(sizeof(struct malloc_state));
3746
  mchunkptr mn;
3746
  mchunkptr mn;
3747
  mchunkptr msp = align_as_chunk(tbase);
3747
  mchunkptr msp = align_as_chunk(tbase);
3748
  mstate m = (mstate)(chunk2mem(msp));
3748
  mstate m = (mstate)(chunk2mem(msp));
3749
  memset(m, 0, msize);
3749
  memset(m, 0, msize);
3750
  INITIAL_LOCK(&m->mutex);
3750
  INITIAL_LOCK(&m->mutex);
3751
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
3751
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
3752
  m->seg.base = m->least_addr = tbase;
3752
  m->seg.base = m->least_addr = tbase;
3753
  m->seg.size = m->footprint = m->max_footprint = tsize;
3753
  m->seg.size = m->footprint = m->max_footprint = tsize;
3754
  m->magic = mparams.magic;
3754
  m->magic = mparams.magic;
3755
  m->mflags = mparams.default_mflags;
3755
  m->mflags = mparams.default_mflags;
3756
  disable_contiguous(m);
3756
  disable_contiguous(m);
3757
  init_bins(m);
3757
  init_bins(m);
3758
  mn = next_chunk(mem2chunk(m));
3758
  mn = next_chunk(mem2chunk(m));
3759
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
3759
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
3760
  check_top_chunk(m, m->top);
3760
  check_top_chunk(m, m->top);
3761
  return m;
3761
  return m;
3762
}
3762
}
3763
 
3763
 
3764
mspace create_mspace(size_t capacity, int locked) {
3764
mspace create_mspace(size_t capacity, int locked) {
3765
  mstate m = 0;
3765
  mstate m = 0;
3766
  size_t msize = pad_request(sizeof(struct malloc_state));
3766
  size_t msize = pad_request(sizeof(struct malloc_state));
3767
  init_mparams(); /* Ensure pagesize etc initialized */
3767
  init_mparams(); /* Ensure pagesize etc initialized */
3768
 
3768
 
3769
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3769
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3770
    size_t rs = ((capacity == 0)? mparams.granularity :
3770
    size_t rs = ((capacity == 0)? mparams.granularity :
3771
                 (capacity + TOP_FOOT_SIZE + msize));
3771
                 (capacity + TOP_FOOT_SIZE + msize));
3772
    size_t tsize = granularity_align(rs);
3772
    size_t tsize = granularity_align(rs);
3773
    char* tbase = (char*)(CALL_MMAP(tsize));
3773
    char* tbase = (char*)(CALL_MMAP(tsize));
3774
    if (tbase != CMFAIL) {
3774
    if (tbase != CMFAIL) {
3775
      m = init_user_mstate(tbase, tsize);
3775
      m = init_user_mstate(tbase, tsize);
3776
      m->seg.sflags = IS_MMAPPED_BIT;
3776
      m->seg.sflags = IS_MMAPPED_BIT;
3777
      set_lock(m, locked);
3777
      set_lock(m, locked);
3778
    }
3778
    }
3779
  }
3779
  }
3780
  return (mspace)m;
3780
  return (mspace)m;
3781
}
3781
}
3782
 
3782
 
3783
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
3783
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
3784
  mstate m = 0;
3784
  mstate m = 0;
3785
  size_t msize = pad_request(sizeof(struct malloc_state));
3785
  size_t msize = pad_request(sizeof(struct malloc_state));
3786
  init_mparams(); /* Ensure pagesize etc initialized */
3786
  init_mparams(); /* Ensure pagesize etc initialized */
3787
 
3787
 
3788
  if (capacity > msize + TOP_FOOT_SIZE &&
3788
  if (capacity > msize + TOP_FOOT_SIZE &&
3789
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3789
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
3790
    m = init_user_mstate((char*)base, capacity);
3790
    m = init_user_mstate((char*)base, capacity);
3791
    m->seg.sflags = EXTERN_BIT;
3791
    m->seg.sflags = EXTERN_BIT;
3792
    set_lock(m, locked);
3792
    set_lock(m, locked);
3793
  }
3793
  }
3794
  return (mspace)m;
3794
  return (mspace)m;
3795
}
3795
}
3796
 
3796
 
3797
size_t destroy_mspace(mspace msp) {
3797
size_t destroy_mspace(mspace msp) {
3798
  size_t freed = 0;
3798
  size_t freed = 0;
3799
  mstate ms = (mstate)msp;
3799
  mstate ms = (mstate)msp;
3800
  if (ok_magic(ms)) {
3800
  if (ok_magic(ms)) {
3801
    msegmentptr sp = &ms->seg;
3801
    msegmentptr sp = &ms->seg;
3802
    while (sp != 0) {
3802
    while (sp != 0) {
3803
      char* base = sp->base;
3803
      char* base = sp->base;
3804
      size_t size = sp->size;
3804
      size_t size = sp->size;
3805
      flag_t flag = sp->sflags;
3805
      flag_t flag = sp->sflags;
3806
      sp = sp->next;
3806
      sp = sp->next;
3807
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
3807
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
3808
          CALL_MUNMAP(base, size) == 0)
3808
          CALL_MUNMAP(base, size) == 0)
3809
        freed += size;
3809
        freed += size;
3810
    }
3810
    }
3811
  }
3811
  }
3812
  else {
3812
  else {
3813
    USAGE_ERROR_ACTION(ms,ms);
3813
    USAGE_ERROR_ACTION(ms,ms);
3814
  }
3814
  }
3815
  return freed;
3815
  return freed;
3816
}
3816
}
3817
 
3817
 
3818
/*
3818
/*
3819
  mspace versions of routines are near-clones of the global
3819
  mspace versions of routines are near-clones of the global
3820
  versions. This is not so nice but better than the alternatives.
3820
  versions. This is not so nice but better than the alternatives.
3821
*/
3821
*/
3822
 
3822
 
3823
 
3823
 
3824
void* mspace_malloc(mspace msp, size_t bytes) {
3824
void* mspace_malloc(mspace msp, size_t bytes) {
3825
  mstate ms = (mstate)msp;
3825
  mstate ms = (mstate)msp;
3826
  if (!ok_magic(ms)) {
3826
  if (!ok_magic(ms)) {
3827
    USAGE_ERROR_ACTION(ms,ms);
3827
    USAGE_ERROR_ACTION(ms,ms);
3828
    return 0;
3828
    return 0;
3829
  }
3829
  }
3830
  if (!PREACTION(ms)) {
3830
  if (!PREACTION(ms)) {
3831
    void* mem;
3831
    void* mem;
3832
    size_t nb;
3832
    size_t nb;
3833
    if (bytes <= MAX_SMALL_REQUEST) {
3833
    if (bytes <= MAX_SMALL_REQUEST) {
3834
      bindex_t idx;
3834
      bindex_t idx;
3835
      binmap_t smallbits;
3835
      binmap_t smallbits;
3836
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3836
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3837
      idx = small_index(nb);
3837
      idx = small_index(nb);
3838
      smallbits = ms->smallmap >> idx;
3838
      smallbits = ms->smallmap >> idx;
3839
 
3839
 
3840
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3840
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3841
        mchunkptr b, p;
3841
        mchunkptr b, p;
3842
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3842
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3843
        b = smallbin_at(ms, idx);
3843
        b = smallbin_at(ms, idx);
3844
        p = b->fd;
3844
        p = b->fd;
3845
        assert(chunksize(p) == small_index2size(idx));
3845
        assert(chunksize(p) == small_index2size(idx));
3846
        unlink_first_small_chunk(ms, b, p, idx);
3846
        unlink_first_small_chunk(ms, b, p, idx);
3847
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
3847
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
3848
        mem = chunk2mem(p);
3848
        mem = chunk2mem(p);
3849
        check_malloced_chunk(ms, mem, nb);
3849
        check_malloced_chunk(ms, mem, nb);
3850
        goto postaction;
3850
        goto postaction;
3851
      }
3851
      }
3852
 
3852
 
3853
      else if (nb > ms->dvsize) {
3853
      else if (nb > ms->dvsize) {
3854
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3854
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3855
          mchunkptr b, p, r;
3855
          mchunkptr b, p, r;
3856
          size_t rsize;
3856
          size_t rsize;
3857
          bindex_t i;
3857
          bindex_t i;
3858
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3858
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3859
          binmap_t leastbit = least_bit(leftbits);
3859
          binmap_t leastbit = least_bit(leftbits);
3860
          compute_bit2idx(leastbit, i);
3860
          compute_bit2idx(leastbit, i);
3861
          b = smallbin_at(ms, i);
3861
          b = smallbin_at(ms, i);
3862
          p = b->fd;
3862
          p = b->fd;
3863
          assert(chunksize(p) == small_index2size(i));
3863
          assert(chunksize(p) == small_index2size(i));
3864
          unlink_first_small_chunk(ms, b, p, i);
3864
          unlink_first_small_chunk(ms, b, p, i);
3865
          rsize = small_index2size(i) - nb;
3865
          rsize = small_index2size(i) - nb;
3866
          /* Fit here cannot be remainderless if 4byte sizes */
3866
          /* Fit here cannot be remainderless if 4byte sizes */
3867
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3867
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3868
            set_inuse_and_pinuse(ms, p, small_index2size(i));
3868
            set_inuse_and_pinuse(ms, p, small_index2size(i));
3869
          else {
3869
          else {
3870
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3870
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3871
            r = chunk_plus_offset(p, nb);
3871
            r = chunk_plus_offset(p, nb);
3872
            set_size_and_pinuse_of_free_chunk(r, rsize);
3872
            set_size_and_pinuse_of_free_chunk(r, rsize);
3873
            replace_dv(ms, r, rsize);
3873
            replace_dv(ms, r, rsize);
3874
          }
3874
          }
3875
          mem = chunk2mem(p);
3875
          mem = chunk2mem(p);
3876
          check_malloced_chunk(ms, mem, nb);
3876
          check_malloced_chunk(ms, mem, nb);
3877
          goto postaction;
3877
          goto postaction;
3878
        }
3878
        }
3879
 
3879
 
3880
        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
3880
        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
3881
          check_malloced_chunk(ms, mem, nb);
3881
          check_malloced_chunk(ms, mem, nb);
3882
          goto postaction;
3882
          goto postaction;
3883
        }
3883
        }
3884
      }
3884
      }
3885
    }
3885
    }
3886
    else if (bytes >= MAX_REQUEST)
3886
    else if (bytes >= MAX_REQUEST)
3887
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3887
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3888
    else {
3888
    else {
3889
      nb = pad_request(bytes);
3889
      nb = pad_request(bytes);
3890
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
3890
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
3891
        check_malloced_chunk(ms, mem, nb);
3891
        check_malloced_chunk(ms, mem, nb);
3892
        goto postaction;
3892
        goto postaction;
3893
      }
3893
      }
3894
    }
3894
    }
3895
 
3895
 
3896
    if (nb <= ms->dvsize) {
3896
    if (nb <= ms->dvsize) {
3897
      size_t rsize = ms->dvsize - nb;
3897
      size_t rsize = ms->dvsize - nb;
3898
      mchunkptr p = ms->dv;
3898
      mchunkptr p = ms->dv;
3899
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3899
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3900
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
3900
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
3901
        ms->dvsize = rsize;
3901
        ms->dvsize = rsize;
3902
        set_size_and_pinuse_of_free_chunk(r, rsize);
3902
        set_size_and_pinuse_of_free_chunk(r, rsize);
3903
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3903
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3904
      }
3904
      }
3905
      else { /* exhaust dv */
3905
      else { /* exhaust dv */
3906
        size_t dvs = ms->dvsize;
3906
        size_t dvs = ms->dvsize;
3907
        ms->dvsize = 0;
3907
        ms->dvsize = 0;
3908
        ms->dv = 0;
3908
        ms->dv = 0;
3909
        set_inuse_and_pinuse(ms, p, dvs);
3909
        set_inuse_and_pinuse(ms, p, dvs);
3910
      }
3910
      }
3911
      mem = chunk2mem(p);
3911
      mem = chunk2mem(p);
3912
      check_malloced_chunk(ms, mem, nb);
3912
      check_malloced_chunk(ms, mem, nb);
3913
      goto postaction;
3913
      goto postaction;
3914
    }
3914
    }
3915
 
3915
 
3916
    else if (nb < ms->topsize) { /* Split top */
3916
    else if (nb < ms->topsize) { /* Split top */
3917
      size_t rsize = ms->topsize -= nb;
3917
      size_t rsize = ms->topsize -= nb;
3918
      mchunkptr p = ms->top;
3918
      mchunkptr p = ms->top;
3919
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
3919
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
3920
      r->head = rsize | PINUSE_BIT;
3920
      r->head = rsize | PINUSE_BIT;
3921
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3921
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
3922
      mem = chunk2mem(p);
3922
      mem = chunk2mem(p);
3923
      check_top_chunk(ms, ms->top);
3923
      check_top_chunk(ms, ms->top);
3924
      check_malloced_chunk(ms, mem, nb);
3924
      check_malloced_chunk(ms, mem, nb);
3925
      goto postaction;
3925
      goto postaction;
3926
    }
3926
    }
3927
 
3927
 
3928
    mem = sys_alloc(ms, nb);
3928
    mem = sys_alloc(ms, nb);
3929
 
3929
 
3930
  postaction:
3930
  postaction:
3931
    POSTACTION(ms);
3931
    POSTACTION(ms);
3932
    return mem;
3932
    return mem;
3933
  }
3933
  }
3934
 
3934
 
3935
  return 0;
3935
  return 0;
3936
}
3936
}
3937
 
3937
 
3938
void mspace_free(mspace msp, void* mem) {
3938
void mspace_free(mspace msp, void* mem) {
3939
  if (mem != 0) {
3939
  if (mem != 0) {
3940
    mchunkptr p  = mem2chunk(mem);
3940
    mchunkptr p  = mem2chunk(mem);
3941
#if FOOTERS
3941
#if FOOTERS
3942
    mstate fm = get_mstate_for(p);
3942
    mstate fm = get_mstate_for(p);
3943
#else /* FOOTERS */
3943
#else /* FOOTERS */
3944
    mstate fm = (mstate)msp;
3944
    mstate fm = (mstate)msp;
3945
#endif /* FOOTERS */
3945
#endif /* FOOTERS */
3946
    if (!ok_magic(fm)) {
3946
    if (!ok_magic(fm)) {
3947
      USAGE_ERROR_ACTION(fm, p);
3947
      USAGE_ERROR_ACTION(fm, p);
3948
      return;
3948
      return;
3949
    }
3949
    }
3950
    if (!PREACTION(fm)) {
3950
    if (!PREACTION(fm)) {
3951
      check_inuse_chunk(fm, p);
3951
      check_inuse_chunk(fm, p);
3952
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3952
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3953
        size_t psize = chunksize(p);
3953
        size_t psize = chunksize(p);
3954
        mchunkptr next = chunk_plus_offset(p, psize);
3954
        mchunkptr next = chunk_plus_offset(p, psize);
3955
        if (!pinuse(p)) {
3955
        if (!pinuse(p)) {
3956
          size_t prevsize = p->prev_foot;
3956
          size_t prevsize = p->prev_foot;
3957
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3957
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3958
            prevsize &= ~IS_MMAPPED_BIT;
3958
            prevsize &= ~IS_MMAPPED_BIT;
3959
            psize += prevsize + MMAP_FOOT_PAD;
3959
            psize += prevsize + MMAP_FOOT_PAD;
3960
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3960
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3961
              fm->footprint -= psize;
3961
              fm->footprint -= psize;
3962
            goto postaction;
3962
            goto postaction;
3963
          }
3963
          }
3964
          else {
3964
          else {
3965
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3965
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3966
            psize += prevsize;
3966
            psize += prevsize;
3967
            p = prev;
3967
            p = prev;
3968
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3968
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3969
              if (p != fm->dv) {
3969
              if (p != fm->dv) {
3970
                unlink_chunk(fm, p, prevsize);
3970
                unlink_chunk(fm, p, prevsize);
3971
              }
3971
              }
3972
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3972
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3973
                fm->dvsize = psize;
3973
                fm->dvsize = psize;
3974
                set_free_with_pinuse(p, psize, next);
3974
                set_free_with_pinuse(p, psize, next);
3975
                goto postaction;
3975
                goto postaction;
3976
              }
3976
              }
3977
            }
3977
            }
3978
            else
3978
            else
3979
              goto erroraction;
3979
              goto erroraction;
3980
          }
3980
          }
3981
        }
3981
        }
3982
 
3982
 
3983
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3983
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3984
          if (!cinuse(next)) {  /* consolidate forward */
3984
          if (!cinuse(next)) {  /* consolidate forward */
3985
            if (next == fm->top) {
3985
            if (next == fm->top) {
3986
              size_t tsize = fm->topsize += psize;
3986
              size_t tsize = fm->topsize += psize;
3987
              fm->top = p;
3987
              fm->top = p;
3988
              p->head = tsize | PINUSE_BIT;
3988
              p->head = tsize | PINUSE_BIT;
3989
              if (p == fm->dv) {
3989
              if (p == fm->dv) {
3990
                fm->dv = 0;
3990
                fm->dv = 0;
3991
                fm->dvsize = 0;
3991
                fm->dvsize = 0;
3992
              }
3992
              }
3993
              if (should_trim(fm, tsize))
3993
              if (should_trim(fm, tsize))
3994
                sys_trim(fm, 0);
3994
                sys_trim(fm, 0);
3995
              goto postaction;
3995
              goto postaction;
3996
            }
3996
            }
3997
            else if (next == fm->dv) {
3997
            else if (next == fm->dv) {
3998
              size_t dsize = fm->dvsize += psize;
3998
              size_t dsize = fm->dvsize += psize;
3999
              fm->dv = p;
3999
              fm->dv = p;
4000
              set_size_and_pinuse_of_free_chunk(p, dsize);
4000
              set_size_and_pinuse_of_free_chunk(p, dsize);
4001
              goto postaction;
4001
              goto postaction;
4002
            }
4002
            }
4003
            else {
4003
            else {
4004
              size_t nsize = chunksize(next);
4004
              size_t nsize = chunksize(next);
4005
              psize += nsize;
4005
              psize += nsize;
4006
              unlink_chunk(fm, next, nsize);
4006
              unlink_chunk(fm, next, nsize);
4007
              set_size_and_pinuse_of_free_chunk(p, psize);
4007
              set_size_and_pinuse_of_free_chunk(p, psize);
4008
              if (p == fm->dv) {
4008
              if (p == fm->dv) {
4009
                fm->dvsize = psize;
4009
                fm->dvsize = psize;
4010
                goto postaction;
4010
                goto postaction;
4011
              }
4011
              }
4012
            }
4012
            }
4013
          }
4013
          }
4014
          else
4014
          else
4015
            set_free_with_pinuse(p, psize, next);
4015
            set_free_with_pinuse(p, psize, next);
4016
          insert_chunk(fm, p, psize);
4016
          insert_chunk(fm, p, psize);
4017
          check_free_chunk(fm, p);
4017
          check_free_chunk(fm, p);
4018
          goto postaction;
4018
          goto postaction;
4019
        }
4019
        }
4020
      }
4020
      }
4021
    erroraction:
4021
    erroraction:
4022
      USAGE_ERROR_ACTION(fm, p);
4022
      USAGE_ERROR_ACTION(fm, p);
4023
    postaction:
4023
    postaction:
4024
      POSTACTION(fm);
4024
      POSTACTION(fm);
4025
    }
4025
    }
4026
  }
4026
  }
4027
}
4027
}
4028
 
4028
 
4029
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4029
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4030
  void* mem;
4030
  void* mem;
4031
  size_t req = 0;
4031
  size_t req = 0;
4032
  mstate ms = (mstate)msp;
4032
  mstate ms = (mstate)msp;
4033
  if (!ok_magic(ms)) {
4033
  if (!ok_magic(ms)) {
4034
    USAGE_ERROR_ACTION(ms,ms);
4034
    USAGE_ERROR_ACTION(ms,ms);
4035
    return 0;
4035
    return 0;
4036
  }
4036
  }
4037
  if (n_elements != 0) {
4037
  if (n_elements != 0) {
4038
    req = n_elements * elem_size;
4038
    req = n_elements * elem_size;
4039
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4039
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4040
        (req / n_elements != elem_size))
4040
        (req / n_elements != elem_size))
4041
      req = MAX_SIZE_T; /* force downstream failure on overflow */
4041
      req = MAX_SIZE_T; /* force downstream failure on overflow */
4042
  }
4042
  }
4043
  mem = internal_malloc(ms, req);
4043
  mem = internal_malloc(ms, req);
4044
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4044
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4045
    memset(mem, 0, req);
4045
    memset(mem, 0, req);
4046
  return mem;
4046
  return mem;
4047
}
4047
}
4048
 
4048
 
4049
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4049
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4050
  if (oldmem == 0)
4050
  if (oldmem == 0)
4051
    return mspace_malloc(msp, bytes);
4051
    return mspace_malloc(msp, bytes);
4052
#ifdef REALLOC_ZERO_BYTES_FREES
4052
#ifdef REALLOC_ZERO_BYTES_FREES
4053
  if (bytes == 0) {
4053
  if (bytes == 0) {
4054
    mspace_free(msp, oldmem);
4054
    mspace_free(msp, oldmem);
4055
    return 0;
4055
    return 0;
4056
  }
4056
  }
4057
#endif /* REALLOC_ZERO_BYTES_FREES */
4057
#endif /* REALLOC_ZERO_BYTES_FREES */
4058
  else {
4058
  else {
4059
#if FOOTERS
4059
#if FOOTERS
4060
    mchunkptr p  = mem2chunk(oldmem);
4060
    mchunkptr p  = mem2chunk(oldmem);
4061
    mstate ms = get_mstate_for(p);
4061
    mstate ms = get_mstate_for(p);
4062
#else /* FOOTERS */
4062
#else /* FOOTERS */
4063
    mstate ms = (mstate)msp;
4063
    mstate ms = (mstate)msp;
4064
#endif /* FOOTERS */
4064
#endif /* FOOTERS */
4065
    if (!ok_magic(ms)) {
4065
    if (!ok_magic(ms)) {
4066
      USAGE_ERROR_ACTION(ms,ms);
4066
      USAGE_ERROR_ACTION(ms,ms);
4067
      return 0;
4067
      return 0;
4068
    }
4068
    }
4069
    return internal_realloc(ms, oldmem, bytes);
4069
    return internal_realloc(ms, oldmem, bytes);
4070
  }
4070
  }
4071
}
4071
}
4072
 
4072
 
4073
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4073
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4074
  mstate ms = (mstate)msp;
4074
  mstate ms = (mstate)msp;
4075
  if (!ok_magic(ms)) {
4075
  if (!ok_magic(ms)) {
4076
    USAGE_ERROR_ACTION(ms,ms);
4076
    USAGE_ERROR_ACTION(ms,ms);
4077
    return 0;
4077
    return 0;
4078
  }
4078
  }
4079
  return internal_memalign(ms, alignment, bytes);
4079
  return internal_memalign(ms, alignment, bytes);
4080
}
4080
}
4081
 
4081
 
4082
void** mspace_independent_calloc(mspace msp, size_t n_elements,
4082
void** mspace_independent_calloc(mspace msp, size_t n_elements,
4083
                                 size_t elem_size, void* chunks[]) {
4083
                                 size_t elem_size, void* chunks[]) {
4084
  size_t sz = elem_size; /* serves as 1-element array */
4084
  size_t sz = elem_size; /* serves as 1-element array */
4085
  mstate ms = (mstate)msp;
4085
  mstate ms = (mstate)msp;
4086
  if (!ok_magic(ms)) {
4086
  if (!ok_magic(ms)) {
4087
    USAGE_ERROR_ACTION(ms,ms);
4087
    USAGE_ERROR_ACTION(ms,ms);
4088
    return 0;
4088
    return 0;
4089
  }
4089
  }
4090
  return ialloc(ms, n_elements, &sz, 3, chunks);
4090
  return ialloc(ms, n_elements, &sz, 3, chunks);
4091
}
4091
}
4092
 
4092
 
4093
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4093
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4094
                                   size_t sizes[], void* chunks[]) {
4094
                                   size_t sizes[], void* chunks[]) {
4095
  mstate ms = (mstate)msp;
4095
  mstate ms = (mstate)msp;
4096
  if (!ok_magic(ms)) {
4096
  if (!ok_magic(ms)) {
4097
    USAGE_ERROR_ACTION(ms,ms);
4097
    USAGE_ERROR_ACTION(ms,ms);
4098
    return 0;
4098
    return 0;
4099
  }
4099
  }
4100
  return ialloc(ms, n_elements, sizes, 0, chunks);
4100
  return ialloc(ms, n_elements, sizes, 0, chunks);
4101
}
4101
}
4102
 
4102
 
4103
int mspace_trim(mspace msp, size_t pad) {
4103
int mspace_trim(mspace msp, size_t pad) {
4104
  int result = 0;
4104
  int result = 0;
4105
  mstate ms = (mstate)msp;
4105
  mstate ms = (mstate)msp;
4106
  if (ok_magic(ms)) {
4106
  if (ok_magic(ms)) {
4107
    if (!PREACTION(ms)) {
4107
    if (!PREACTION(ms)) {
4108
      result = sys_trim(ms, pad);
4108
      result = sys_trim(ms, pad);
4109
      POSTACTION(ms);
4109
      POSTACTION(ms);
4110
    }
4110
    }
4111
  }
4111
  }
4112
  else {
4112
  else {
4113
    USAGE_ERROR_ACTION(ms,ms);
4113
    USAGE_ERROR_ACTION(ms,ms);
4114
  }
4114
  }
4115
  return result;
4115
  return result;
4116
}
4116
}
4117
 
4117
 
4118
void mspace_malloc_stats(mspace msp) {
4118
void mspace_malloc_stats(mspace msp) {
4119
  mstate ms = (mstate)msp;
4119
  mstate ms = (mstate)msp;
4120
  if (ok_magic(ms)) {
4120
  if (ok_magic(ms)) {
4121
    internal_malloc_stats(ms);
4121
    internal_malloc_stats(ms);
4122
  }
4122
  }
4123
  else {
4123
  else {
4124
    USAGE_ERROR_ACTION(ms,ms);
4124
    USAGE_ERROR_ACTION(ms,ms);
4125
  }
4125
  }
4126
}
4126
}
4127
 
4127
 
4128
size_t mspace_footprint(mspace msp) {
4128
size_t mspace_footprint(mspace msp) {
4129
  size_t result;
4129
  size_t result;
4130
  mstate ms = (mstate)msp;
4130
  mstate ms = (mstate)msp;
4131
  if (ok_magic(ms)) {
4131
  if (ok_magic(ms)) {
4132
    result = ms->footprint;
4132
    result = ms->footprint;
4133
  }
4133
  }
4134
  USAGE_ERROR_ACTION(ms,ms);
4134
  USAGE_ERROR_ACTION(ms,ms);
4135
  return result;
4135
  return result;
4136
}
4136
}
4137
 
4137
 
4138
 
4138
 
4139
size_t mspace_max_footprint(mspace msp) {
4139
size_t mspace_max_footprint(mspace msp) {
4140
  size_t result;
4140
  size_t result;
4141
  mstate ms = (mstate)msp;
4141
  mstate ms = (mstate)msp;
4142
  if (ok_magic(ms)) {
4142
  if (ok_magic(ms)) {
4143
    result = ms->max_footprint;
4143
    result = ms->max_footprint;
4144
  }
4144
  }
4145
  USAGE_ERROR_ACTION(ms,ms);
4145
  USAGE_ERROR_ACTION(ms,ms);
4146
  return result;
4146
  return result;
4147
}
4147
}
4148
 
4148
 
4149
 
4149
 
4150
#if !NO_MALLINFO
4150
#if !NO_MALLINFO
4151
struct mallinfo mspace_mallinfo(mspace msp) {
4151
struct mallinfo mspace_mallinfo(mspace msp) {
4152
  mstate ms = (mstate)msp;
4152
  mstate ms = (mstate)msp;
4153
  if (!ok_magic(ms)) {
4153
  if (!ok_magic(ms)) {
4154
    USAGE_ERROR_ACTION(ms,ms);
4154
    USAGE_ERROR_ACTION(ms,ms);
4155
  }
4155
  }
4156
  return internal_mallinfo(ms);
4156
  return internal_mallinfo(ms);
4157
}
4157
}
4158
#endif /* NO_MALLINFO */
4158
#endif /* NO_MALLINFO */
4159
 
4159
 
4160
int mspace_mallopt(int param_number, int value) {
4160
int mspace_mallopt(int param_number, int value) {
4161
  return change_mparam(param_number, value);
4161
  return change_mparam(param_number, value);
4162
}
4162
}
4163
 
4163
 
4164
#endif /* MSPACES */
4164
#endif /* MSPACES */
4165
 
4165
 
4166
/* -------------------- Alternative MORECORE functions ------------------- */
4166
/* -------------------- Alternative MORECORE functions ------------------- */
4167
 
4167
 
4168
/*
4168
/*
4169
  Guidelines for creating a custom version of MORECORE:
4169
  Guidelines for creating a custom version of MORECORE:
4170
 
4170
 
4171
  * For best performance, MORECORE should allocate in multiples of pagesize.
4171
  * For best performance, MORECORE should allocate in multiples of pagesize.
4172
  * MORECORE may allocate more memory than requested. (Or even less,
4172
  * MORECORE may allocate more memory than requested. (Or even less,
4173
      but this will usually result in a malloc failure.)
4173
      but this will usually result in a malloc failure.)
4174
  * MORECORE must not allocate memory when given argument zero, but
4174
  * MORECORE must not allocate memory when given argument zero, but
4175
      instead return one past the end address of memory from previous
4175
      instead return one past the end address of memory from previous
4176
      nonzero call.
4176
      nonzero call.
4177
  * For best performance, consecutive calls to MORECORE with positive
4177
  * For best performance, consecutive calls to MORECORE with positive
4178
      arguments should return increasing addresses, indicating that
4178
      arguments should return increasing addresses, indicating that
4179
      space has been contiguously extended.
4179
      space has been contiguously extended.
4180
  * Even though consecutive calls to MORECORE need not return contiguous
4180
  * Even though consecutive calls to MORECORE need not return contiguous
4181
      addresses, it must be OK for malloc'ed chunks to span multiple
4181
      addresses, it must be OK for malloc'ed chunks to span multiple
4182
      regions in those cases where they do happen to be contiguous.
4182
      regions in those cases where they do happen to be contiguous.
4183
  * MORECORE need not handle negative arguments -- it may instead
4183
  * MORECORE need not handle negative arguments -- it may instead
4184
      just return MFAIL when given negative arguments.
4184
      just return MFAIL when given negative arguments.
4185
      Negative arguments are always multiples of pagesize. MORECORE
4185
      Negative arguments are always multiples of pagesize. MORECORE
4186
      must not misinterpret negative args as large positive unsigned
4186
      must not misinterpret negative args as large positive unsigned
4187
      args. You can suppress all such calls from even occurring by defining
4187
      args. You can suppress all such calls from even occurring by defining
4188
      MORECORE_CANNOT_TRIM,
4188
      MORECORE_CANNOT_TRIM,
4189
 
4189
 
4190
  As an example alternative MORECORE, here is a custom allocator
4190
  As an example alternative MORECORE, here is a custom allocator
4191
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4191
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4192
  necessarily physically contiguous non-paged memory (locked in,
4192
  necessarily physically contiguous non-paged memory (locked in,
4193
  present and won't get swapped out).  You can use it by uncommenting
4193
  present and won't get swapped out).  You can use it by uncommenting
4194
  this section, adding some #includes, and setting up the appropriate
4194
  this section, adding some #includes, and setting up the appropriate
4195
  defines above:
4195
  defines above:
4196
 
4196
 
4197
      #define MORECORE osMoreCore
4197
      #define MORECORE osMoreCore
4198
 
4198
 
4199
  There is also a shutdown routine that should somehow be called for
4199
  There is also a shutdown routine that should somehow be called for
4200
  cleanup upon program exit.
4200
  cleanup upon program exit.
4201
 
4201
 
4202
  #define MAX_POOL_ENTRIES 100
4202
  #define MAX_POOL_ENTRIES 100
4203
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4203
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4204
  static int next_os_pool;
4204
  static int next_os_pool;
4205
  void *our_os_pools[MAX_POOL_ENTRIES];
4205
  void *our_os_pools[MAX_POOL_ENTRIES];
4206
 
4206
 
4207
  void *osMoreCore(int size)
4207
  void *osMoreCore(int size)
4208
  {
4208
  {
4209
    void *ptr = 0;
4209
    void *ptr = 0;
4210
    static void *sbrk_top = 0;
4210
    static void *sbrk_top = 0;
4211
 
4211
 
4212
    if (size > 0)
4212
    if (size > 0)
4213
    {
4213
    {
4214
      if (size < MINIMUM_MORECORE_SIZE)
4214
      if (size < MINIMUM_MORECORE_SIZE)
4215
         size = MINIMUM_MORECORE_SIZE;
4215
         size = MINIMUM_MORECORE_SIZE;
4216
      if (CurrentExecutionLevel() == kTaskLevel)
4216
      if (CurrentExecutionLevel() == kTaskLevel)
4217
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4217
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4218
      if (ptr == 0)
4218
      if (ptr == 0)
4219
      {
4219
      {
4220
        return (void *) MFAIL;
4220
        return (void *) MFAIL;
4221
      }
4221
      }
4222
      // save ptrs so they can be freed during cleanup
4222
      // save ptrs so they can be freed during cleanup
4223
      our_os_pools[next_os_pool] = ptr;
4223
      our_os_pools[next_os_pool] = ptr;
4224
      next_os_pool++;
4224
      next_os_pool++;
4225
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4225
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4226
      sbrk_top = (char *) ptr + size;
4226
      sbrk_top = (char *) ptr + size;
4227
      return ptr;
4227
      return ptr;
4228
    }
4228
    }
4229
    else if (size < 0)
4229
    else if (size < 0)
4230
    {
4230
    {
4231
      // we don't currently support shrink behavior
4231
      // we don't currently support shrink behavior
4232
      return (void *) MFAIL;
4232
      return (void *) MFAIL;
4233
    }
4233
    }
4234
    else
4234
    else
4235
    {
4235
    {
4236
      return sbrk_top;
4236
      return sbrk_top;
4237
    }
4237
    }
4238
  }
4238
  }
4239
 
4239
 
4240
  // cleanup any allocated memory pools
4240
  // cleanup any allocated memory pools
4241
  // called as last thing before shutting down driver
4241
  // called as last thing before shutting down driver
4242
 
4242
 
4243
  void osCleanupMem(void)
4243
  void osCleanupMem(void)
4244
  {
4244
  {
4245
    void **ptr;
4245
    void **ptr;
4246
 
4246
 
4247
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4247
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4248
      if (*ptr)
4248
      if (*ptr)
4249
      {
4249
      {
4250
         PoolDeallocate(*ptr);
4250
         PoolDeallocate(*ptr);
4251
         *ptr = 0;
4251
         *ptr = 0;
4252
      }
4252
      }
4253
  }
4253
  }
4254
 
4254
 
4255
*/
4255
*/
4256
 
4256
 
4257
 
4257
 
4258
/* -----------------------------------------------------------------------
4258
/* -----------------------------------------------------------------------
4259
History:
4259
History:
4260
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4260
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4261
      * Add max_footprint functions
4261
      * Add max_footprint functions
4262
      * Ensure all appropriate literals are size_t
4262
      * Ensure all appropriate literals are size_t
4263
      * Fix conditional compilation problem for some #define settings
4263
      * Fix conditional compilation problem for some #define settings
4264
      * Avoid concatenating segments with the one provided
4264
      * Avoid concatenating segments with the one provided
4265
        in create_mspace_with_base
4265
        in create_mspace_with_base
4266
      * Rename some variables to avoid compiler shadowing warnings
4266
      * Rename some variables to avoid compiler shadowing warnings
4267
      * Use explicit lock initialization.
4267
      * Use explicit lock initialization.
4268
      * Better handling of sbrk interference.
4268
      * Better handling of sbrk interference.
4269
      * Simplify and fix segment insertion, trimming and mspace_destroy
4269
      * Simplify and fix segment insertion, trimming and mspace_destroy
4270
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4270
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4271
      * Thanks especially to Dennis Flanagan for help on these.
4271
      * Thanks especially to Dennis Flanagan for help on these.
4272
 
4272
 
4273
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4273
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4274
      * Fix memalign brace error.
4274
      * Fix memalign brace error.
4275
 
4275
 
4276
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4276
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4277
      * Fix improper #endif nesting in C++
4277
      * Fix improper #endif nesting in C++
4278
      * Add explicit casts needed for C++
4278
      * Add explicit casts needed for C++
4279
 
4279
 
4280
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
4280
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
4281
      * Use trees for large bins
4281
      * Use trees for large bins
4282
      * Support mspaces
4282
      * Support mspaces
4283
      * Use segments to unify sbrk-based and mmap-based system allocation,
4283
      * Use segments to unify sbrk-based and mmap-based system allocation,
4284
        removing need for emulation on most platforms without sbrk.
4284
        removing need for emulation on most platforms without sbrk.
4285
      * Default safety checks
4285
      * Default safety checks
4286
      * Optional footer checks. Thanks to William Robertson for the idea.
4286
      * Optional footer checks. Thanks to William Robertson for the idea.
4287
      * Internal code refactoring
4287
      * Internal code refactoring
4288
      * Incorporate suggestions and platform-specific changes.
4288
      * Incorporate suggestions and platform-specific changes.
4289
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4289
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4290
        Aaron Bachmann,  Emery Berger, and others.
4290
        Aaron Bachmann,  Emery Berger, and others.
4291
      * Speed up non-fastbin processing enough to remove fastbins.
4291
      * Speed up non-fastbin processing enough to remove fastbins.
4292
      * Remove useless cfree() to avoid conflicts with other apps.
4292
      * Remove useless cfree() to avoid conflicts with other apps.
4293
      * Remove internal memcpy, memset. Compilers handle builtins better.
4293
      * Remove internal memcpy, memset. Compilers handle builtins better.
4294
      * Remove some options that no one ever used and rename others.
4294
      * Remove some options that no one ever used and rename others.
4295
 
4295
 
4296
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
4296
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
4297
      * Fix malloc_state bitmap array misdeclaration
4297
      * Fix malloc_state bitmap array misdeclaration
4298
 
4298
 
4299
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
4299
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
4300
      * Allow tuning of FIRST_SORTED_BIN_SIZE
4300
      * Allow tuning of FIRST_SORTED_BIN_SIZE
4301
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4301
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4302
      * Better detection and support for non-contiguousness of MORECORE.
4302
      * Better detection and support for non-contiguousness of MORECORE.
4303
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4303
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4304
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
4304
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
4305
      * Fix freeing of old top non-contiguous chunk im sysmalloc.
4305
      * Fix freeing of old top non-contiguous chunk im sysmalloc.
4306
      * Raised default trim and map thresholds to 256K.
4306
      * Raised default trim and map thresholds to 256K.
4307
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
4307
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
4308
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4308
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4309
      * Branch-free bin calculation
4309
      * Branch-free bin calculation
4310
      * Default trim and mmap thresholds now 256K.
4310
      * Default trim and mmap thresholds now 256K.
4311
 
4311
 
4312
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
4312
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
4313
      * Introduce independent_comalloc and independent_calloc.
4313
      * Introduce independent_comalloc and independent_calloc.
4314
        Thanks to Michael Pachos for motivation and help.
4314
        Thanks to Michael Pachos for motivation and help.
4315
      * Make optional .h file available
4315
      * Make optional .h file available
4316
      * Allow > 2GB requests on 32bit systems.
4316
      * Allow > 2GB requests on 32bit systems.
4317
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4317
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4318
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4318
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4319
        and Anonymous.
4319
        and Anonymous.
4320
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4320
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4321
        helping test this.)
4321
        helping test this.)
4322
      * memalign: check alignment arg
4322
      * memalign: check alignment arg
4323
      * realloc: don't try to shift chunks backwards, since this
4323
      * realloc: don't try to shift chunks backwards, since this
4324
        leads to  more fragmentation in some programs and doesn't
4324
        leads to  more fragmentation in some programs and doesn't
4325
        seem to help in any others.
4325
        seem to help in any others.
4326
      * Collect all cases in malloc requiring system memory into sysmalloc
4326
      * Collect all cases in malloc requiring system memory into sysmalloc
4327
      * Use mmap as backup to sbrk
4327
      * Use mmap as backup to sbrk
4328
      * Place all internal state in malloc_state
4328
      * Place all internal state in malloc_state
4329
      * Introduce fastbins (although similar to 2.5.1)
4329
      * Introduce fastbins (although similar to 2.5.1)
4330
      * Many minor tunings and cosmetic improvements
4330
      * Many minor tunings and cosmetic improvements
4331
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4331
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4332
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4332
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4333
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4333
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4334
      * Include errno.h to support default failure action.
4334
      * Include errno.h to support default failure action.
4335
 
4335
 
4336
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
4336
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
4337
      * return null for negative arguments
4337
      * return null for negative arguments
4338
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
4338
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
4339
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
4339
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
4340
          (e.g. WIN32 platforms)
4340
          (e.g. WIN32 platforms)
4341
         * Cleanup header file inclusion for WIN32 platforms
4341
         * Cleanup header file inclusion for WIN32 platforms
4342
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
4342
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
4343
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
4343
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
4344
           memory allocation routines
4344
           memory allocation routines
4345
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
4345
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
4346
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
4346
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
4347
           usage of 'assert' in non-WIN32 code
4347
           usage of 'assert' in non-WIN32 code
4348
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
4348
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
4349
           avoid infinite loop
4349
           avoid infinite loop
4350
      * Always call 'fREe()' rather than 'free()'
4350
      * Always call 'fREe()' rather than 'free()'
4351
 
4351
 
4352
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
4352
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
4353
      * Fixed ordering problem with boundary-stamping
4353
      * Fixed ordering problem with boundary-stamping
4354
 
4354
 
4355
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
4355
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
4356
      * Added pvalloc, as recommended by H.J. Liu
4356
      * Added pvalloc, as recommended by H.J. Liu
4357
      * Added 64bit pointer support mainly from Wolfram Gloger
4357
      * Added 64bit pointer support mainly from Wolfram Gloger
4358
      * Added anonymously donated WIN32 sbrk emulation
4358
      * Added anonymously donated WIN32 sbrk emulation
4359
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4359
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4360
      * malloc_extend_top: fix mask error that caused wastage after
4360
      * malloc_extend_top: fix mask error that caused wastage after
4361
        foreign sbrks
4361
        foreign sbrks
4362
      * Add linux mremap support code from HJ Liu
4362
      * Add linux mremap support code from HJ Liu
4363
 
4363
 
4364
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
4364
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
4365
      * Integrated most documentation with the code.
4365
      * Integrated most documentation with the code.
4366
      * Add support for mmap, with help from
4366
      * Add support for mmap, with help from
4367
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4367
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4368
      * Use last_remainder in more cases.
4368
      * Use last_remainder in more cases.
4369
      * Pack bins using idea from  colin@nyx10.cs.du.edu
4369
      * Pack bins using idea from  colin@nyx10.cs.du.edu
4370
      * Use ordered bins instead of best-fit threshhold
4370
      * Use ordered bins instead of best-fit threshhold
4371
      * Eliminate block-local decls to simplify tracing and debugging.
4371
      * Eliminate block-local decls to simplify tracing and debugging.
4372
      * Support another case of realloc via move into top
4372
      * Support another case of realloc via move into top
4373
      * Fix error occuring when initial sbrk_base not word-aligned.
4373
      * Fix error occuring when initial sbrk_base not word-aligned.
4374
      * Rely on page size for units instead of SBRK_UNIT to
4374
      * Rely on page size for units instead of SBRK_UNIT to
4375
        avoid surprises about sbrk alignment conventions.
4375
        avoid surprises about sbrk alignment conventions.
4376
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4376
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4377
        (raymond@es.ele.tue.nl) for the suggestion.
4377
        (raymond@es.ele.tue.nl) for the suggestion.
4378
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4378
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4379
      * More precautions for cases where other routines call sbrk,
4379
      * More precautions for cases where other routines call sbrk,
4380
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4380
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4381
      * Added macros etc., allowing use in linux libc from
4381
      * Added macros etc., allowing use in linux libc from
4382
        H.J. Lu (hjl@gnu.ai.mit.edu)
4382
        H.J. Lu (hjl@gnu.ai.mit.edu)
4383
      * Inverted this history list
4383
      * Inverted this history list
4384
 
4384
 
4385
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
4385
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
4386
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4386
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4387
      * Removed all preallocation code since under current scheme
4387
      * Removed all preallocation code since under current scheme
4388
        the work required to undo bad preallocations exceeds
4388
        the work required to undo bad preallocations exceeds
4389
        the work saved in good cases for most test programs.
4389
        the work saved in good cases for most test programs.
4390
      * No longer use return list or unconsolidated bins since
4390
      * No longer use return list or unconsolidated bins since
4391
        no scheme using them consistently outperforms those that don't
4391
        no scheme using them consistently outperforms those that don't
4392
        given above changes.
4392
        given above changes.
4393
      * Use best fit for very large chunks to prevent some worst-cases.
4393
      * Use best fit for very large chunks to prevent some worst-cases.
4394
      * Added some support for debugging
4394
      * Added some support for debugging
4395
 
4395
 
4396
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
4396
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
4397
      * Removed footers when chunks are in use. Thanks to
4397
      * Removed footers when chunks are in use. Thanks to
4398
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4398
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4399
 
4399
 
4400
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
4400
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
4401
      * Added malloc_trim, with help from Wolfram Gloger
4401
      * Added malloc_trim, with help from Wolfram Gloger
4402
        (wmglo@Dent.MED.Uni-Muenchen.DE).
4402
        (wmglo@Dent.MED.Uni-Muenchen.DE).
4403
 
4403
 
4404
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
4404
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
4405
 
4405
 
4406
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
4406
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
4407
      * realloc: try to expand in both directions
4407
      * realloc: try to expand in both directions
4408
      * malloc: swap order of clean-bin strategy;
4408
      * malloc: swap order of clean-bin strategy;
4409
      * realloc: only conditionally expand backwards
4409
      * realloc: only conditionally expand backwards
4410
      * Try not to scavenge used bins
4410
      * Try not to scavenge used bins
4411
      * Use bin counts as a guide to preallocation
4411
      * Use bin counts as a guide to preallocation
4412
      * Occasionally bin return list chunks in first scan
4412
      * Occasionally bin return list chunks in first scan
4413
      * Add a few optimizations from colin@nyx10.cs.du.edu
4413
      * Add a few optimizations from colin@nyx10.cs.du.edu
4414
 
4414
 
4415
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
4415
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
4416
      * faster bin computation & slightly different binning
4416
      * faster bin computation & slightly different binning
4417
      * merged all consolidations to one part of malloc proper
4417
      * merged all consolidations to one part of malloc proper
4418
         (eliminating old malloc_find_space & malloc_clean_bin)
4418
         (eliminating old malloc_find_space & malloc_clean_bin)
4419
      * Scan 2 returns chunks (not just 1)
4419
      * Scan 2 returns chunks (not just 1)
4420
      * Propagate failure in realloc if malloc returns 0
4420
      * Propagate failure in realloc if malloc returns 0
4421
      * Add stuff to allow compilation on non-ANSI compilers
4421
      * Add stuff to allow compilation on non-ANSI compilers
4422
          from kpv@research.att.com
4422
          from kpv@research.att.com
4423
 
4423
 
4424
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
4424
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
4425
      * removed potential for odd address access in prev_chunk
4425
      * removed potential for odd address access in prev_chunk
4426
      * removed dependency on getpagesize.h
4426
      * removed dependency on getpagesize.h
4427
      * misc cosmetics and a bit more internal documentation
4427
      * misc cosmetics and a bit more internal documentation
4428
      * anticosmetics: mangled names in macros to evade debugger strangeness
4428
      * anticosmetics: mangled names in macros to evade debugger strangeness
4429
      * tested on sparc, hp-700, dec-mips, rs6000
4429
      * tested on sparc, hp-700, dec-mips, rs6000
4430
          with gcc & native cc (hp, dec only) allowing
4430
          with gcc & native cc (hp, dec only) allowing
4431
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4431
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4432
 
4432
 
4433
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
4433
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
4434
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4434
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4435
         structure of old version,  but most details differ.)
4435
         structure of old version,  but most details differ.)
4436
 
4436
 
4437
*/
4437
*/
4438
 
4438