/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.
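
  For example, a typical build and link on a unix-like system might
  look like this (an illustrative sketch; "myprog.c" is a placeholder):
    cc -O3 -c malloc.c
    cc -O3 myprog.c malloc.o -o myprog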

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
                                          8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., the number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.
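       (For example, with 4-byte pointers, a malloc(5) request occupies
       one 16-byte minimum-size chunk, 11 bytes more than was requested.)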

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       the heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default, detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.
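
  A fuller lifecycle sketch (illustrative only; note that this HelenOS
  port sets MSPACES to 0 below, so the mspace entry points are
  compiled out here):
    mspace ms = create_mspace(0, 0);  // 0,0: default capacity, no locking
    void* p = mspace_malloc(ms, 128);
    mspace_free(ms, p);
    destroy_mspace(ms);               // releases all memory the mspace obtained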

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
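
  For example, on a system with 32-bit int and 64-bit size_t
  (MY_THRESHOLD is a hypothetical option value):
    #define MY_THRESHOLD ((size_t)1U << 32)  // ok: the cast forces size_t width
    #define MY_THRESHOLD (1U << 32)          // wrong: shift exceeds the width of unsigned int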

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes the compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
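
  For example, with USE_DL_PREFIX defined, the public entry points
  become dlmalloc, dlfree, dlrealloc, etc.:
    void* p = dlmalloc(100);
    dlfree(p);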

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc cannot return
  memory because none is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
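
  A minimal sketch of a custom MORECORE serving requests from a static
  arena (illustrative only; the buffer, sizes, and names are hypothetical,
  and a real version would obtain memory from the OS):
    static char arena[1024 * 1024];
    static size_t used = 0;
    void* my_morecore(intptr_t increment) {
      if (increment >= 0 && used + (size_t)increment <= sizeof(arena)) {
        char* p = arena + used;
        used += (size_t)increment;
        return p;               // increment == 0 returns the current top
      }
      return (void*)-1;         // sbrk-style failure value
    }
  used together with:
    #define MORECORE my_morecore
    #define MORECORE_CANNOT_TRIM  // this sketch ignores negative increments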

MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous, though,
  saves the time and possibly wasted space it would otherwise take to
  discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from the system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory, so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize the secure magic seed
  for stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).
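
  For example, without REALLOC_ZERO_BYTES_FREES defined:
    void* p = malloc(8);
    p = realloc(p, 0);  // p now refers to a minimum-size chunk; it must still be freed
    free(p);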

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least
  one page and must be a power of two.  Setting to 0 causes
  initialization to either page size or win32 region size.  (Note: In
  previous versions of malloc, the equivalent of this option was
  called "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks), the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set it to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set it to MAX_SIZE_T. Note that
  the trick some people use of mallocing a huge space and then freeing
  it at program startup, in an attempt to reserve system memory,
  doesn't have the intended effect under automatic trimming, since
  that memory will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists, it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefit that mmapped space can always be individually released
  back to the system, which helps keep the system-level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting this to MAX_SIZE_T.

*/

/** @addtogroup libcmalloc malloc
 * @brief Malloc originally written by Doug Lea and ported to HelenOS.
 * @ingroup libc
 * @{
 */
/** @file
 */

#include <sys/types.h>  /* For size_t */

/** Non-default HelenOS customizations */
#define LACKS_FCNTL_H
#define LACKS_SYS_MMAN_H
#define LACKS_SYS_PARAM_H
#undef HAVE_MMAP
#define HAVE_MMAP 0
#define LACKS_ERRNO_H
/* Set errno? */
#undef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#define ONLY_MSPACES 0
#define MSPACES 0

#ifdef MALLOC_ALIGNMENT_16
#define MALLOC_ALIGNMENT ((size_t)16U)
#else
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif

#define FOOTERS 0
#define ABORT  abort()
#define ABORT_ON_ASSERT_FAILURE 1
#define PROCEED_ON_ERROR 0
#define USE_LOCKS 1
#define INSECURE 0
#define HAVE_MMAP 0

#define MMAP_CLEARS 1

#define HAVE_MORECORE 1
#define MORECORE_CONTIGUOUS 1
#define MORECORE sbrk
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */

#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)
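
/*
  For example, a program using this malloc could tune it at runtime
  (illustrative; mallopt returns nonzero on success):

    mallopt(M_TRIM_THRESHOLD, 128 * 1024);
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
*/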

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

#include "malloc.h"

/*------------------------------ internal #includes ---------------------- */

#include <stdio.h>       /* for printing in malloc_stats */
#include <string.h>

#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include <time.h>        /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) {if(!(x)) {printf(#x);ABORT;}}
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#define assert(x)
#endif /* DEBUG */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>    /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
568
#if HAVE_MORECORE
574
#if HAVE_MORECORE
569
#ifndef LACKS_UNISTD_H
575
#ifndef LACKS_UNISTD_H
570
#include <unistd.h>     /* for sbrk */
576
#include <unistd.h>     /* for sbrk */
571
#else /* LACKS_UNISTD_H */
577
#else /* LACKS_UNISTD_H */
572
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
578
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
573
extern void*     sbrk(ptrdiff_t);
579
extern void*     sbrk(ptrdiff_t);
574
#endif /* FreeBSD etc */
580
#endif /* FreeBSD etc */
575
#endif /* LACKS_UNISTD_H */
581
#endif /* LACKS_UNISTD_H */
576
#endif /* HAVE_MMAP */
582
#endif /* HAVE_MMAP */
577
 
583
 
578
#ifndef WIN32
584
#ifndef WIN32
579
#ifndef malloc_getpagesize
585
#ifndef malloc_getpagesize
580
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
586
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
581
#    ifndef _SC_PAGE_SIZE
587
#    ifndef _SC_PAGE_SIZE
582
#      define _SC_PAGE_SIZE _SC_PAGESIZE
588
#      define _SC_PAGE_SIZE _SC_PAGESIZE
583
#    endif
589
#    endif
584
#  endif
590
#  endif
585
#  ifdef _SC_PAGE_SIZE
591
#  ifdef _SC_PAGE_SIZE
586
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
592
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
587
#  else
593
#  else
588
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
594
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
589
       extern size_t getpagesize();
595
       extern size_t getpagesize();
590
#      define malloc_getpagesize getpagesize()
596
#      define malloc_getpagesize getpagesize()
591
#    else
597
#    else
592
#      ifdef WIN32 /* use supplied emulation of getpagesize */
598
#      ifdef WIN32 /* use supplied emulation of getpagesize */
593
#        define malloc_getpagesize getpagesize()
599
#        define malloc_getpagesize getpagesize()
594
#      else
600
#      else
595
#        ifndef LACKS_SYS_PARAM_H
601
#        ifndef LACKS_SYS_PARAM_H
596
#          include <sys/param.h>
602
#          include <sys/param.h>
597
#        endif
603
#        endif
598
#        ifdef EXEC_PAGESIZE
604
#        ifdef EXEC_PAGESIZE
599
#          define malloc_getpagesize EXEC_PAGESIZE
605
#          define malloc_getpagesize EXEC_PAGESIZE
600
#        else
606
#        else
601
#          ifdef NBPG
607
#          ifdef NBPG
602
#            ifndef CLSIZE
608
#            ifndef CLSIZE
603
#              define malloc_getpagesize NBPG
609
#              define malloc_getpagesize NBPG
604
#            else
610
#            else
605
#              define malloc_getpagesize (NBPG * CLSIZE)
611
#              define malloc_getpagesize (NBPG * CLSIZE)
606
#            endif
612
#            endif
607
#          else
613
#          else
608
#            ifdef NBPC
614
#            ifdef NBPC
609
#              define malloc_getpagesize NBPC
615
#              define malloc_getpagesize NBPC
610
#            else
616
#            else
611
#              ifdef PAGESIZE
617
#              ifdef PAGESIZE
612
#                define malloc_getpagesize PAGESIZE
618
#                define malloc_getpagesize PAGESIZE
613
#              else /* just guess */
619
#              else /* just guess */
614
#                define malloc_getpagesize ((size_t)4096U)
620
#                define malloc_getpagesize ((size_t)4096U)
615
#              endif
621
#              endif
616
#            endif
622
#            endif
617
#          endif
623
#          endif
618
#        endif
624
#        endif
619
#      endif
625
#      endif
620
#    endif
626
#    endif
621
#  endif
627
#  endif
622
#endif
628
#endif
623
#endif
629
#endif
624
 
630
 
625
/* ------------------- size_t and alignment properties -------------------- */
631
/* ------------------- size_t and alignment properties -------------------- */
626
 
632
 
627
/* The byte and bit size of a size_t */
633
/* The byte and bit size of a size_t */
628
#define SIZE_T_SIZE         (sizeof(size_t))
634
#define SIZE_T_SIZE         (sizeof(size_t))
629
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
635
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
630
 
636
 
631
/* Some constants coerced to size_t */
637
/* Some constants coerced to size_t */
632
/* Annoying but necessary to avoid errors on some plaftorms */
638
/* Annoying but necessary to avoid errors on some plaftorms */
633
#define SIZE_T_ZERO         ((size_t)0)
639
#define SIZE_T_ZERO         ((size_t)0)
634
#define SIZE_T_ONE          ((size_t)1)
640
#define SIZE_T_ONE          ((size_t)1)
635
#define SIZE_T_TWO          ((size_t)2)
641
#define SIZE_T_TWO          ((size_t)2)
636
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
642
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
637
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
643
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
638
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
644
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
639
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
645
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
640
 
646
 
641
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
647
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
642
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
648
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
643
 
649
 
644
/* True if address a has acceptable alignment */
650
/* True if address a has acceptable alignment */
645
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
651
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
646
 
652
 
647
/* the number of bytes to offset an address to align it */
653
/* the number of bytes to offset an address to align it */
648
#define align_offset(A)\
654
#define align_offset(A)\
649
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
655
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
650
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
656
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
651
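/*
  Worked example, assuming MALLOC_ALIGNMENT is 8 (so CHUNK_ALIGN_MASK
  is 7): an address ending in 0x...4 is 4 bytes past an 8-byte
  boundary, so align_offset yields (8 - 4) & 7 == 4, while an
  already-aligned address yields 0.
*/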
 
/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so the compiler optimizer can delete code rather than
   using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if !HAVE_MMAP
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
#define USE_MMAP_BIT         (SIZE_T_ZERO)
#define CALL_MMAP(s)         MFAIL
#define CALL_MUNMAP(a, s)    (-1)
#define DIRECT_MMAP(s)       MFAIL

#else /* HAVE_MMAP */
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
#define USE_MMAP_BIT         (SIZE_T_ONE)

#ifndef WIN32
#define CALL_MUNMAP(a, s)    munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP(s)       CALL_MMAP(s)
#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}

#define CALL_MMAP(s)         win32mmap(s)
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
#define DIRECT_MMAP(s)       win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */
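/*
  Whichever platform branch above is taken, callers (such as sys_alloc
  later in this file) test results against MFAIL/CMFAIL rather than
  MAP_FAILED or NULL, roughly like this sketch:

    char* base = (char*)(CALL_MMAP(msize));
    if (base != CMFAIL) {
      ... record and use the new region ...
    }
*/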
 
#if HAVE_MMAP && HAVE_MREMAP
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#else  /* HAVE_MMAP && HAVE_MREMAP */
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

#if HAVE_MORECORE
#define CALL_MORECORE(S)     MORECORE(S)
#else  /* HAVE_MORECORE */
#define CALL_MORECORE(S)     MFAIL
#endif /* HAVE_MORECORE */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

#if USE_LOCKS

/*
  When locks are defined, there are up to two global locks:

  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
    MORECORE.  In many cases sys_alloc requires two calls, which should
    not be interleaved with calls by other threads.  This does not
    protect against direct calls to MORECORE by other threads not
    using this lock, so there is still code to cope as best we can with
    interference.

  * magic_init_mutex ensures that mparams.magic and other
    unique mparams values are initialized only once.
*/

/* Use HelenOS futexes rather than the default POSIX locks */
#include <futex.h>
#define MLOCK_T atomic_t
#define INITIAL_LOCK(l)      futex_initialize(l, 1)
/* futex_down cannot fail, but can return different
 * retvals for OK
 */
#define ACQUIRE_LOCK(l)      ({futex_down(l);0;})
#define RELEASE_LOCK(l)      futex_up(l)

#if HAVE_MORECORE
static MLOCK_T morecore_mutex = FUTEX_INITIALIZER;
#endif /* HAVE_MORECORE */

static MLOCK_T magic_init_mutex = FUTEX_INITIALIZER;


#define USE_LOCK_BIT               (2U)
#else  /* USE_LOCKS */
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)
#endif /* USE_LOCKS */

#if USE_LOCKS && HAVE_MORECORE
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
#else /* USE_LOCKS && HAVE_MORECORE */
#define ACQUIRE_MORECORE_LOCK()
#define RELEASE_MORECORE_LOCK()
#endif /* USE_LOCKS && HAVE_MORECORE */

#if USE_LOCKS
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
#else  /* USE_LOCKS */
#define ACQUIRE_MAGIC_INIT_LOCK()
#define RELEASE_MAGIC_INIT_LOCK()
#endif /* USE_LOCKS */
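/*
  These macros are used in matched pairs. For instance, one-time
  initialization of mparams is guarded roughly like the following
  sketch (the actual code is in init_mparams further below):

    ACQUIRE_MAGIC_INIT_LOCK();
    if (mparams.magic == 0) {
      ... compute page size, thresholds, and mparams.magic ...
    }
    RELEASE_MAGIC_INIT_LOCK();
*/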
 

/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 1)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/
 
struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
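/*
  Worked example, assuming a 32-bit size_t, 8-byte MALLOC_ALIGNMENT,
  and FOOTERS disabled: MCHUNK_SIZE is 16, so MIN_CHUNK_SIZE is 16,
  CHUNK_OVERHEAD is 4, and MIN_REQUEST is 11.  Then:

    request2size(1)  == 16                      (small requests round up)
    request2size(12) == (12 + 4 + 7) & ~7 == 16
    request2size(13) == (13 + 4 + 7) & ~7 == 24
*/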
 

/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set; otherwise it holds the offset from the base of
  the mmapped region to the base of the chunk.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
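/*
  Worked example: a head value of 0x23 (binary 100011) describes a
  chunk of size 0x20 with both low bits set, so chunksize(p) == 0x20,
  and cinuse(p) and pinuse(p) are both nonzero: the chunk and its
  physically preceding neighbor are in use.
*/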
 
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
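/*
  Together with the pinuse bit, these support the O(1) backward
  coalescing described earlier. A sketch of how free() merges with a
  free predecessor:

    if (!pinuse(p)) {                    (prev_foot is valid only now)
      mchunkptr prev = prev_chunk(p);
      ... unlink prev from its bin and continue freeing the
          combined chunk starting at prev ...
    }
*/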
 
/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */
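/*
  calloc uses this to avoid redundant zeroing, since fresh mmapped
  pages are already zero-filled when MMAP_CLEARS. In essence (sketch):

    void* mem = malloc(req);
    if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
      memset(mem, 0, req);
*/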
 
/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small"  chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/
 
1162
 
1157
struct malloc_tree_chunk {
1163
struct malloc_tree_chunk {
1158
  /* The first four fields must be compatible with malloc_chunk */
1164
  /* The first four fields must be compatible with malloc_chunk */
1159
  size_t                    prev_foot;
1165
  size_t                    prev_foot;
1160
  size_t                    head;
1166
  size_t                    head;
1161
  struct malloc_tree_chunk* fd;
1167
  struct malloc_tree_chunk* fd;
1162
  struct malloc_tree_chunk* bk;
1168
  struct malloc_tree_chunk* bk;
1163
 
1169
 
1164
  struct malloc_tree_chunk* child[2];
1170
  struct malloc_tree_chunk* child[2];
1165
  struct malloc_tree_chunk* parent;
1171
  struct malloc_tree_chunk* parent;
1166
  bindex_t                  index;
1172
  bindex_t                  index;
1167
};
1173
};
1168
 
1174
 
1169
typedef struct malloc_tree_chunk  tchunk;
1175
typedef struct malloc_tree_chunk  tchunk;
1170
typedef struct malloc_tree_chunk* tchunkptr;
1176
typedef struct malloc_tree_chunk* tchunkptr;
1171
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1177
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1172
 
1178
 
1173
/* A little helper macro for trees */
1179
/* A little helper macro for trees */
1174
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1180
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
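
/*
  As an illustration of the leftmost-leaf walk just described, the
  following sketch (an addition for exposition, not part of the
  original source) locates the smallest-sized chunk in a nonempty
  tree rooted at t, assuming chunksize() as defined earlier in this
  file:

    static tchunkptr smallest_in_tree(tchunkptr t) {
      tchunkptr least = t;
      while ((t = leftmost_child(t)) != 0) {
        if (chunksize(t) < chunksize(least))
          least = t;    // smallest lies somewhere along this path
      }
      return least;
    }

  The allocation code later in this file interleaves this walk with
  its best-fit bookkeeping instead of calling a separate helper.
*/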

/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from the system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
    larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on a 64-bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE.

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
};

typedef struct malloc_state*    mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)
#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
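
/*
  A worked example of the alignment arithmetic above (for exposition
  only): with mparams.page_size == 4096,

    page_align(1)    == (1    + 4096) & ~4095 == 4096
    page_align(4095) == (4095 + 4096) & ~4095 == 4096
    page_align(4096) == (4096 + 4096) & ~4095 == 8192

  Because a full page is added before masking, an already-aligned
  size is rounded up to the next page boundary; the result is always
  page-aligned and never smaller than S.
*/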

/*  True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* -------------------------------  Hooks -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS

/* Ensure locks are initialized */
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())

#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */
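
/*
  The public entry points later in this file bracket their work with
  these hooks.  A minimal sketch of the pattern (for exposition;
  example_op and do_work are hypothetical names):

    void* example_op(mstate m, size_t bytes) {
      void* mem = 0;
      if (!PREACTION(m)) {   // 0 means the lock (if any) is now held
        mem = do_work(m, bytes);
        POSTACTION(m);       // releases the lock when locking is on
      }
      return mem;            // remains 0 if PREACTION failed
    }
*/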

/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */

/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))
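
/*
  Worked example of the indexing and repositioning above (for
  exposition only): with SMALLBIN_SHIFT == 3, a 40-byte chunk has
  small_index(40) == 5 and small_index2size(5) == 40.  smallbin_at
  returns &smallbins[i<<1] cast to a chunk pointer, so the header's
  fd/bk fields -- which sit two words into a malloc_chunk, after
  prev_foot and head -- actually occupy smallbins[(i<<1)+2] and
  smallbins[(i<<1)+3].  This overlap is why the array is declared
  with (NSMALLBINS+1)*2 entries even though only fd/bk are stored
  per bin.  Reading the first chunk of bin i then looks like:

    sbinptr   b = smallbin_at(M, i);  // bin header, not a real chunk
    mchunkptr p = b->fd;              // first chunk, or b if empty
    if (p != b) {
      // bin i is non-empty
    }
*/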

/* assign tree index for size S to variable I */
#if defined(__GNUC__) && defined(i386)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */
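
/*
  Both variants above compute the same index: with K the position of
  the highest set bit of X = S >> TREEBIN_SHIFT, the tree index is
  I = 2*K + (the bit of S just below its leading one).  A worked
  example (for exposition only): for S == 0x300,

    X = 0x300 >> 8 == 3, so K == 1
    I = (1 << 1) + ((0x300 >> (1 + 7)) & 1) == 2 + 1 == 3

  which agrees with minsize_for_tree_index(3) ==
  (1 << 9) | (1 << 8) == 0x300 defined below.
*/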

/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))



/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* index corresponding to given bit */

#if defined(__GNUC__) && defined(i386)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#else /* GNUC */
#if  USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else /* USE_BUILTIN_FFS */
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* USE_BUILTIN_FFS */
#endif /* GNUC */

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
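
/*
  These masks combine to answer "what is the first non-empty bin at
  or above index i?" without scanning bin-by-bin.  A minimal sketch
  (for exposition; the allocation paths in this file inline this
  pattern rather than calling a helper):

    binmap_t eligible = map & same_or_left_bits(idx2bit(i));
    if (eligible != 0) {
      binmap_t leastbit = least_bit(eligible); // lowest eligible bin
      bindex_t j;
      compute_bit2idx(leastbit, j);            // bit -> bin index
      // bin j is the smallest non-empty bin with j >= i
    }
*/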


/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to
  guarantee that the mstate controlling malloc/free is intact.  This
  is a streamlined version of the approach described by William
  Robertson et al in "Run-time Detection of Heap-based Overflows"
  LISA'03 http://www.usenix.org/events/lisa03/tech/robertson.html
  The footer of an inuse chunk holds the xor of its mstate and a
  random seed, that is checked upon calls to free() and realloc().
  This is (probabilistically) unguessable from outside the program,
  but can be computed by any code successfully malloc'ing any chunk,
  so does not itself provide protection against code that has already
  broken security through some other means.  Unlike Robertson et al,
  we always dynamically check addresses of all offset chunks
  (previous, next, etc). This turns out to be cheaper than relying
  on hashes.
*/

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */


/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
 mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
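
/*
  The footer check is a simple xor round-trip.  A sketch of how a
  deallocation routine can recover and validate the owning mstate
  when FOOTERS is enabled (for exposition only):

    mstate fm = get_mstate_for(p);  // footer ^ magic == (size_t)mstate
    if (!ok_magic(fm))              // corrupt or foreign chunk
      USAGE_ERROR_ACTION(fm, p);

  Since mark_inuse_foot stored ((size_t)M ^ mparams.magic), any
  overwrite of the footer makes the recovered pointer fail ok_magic
  with high probability.
*/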
1767
 
1773
 
1768
/* ---------------------------- setting mparams -------------------------- */
1774
/* ---------------------------- setting mparams -------------------------- */
1769
 
1775
 
1770
/* Initialize mparams */
1776
/* Initialize mparams */
1771
static int init_mparams(void) {
1777
static int init_mparams(void) {
1772
  if (mparams.page_size == 0) {
1778
  if (mparams.page_size == 0) {
1773
    size_t s;
1779
    size_t s;
1774
 
1780
 
1775
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1781
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1776
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1782
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
1777
#if MORECORE_CONTIGUOUS
1783
#if MORECORE_CONTIGUOUS
1778
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1784
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
1779
#else  /* MORECORE_CONTIGUOUS */
1785
#else  /* MORECORE_CONTIGUOUS */
1780
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1786
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
1781
#endif /* MORECORE_CONTIGUOUS */
1787
#endif /* MORECORE_CONTIGUOUS */
1782
 
1788
 
1783
#if (FOOTERS && !INSECURE)
1789
#if (FOOTERS && !INSECURE)
1784
    {
1790
    {
1785
#if USE_DEV_RANDOM
1791
#if USE_DEV_RANDOM
1786
      int fd;
1792
      int fd;
1787
      unsigned char buf[sizeof(size_t)];
1793
      unsigned char buf[sizeof(size_t)];
1788
      /* Try to use /dev/urandom, else fall back on using time */
1794
      /* Try to use /dev/urandom, else fall back on using time */
1789
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1795
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
1790
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1796
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
1791
        s = *((size_t *) buf);
1797
        s = *((size_t *) buf);
1792
        close(fd);
1798
        close(fd);
1793
      }
1799
      }
1794
      else
1800
      else
1795
#endif /* USE_DEV_RANDOM */
1801
#endif /* USE_DEV_RANDOM */
1796
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1802
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
1797
 
1803
 
1798
      s |= (size_t)8U;    /* ensure nonzero */
1804
      s |= (size_t)8U;    /* ensure nonzero */
1799
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1805
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
1800
 
1806
 
1801
    }
1807
    }
1802
#else /* (FOOTERS && !INSECURE) */
1808
#else /* (FOOTERS && !INSECURE) */
1803
    s = (size_t)0x58585858U;
1809
    s = (size_t)0x58585858U;
1804
#endif /* (FOOTERS && !INSECURE) */
1810
#endif /* (FOOTERS && !INSECURE) */
1805
    ACQUIRE_MAGIC_INIT_LOCK();
1811
    ACQUIRE_MAGIC_INIT_LOCK();
1806
    if (mparams.magic == 0) {
1812
    if (mparams.magic == 0) {
1807
      mparams.magic = s;
1813
      mparams.magic = s;
1808
      /* Set up lock for main malloc area */
1814
      /* Set up lock for main malloc area */
1809
      INITIAL_LOCK(&gm->mutex);
1815
      INITIAL_LOCK(&gm->mutex);
1810
      gm->mflags = mparams.default_mflags;
1816
      gm->mflags = mparams.default_mflags;
1811
    }
1817
    }
1812
    RELEASE_MAGIC_INIT_LOCK();
1818
    RELEASE_MAGIC_INIT_LOCK();
1813
 
1819
 
1814
#ifndef WIN32
1820
#ifndef WIN32
1815
    mparams.page_size = malloc_getpagesize;
1821
    mparams.page_size = malloc_getpagesize;
1816
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1822
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
1817
                           DEFAULT_GRANULARITY : mparams.page_size);
1823
                           DEFAULT_GRANULARITY : mparams.page_size);
1818
#else /* WIN32 */
1824
#else /* WIN32 */
1819
    {
1825
    {
1820
      SYSTEM_INFO system_info;
1826
      SYSTEM_INFO system_info;
1821
      GetSystemInfo(&system_info);
1827
      GetSystemInfo(&system_info);
1822
      mparams.page_size = system_info.dwPageSize;
1828
      mparams.page_size = system_info.dwPageSize;
1823
      mparams.granularity = system_info.dwAllocationGranularity;
1829
      mparams.granularity = system_info.dwAllocationGranularity;
1824
    }
1830
    }
1825
#endif /* WIN32 */
1831
#endif /* WIN32 */
1826
 
1832
 
1827
    /* Sanity-check configuration:
1833
    /* Sanity-check configuration:
1828
       size_t must be unsigned and as wide as pointer type.
1834
       size_t must be unsigned and as wide as pointer type.
1829
       ints must be at least 4 bytes.
1835
       ints must be at least 4 bytes.
1830
       alignment must be at least 8.
1836
       alignment must be at least 8.
1831
       Alignment, min chunk size, and page size must all be powers of 2.
1837
       Alignment, min chunk size, and page size must all be powers of 2.
1832
    */
1838
    */
1833
    if ((sizeof(size_t) != sizeof(char*)) ||
1839
    if ((sizeof(size_t) != sizeof(char*)) ||
1834
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1840
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
1835
        (sizeof(int) < 4)  ||
1841
        (sizeof(int) < 4)  ||
1836
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1842
        (MALLOC_ALIGNMENT < (size_t)8U) ||
1837
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1843
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
1838
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1844
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
1839
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1845
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
1840
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1846
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
1841
      ABORT;
1847
      ABORT;
1842
  }
1848
  }
1843
  return 0;
1849
  return 0;
1844
}
1850
}

/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (size_t)value;
  init_mparams();
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
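
/*
  A sketch of intended use (assuming the public mallopt entry point,
  named dlmallopt when a prefix is configured, relays here as usual):

    if (mallopt(M_GRANULARITY, 128 * 1024) == 0) {
      /* rejected: the value must be a power of two no smaller
         than the page size */
    }

  M_TRIM_THRESHOLD and M_MMAP_THRESHOLD accept any value here.
*/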

#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = chunksize(p);
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!next_pinuse(p));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert (!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert (next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}

/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/*  Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/*  Check all the chunks in a smallbin.  */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}

/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}

/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    assert(m->topsize == chunksize(m->top));
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */
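
/*
  These checks compile in only when DEBUG is nonzero; assuming the
  usual dlmalloc convention of DEBUG defaulting to 0, a checked build
  is obtained with something like:

    cc -DDEBUG=1 -O0 -g -c malloc.c

  after which the check_* hooks used throughout (check_malloc_state,
  check_free_chunk, ...) expand to the do_check_* functions above.
*/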

/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */
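
/*
  The fields reported above satisfy a simple invariant that is handy
  when interpreting mallinfo output (a sketch relying only on the
  assignments just made):

    struct mallinfo mi = internal_mallinfo(gm);
    assert(mi.arena + mi.hblkhd == mi.uordblks + mi.fordblks);

  Both sides equal the current footprint: arena + hblkhd reconstructs
  it by construction, while uordblks and fordblks split the same
  footprint into in-use and free bytes.
*/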

static void internal_malloc_stats(mstate m) {
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}

/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}
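
/*
  In effect the macro pushes P at the front of bin I's circular
  doubly-linked list, whose header B is itself a (fake) chunk.  A
  minimal trace, assuming the bin was empty beforehand:

    before:  B->fd == B, B->bk == B   (empty bin, smallmap bit clear)
    after:   B->fd == P, B->bk == P, P->fd == B, P->bk == B

  With the bin nonempty, F is the old front chunk, so P lands between
  B and F; the small-malloc path unlinks from the fd side (see
  unlink_first_small_chunk below), giving LIFO reuse of same-sized
  free chunks.
*/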

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}

/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}
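
/*
  The descent above is in effect a bitwise trie walk: K starts as the
  chunk size shifted so that the first size bit relevant within
  treebin I sits at the most significant position, and each level
  consumes one bit -- 0 selects child[0], 1 selects child[1].  The
  step it repeats amounts to (illustrative only):

    int dir = (K >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1;  /* top bit */
    T = T->child[dir];   /* then K <<= 1 for the next level */

  Equal-sized chunks are never descended into; they are spliced into
  the matched node's fd/bk ring with a null parent, which is why only
  one node per size chain carries tree links (as do_check_tree
  asserts).
*/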

/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendants
     correspond properly to bit masks.  We use the rightmost descendant
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links), relink
     x's parent and children to x's replacement (or null if none).
*/

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F))) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}
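
/*
  A small worked case of step 2 above (hypothetical tree shape, for
  illustration): if X is the last chunk of its size and has a right
  child R1 whose only descendant is a left child R2 (a leaf), the
  while loop walks

    RP = &X->child[1];   R = R1;
    CP = &R1->child[0];  R = R2;   /* child[1] empty, try child[0] */

  then stops at the leaf R2, clears *RP (R2's old slot in R1), and
  finally relinks X's parent and both of X's children to R2.
*/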

/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */

/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).  There is also enough space
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
  the PINUSE bit so frees can be checked.
*/
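
/*
  Concretely, freeing such a chunk can recover the original mapping
  from the chunk itself (a sketch mirroring what the free path does;
  CALL_MUNMAP is the configured munmap wrapper):

    size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;
    size_t len    = chunksize(p) + offset + MMAP_FOOT_PAD;
    CALL_MUNMAP((char*)p - offset, len);

  which is exactly the length relation that do_check_mmapped_chunk
  asserts to be page-aligned above.
*/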

/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
  size_t oldsize = chunksize(oldp);
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
                                         CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}

/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}
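
/*
  After this loop each smallbin header is a one-element circular
  list, so emptiness is simply self-linkage.  A hypothetical helper
  (not used by the code, which tests p == b inline) would be:

    static int smallbin_is_empty(mstate m, bindex_t i) {
      sbinptr b = smallbin_at(m, i);
      return b->fd == b;
    }
*/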

#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallbins = m->treebins = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!cinuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}
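
/*
  The geometry, for reference (the new space sits directly below the
  old segment base in memory; alignment offsets elided):

    newbase                              oldbase
    |-- p: nb bytes, inuse --|-- q: qsize free --|-- oldfirst ... --|

  p is returned to the caller; q, the leftover between the new
  request and the old segment's first chunk, is merged into top, dv,
  or a free oldfirst, or else binned as an ordinary free chunk.
*/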


/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}
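
/*
  The tail of the old segment thus ends up carved as (a sketch of the
  resulting layout at old_end):

    old_top ... | free remainder | segment record (inuse) | fenceposts |

  The fenceposts are SIZE_T_SIZE pseudo-chunks whose FENCEPOST_HEAD
  marks stop every traversal loop of the form
  "q->head != FENCEPOST_HEAD" used in the checking and statistics
  code above, so walks can never run off the end of a segment.
*/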

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;

  init_mparams();

  /* Directly map large chunks */
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)
  */
2708
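
  /*
    A minimal sketch of strategy 1, assuming the default sbrk-style
    MORECORE (illustrative, not from the original source):

      char* br = (char*)CALL_MORECORE(asize);
      if (br == ss->base + ss->size)   // break grew in place
        ...                            // merge the new space into top

    Contiguity is detected simply by checking that the new program
    break begins exactly where the segment we already hold ends.
  */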
 
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    size_t asize = 0;
    ACQUIRE_MORECORE_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        /* Can't call MORECORE if size is negative when treated as signed */
        if (asize < HALF_MAX_SIZE_T &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
      /* Use mem here only if it did contiguously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MORECORE_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
    size_t rsize = granularity_align(req);
    if (rsize > nb) { /* Fail if wraps around zero */
      char* mp = (char*)(CALL_MMAP(rsize));
      if (mp != CMFAIL) {
        tbase = mp;
        tsize = rsize;
        mmap_flag = IS_MMAPPED_BIT;
      }
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MORECORE_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MORECORE_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      m->seg.base = m->least_addr = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      init_bins(m);
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}
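
/*
  Hedged note: each system request above first passes through
  granularity_align (defined earlier in this file), which rounds the
  size up to a multiple of mparams.granularity -- for a power-of-two
  granularity G, rounding of the familiar form

    aligned = (req + G - SIZE_T_ONE) & ~(G - SIZE_T_ONE);

  so tbase/tsize always describe whole granules.
*/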
 
/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    pred = sp;
    sp = next;
  }
  return released;
}
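
/*
  Note (not from the original source): a segment qualifies for
  release above only when it is an mmapped, non-extern segment whose
  first chunk is free and spans everything up to the trailing
  bookkeeping area (base + size - TOP_FOOT_SIZE), i.e. no live user
  chunk remains inside it; only then is CALL_MUNMAP attempted.
*/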
 
static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MORECORE_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MORECORE_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}
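
/*
  A usage sketch (hypothetical, not from the original source):
  sys_trim backs the public trim entry point, so a caller wanting to
  return surplus top space while keeping some slack could write,
  assuming the dl-prefixed API:

    if (dlmalloc_trim((size_t)(64 * 1024)))
      ;  // at least one granule was returned to the system
*/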
 
/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);

  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }

  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /*  If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}
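
/*
  Note on the traversal above (a sketch, not from the original
  source): treebins are bitwise tries keyed on chunk size, so the
  descent steers by one bit of nb per level, most-significant first:

    dir = (sizebits >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1;
    t   = t->child[dir];   // 0 = smaller half, 1 = larger half
    sizebits <<= 1;        // expose the next bit for the next level

  rst remembers the deepest right subtree not taken, which holds the
  smallest chunk larger than nb when no exact fit exists.
*/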
 
/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);

  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}
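
/*
  Note (a sketch, not from the original source): least_bit isolates
  the lowest set bit of the treemap -- the smallest nonempty treebin
  -- via the standard two's-complement identity

    least_bit(x) == (x & -x)

  and compute_bit2idx maps that bit back to a bin index, so locating
  the best tree for a small request is O(1) bit arithmetic.
*/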
 
/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize |PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}
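
/*
  A sketch of the resulting policy from a hypothetical caller of the
  dl-prefixed API (not from the original source):

    void* p = dlmalloc(100);
    p = dlrealloc(p, 80);    // shrinks in place; the tail is split
                             // off and freed if it can form a chunk
    p = dlrealloc(p, 5000);  // grows into top if p adjoins it, else
                             // falls back to malloc + memcpy + free
*/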
 
/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert (chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}
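
/*
  A usage sketch (hypothetical, not from the original source) of the
  dl-prefixed entry point that forwards here:

    void* p = dlmemalign((size_t)4096, 1000);  // page-aligned block
    // the asserts above guarantee ((size_t)p % 4096) == 0

  Over-allocating by alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD is
  what makes it always possible to carve an aligned chunk out of the
  middle and hand the leader and trailer back to free.
*/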
 
/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}
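
/*
  A usage sketch (hypothetical, not from the original source),
  assuming the independent_comalloc wrapper that calls ialloc with
  opts == 0 (sizes vary, no zeroing):

    size_t sizes[3] = { 16, 24, 100 };
    void*  ptrs[3];
    if (dlindependent_comalloc(3, sizes, ptrs) != 0) {
      // ptrs[0..2] are adjacent chunks, each individually freeable
    }
*/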
 

/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly gotos here ensure that postaction occurs along all paths.
  */
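
  /*
    Note (not from the original source): "dv" is the designated
    victim -- the remainder of the chunk most recently split for a
    small request.  Preferring it keeps consecutive small
    allocations adjacent in memory, the main locality trick here.
  */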
 

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
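
      /*
        Note (a sketch, not from the original source): the test above
        inspects two bins at once -- bit 0 of smallbits is this bin,
        bit 1 the next larger one -- and

          idx += ~smallbits & 1;

        bumps idx exactly when this bin is empty but the next is not,
        so a remainderless fit costs only a few shifts and masks.
      */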
 

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4-byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
3491
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3497
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3492
      r->head = rsize | PINUSE_BIT;
3498
      r->head = rsize | PINUSE_BIT;
3493
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3499
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3494
      mem = chunk2mem(p);
3500
      mem = chunk2mem(p);
3495
      check_top_chunk(gm, gm->top);
3501
      check_top_chunk(gm, gm->top);
3496
      check_malloced_chunk(gm, mem, nb);
3502
      check_malloced_chunk(gm, mem, nb);
3497
      goto postaction;
3503
      goto postaction;
3498
    }
3504
    }
3499
 
3505
 
3500
    mem = sys_alloc(gm, nb);
3506
    mem = sys_alloc(gm, nb);
3501
 
3507
 
3502
  postaction:
3508
  postaction:
3503
    POSTACTION(gm);
3509
    POSTACTION(gm);
3504
    return mem;
3510
    return mem;
3505
  }
3511
  }
3506
 
3512
 
3507
  return 0;
3513
  return 0;
3508
}
3514
}
3509
 
3515
 
3510
void dlfree(void* mem) {
3516
void dlfree(void* mem) {
3511
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

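/*
  A worked example of the overflow check in dlcalloc above (illustrative,
  assuming a 32-bit size_t): the cheap mask test skips the division
  whenever both arguments fit in 16 bits, since their product then fits
  in 32 bits.  With n_elements == 0x10000 and elem_size == 0x10000 the
  product wraps to 0 mod 2^32, so req / n_elements (0) differs from
  elem_size; req is forced to MAX_SIZE_T and the allocation fails
  instead of returning a block that is too small:

    void* p = dlcalloc((size_t)0x10000, (size_t)0x10000);
    assert(p == 0);  // with 32-bit size_t this request cannot succeed
*/
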
void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

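/*
  A minimal usage sketch for dlindependent_comalloc (illustrative only;
  the element sizes here are arbitrary).  It allocates two blocks of
  different sizes in one call; each element may later be freed
  independently with dlfree:

    size_t sizes[2] = { 64, 256 };
    void*  chunks[2];
    if (dlindependent_comalloc(2, sizes, chunks) != 0) {
      // chunks[0] has at least 64 usable bytes, chunks[1] at least 256
      dlfree(chunks[0]);
      dlfree(chunks[1]);
    }
*/
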
void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

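/*
  The rounding expression in dlpvalloc above is the usual round-up to a
  power of two: (bytes + pagesz - 1) & ~(pagesz - 1).  For example, with
  a 4096-byte page, a request of 1 byte is rounded up to 4096 and a
  request of 4097 to 8192, so dlpvalloc always requests a whole number
  of pages, whereas dlvalloc only aligns the start address to a page
  boundary.
*/
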
int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

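/*
  A usage sketch for dlmalloc_usable_size (illustrative only): the
  number of usable bytes in a block may exceed the number requested
  because of padding and minimum chunk sizes, and all of them may
  safely be used:

    void* p = dlmalloc(100);
    if (p != 0) {
      size_t n = dlmalloc_usable_size(p);  // n >= 100
      memset(p, 0, n);                     // the full n bytes are usable
      dlfree(p);
    }
*/
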
int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = IS_MMAPPED_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}

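/*
  A minimal usage sketch for the mspace routines in this section
  (illustrative only; error handling omitted).  A growable mspace is
  created, used like a private heap, and then destroyed, which releases
  all of its memory at once:

    mspace msp = create_mspace(0, 0);     // default capacity, no locking
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      void* q = mspace_calloc(msp, 10, sizeof(int));
      mspace_free(msp, p);                // individual frees are optional
      destroy_mspace(msp);                // also reclaims q
    }
*/
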
/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/


void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;  /* initialized so a failed magic check returns 0 */
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;  /* initialized so a failed magic check returns 0 */
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
4135
#if !NO_MALLINFO
4130
struct mallinfo mspace_mallinfo(mspace msp) {
4136
struct mallinfo mspace_mallinfo(mspace msp) {
4131
  mstate ms = (mstate)msp;
4137
  mstate ms = (mstate)msp;
4132
  if (!ok_magic(ms)) {
4138
  if (!ok_magic(ms)) {
4133
    USAGE_ERROR_ACTION(ms,ms);
4139
    USAGE_ERROR_ACTION(ms,ms);
4134
  }
4140
  }
4135
  return internal_mallinfo(ms);
4141
  return internal_mallinfo(ms);
4136
}
4142
}
4137
#endif /* NO_MALLINFO */
4143
#endif /* NO_MALLINFO */
4138
 
4144
 
4139
int mspace_mallopt(int param_number, int value) {
4145
int mspace_mallopt(int param_number, int value) {
4140
  return change_mparam(param_number, value);
4146
  return change_mparam(param_number, value);
4141
}
4147
}
4142
 
4148
 
4143
#endif /* MSPACES */
4149
#endif /* MSPACES */
4144
 
4150
 
4145
/* -------------------- Alternative MORECORE functions ------------------- */
4151
/* -------------------- Alternative MORECORE functions ------------------- */
4146
 
4152
 
4147
/*
4153
/*
4148
  Guidelines for creating a custom version of MORECORE:
4154
  Guidelines for creating a custom version of MORECORE:
4149
 
4155
 
4150
  * For best performance, MORECORE should allocate in multiples of pagesize.
4156
  * For best performance, MORECORE should allocate in multiples of pagesize.
4151
  * MORECORE may allocate more memory than requested. (Or even less,
4157
  * MORECORE may allocate more memory than requested. (Or even less,
4152
      but this will usually result in a malloc failure.)
4158
      but this will usually result in a malloc failure.)
4153
  * MORECORE must not allocate memory when given argument zero, but
4159
  * MORECORE must not allocate memory when given argument zero, but
4154
      instead return one past the end address of memory from previous
4160
      instead return one past the end address of memory from previous
4155
      nonzero call.
4161
      nonzero call.
4156
  * For best performance, consecutive calls to MORECORE with positive
4162
  * For best performance, consecutive calls to MORECORE with positive
4157
      arguments should return increasing addresses, indicating that
4163
      arguments should return increasing addresses, indicating that
4158
      space has been contiguously extended.
4164
      space has been contiguously extended.
4159
  * Even though consecutive calls to MORECORE need not return contiguous
4165
  * Even though consecutive calls to MORECORE need not return contiguous
4160
      addresses, it must be OK for malloc'ed chunks to span multiple
4166
      addresses, it must be OK for malloc'ed chunks to span multiple
4161
      regions in those cases where they do happen to be contiguous.
4167
      regions in those cases where they do happen to be contiguous.
4162
  * MORECORE need not handle negative arguments -- it may instead
4168
  * MORECORE need not handle negative arguments -- it may instead
4163
      just return MFAIL when given negative arguments.
4169
      just return MFAIL when given negative arguments.
4164
      Negative arguments are always multiples of pagesize. MORECORE
4170
      Negative arguments are always multiples of pagesize. MORECORE
4165
      must not misinterpret negative args as large positive unsigned
4171
      must not misinterpret negative args as large positive unsigned
4166
      args. You can suppress all such calls from even occurring by defining
4172
      args. You can suppress all such calls from even occurring by defining
4167
      MORECORE_CANNOT_TRIM,
4173
      MORECORE_CANNOT_TRIM,
4168
 
4174
 
4169
  As an example alternative MORECORE, here is a custom allocator
4175
  As an example alternative MORECORE, here is a custom allocator
4170
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4176
  kindly contributed for pre-OSX macOS.  It uses virtually but not
4171
  necessarily physically contiguous non-paged memory (locked in,
4177
  necessarily physically contiguous non-paged memory (locked in,
4172
  present and won't get swapped out).  You can use it by uncommenting
4178
  present and won't get swapped out).  You can use it by uncommenting
4173
  this section, adding some #includes, and setting up the appropriate
4179
  this section, adding some #includes, and setting up the appropriate
4174
  defines above:
4180
  defines above:
4175
 
4181
 
4176
      #define MORECORE osMoreCore
4182
      #define MORECORE osMoreCore
4177
 
4183
 
4178
  There is also a shutdown routine that should somehow be called for
4184
  There is also a shutdown routine that should somehow be called for
4179
  cleanup upon program exit.
4185
  cleanup upon program exit.
4180
 
4186
 
4181
  #define MAX_POOL_ENTRIES 100
4187
  #define MAX_POOL_ENTRIES 100
4182
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4188
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4183
  static int next_os_pool;
4189
  static int next_os_pool;
4184
  void *our_os_pools[MAX_POOL_ENTRIES];
4190
  void *our_os_pools[MAX_POOL_ENTRIES];
4185
 
4191
 
4186
  void *osMoreCore(int size)
4192
  void *osMoreCore(int size)
4187
  {
4193
  {
4188
    void *ptr = 0;
4194
    void *ptr = 0;
4189
    static void *sbrk_top = 0;
4195
    static void *sbrk_top = 0;
4190
 
4196
 
4191
    if (size > 0)
4197
    if (size > 0)
4192
    {
4198
    {
4193
      if (size < MINIMUM_MORECORE_SIZE)
4199
      if (size < MINIMUM_MORECORE_SIZE)
4194
         size = MINIMUM_MORECORE_SIZE;
4200
         size = MINIMUM_MORECORE_SIZE;
4195
      if (CurrentExecutionLevel() == kTaskLevel)
4201
      if (CurrentExecutionLevel() == kTaskLevel)
4196
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4202
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4197
      if (ptr == 0)
4203
      if (ptr == 0)
4198
      {
4204
      {
4199
        return (void *) MFAIL;
4205
        return (void *) MFAIL;
4200
      }
4206
      }
4201
      // save ptrs so they can be freed during cleanup
4207
      // save ptrs so they can be freed during cleanup
4202
      our_os_pools[next_os_pool] = ptr;
4208
      our_os_pools[next_os_pool] = ptr;
4203
      next_os_pool++;
4209
      next_os_pool++;
4204
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4210
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4205
      sbrk_top = (char *) ptr + size;
4211
      sbrk_top = (char *) ptr + size;
4206
      return ptr;
4212
      return ptr;
4207
    }
4213
    }
4208
    else if (size < 0)
4214
    else if (size < 0)
4209
    {
4215
    {
4210
      // we don't currently support shrink behavior
4216
      // we don't currently support shrink behavior
4211
      return (void *) MFAIL;
4217
      return (void *) MFAIL;
4212
    }
4218
    }
4213
    else
4219
    else
4214
    {
4220
    {
4215
      return sbrk_top;
4221
      return sbrk_top;
4216
    }
4222
    }
4217
  }
4223
  }
4218
 
4224
 
4219
  // cleanup any allocated memory pools
4225
  // cleanup any allocated memory pools
4220
  // called as last thing before shutting down driver
4226
  // called as last thing before shutting down driver
4221
 
4227
 
4222
  void osCleanupMem(void)
4228
  void osCleanupMem(void)
4223
  {
4229
  {
4224
    void **ptr;
4230
    void **ptr;
4225
 
4231
 
4226
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4232
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4227
      if (*ptr)
4233
      if (*ptr)
4228
      {
4234
      {
4229
         PoolDeallocate(*ptr);
4235
         PoolDeallocate(*ptr);
4230
         *ptr = 0;
4236
         *ptr = 0;
4231
      }
4237
      }
4232
  }
4238
  }
4233
 
4239
 
4234
*/
4240
*/
4235
 
4241
 
4236
 
4242
 
4237
/* -----------------------------------------------------------------------
4243
/* -----------------------------------------------------------------------
4238
History:
4244
History:
4239
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4245
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4240
      * Add max_footprint functions
4246
      * Add max_footprint functions
4241
      * Ensure all appropriate literals are size_t
4247
      * Ensure all appropriate literals are size_t
4242
      * Fix conditional compilation problem for some #define settings
4248
      * Fix conditional compilation problem for some #define settings
4243
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and mmap thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * New WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * Return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
          (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/

/** @}
 */
