/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
          ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program.  All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.
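
  For example, a minimal build and link might look like this (a sketch;
  the program name is illustrative):

    cc -O3 -c malloc.c
    cc -O3 myprogram.c malloc.o -o myprogram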

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below.  Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux).  You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                              8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
                                          8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed.  This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()").  You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory.  This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else.  And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32).  This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc.  It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org).  Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written.  However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator.  Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.)  However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported. (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros.  Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types.  All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc.  These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals.  For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught).  If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t.  On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
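
For example, prefer an explicitly cast expression (this is exactly the
style the defaults later in this file use):

    #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)

over a bare literal such as 2097152, per the caution above.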

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice.  It may be defined as larger than this
  though.  Note however that code and data structures are optimized for
  the case of 8-byte alignment.
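
  For instance, to get 16-byte alignment (a sketch; useful if malloc'ed
  blocks will hold data needing wider alignment), you might build with:

    cc -O3 -DMALLOC_ALIGNMENT='((size_t)16U)' -c malloc.c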

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks.  This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
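
  For example (a sketch; with USE_DL_PREFIX defined, the public entry
  points become dlmalloc, dlfree, etc.):

    void* p = dlmalloc(100);     /* served by this allocator */
    char* s = strdup("hello");   /* served by the system malloc */
    dlfree(p);
    free(s);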

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort().  It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting.  If set, detected bad arguments to free and
  realloc are ignored.  And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed.  If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred.  This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably.  Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE  default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to be able to
  return memory because there is none available.
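
  For example (a sketch; my_oom_hook is a hypothetical function you
  would supply yourself):

    #define MALLOC_FAILURE_ACTION  my_oom_hook();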

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                 default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions.  The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though.  Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
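
  As a minimal sketch (the names my_heap/my_morecore are illustrative;
  a real version should follow the guidelines at the end of this file),
  a MORECORE serving memory from a fixed static arena might look like:

    static char   my_heap[1024 * 1024];
    static size_t my_brk = 0;
    void* my_morecore(intptr_t increment) {
      if (increment >= 0 && my_brk + (size_t)increment <= sizeof(my_heap)) {
        void* base = my_heap + my_brk;   /* old break, sbrk-style */
        my_brk += (size_t)increment;
        return base;
      }
      return (void*)(-1);                /* sbrk-style failure value */
    }
    #define MORECORE my_morecore
    #define MORECORE_CANNOT_TRIM  /* this sketch rejects negative args */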

MORECORE_CONTIGUOUS      default: 1 (true)
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk.  It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves the time
  and possibly wasted space it would take to discover this though.

MORECORE_CANNOT_TRIM     default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments.  This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation.  If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks.  It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system.  Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP              default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS              default: 1 on unix
  True if mmap clears memory so calloc doesn't need to.  This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS          default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version.  Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize       default: derive from system includes, or 4096.
  The system page size.  To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant.  This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM           default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize the secure magic seed
  for stamping footers.  Otherwise, the current time is used.

NO_MALLINFO              default: 0
  If defined, don't compile "mallinfo".  This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE      default: size_t
  The type of the fields in the mallinfo struct.  This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t.  The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES  default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free.  Some people think it should.  Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).
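
  Illustration (a sketch of the two behaviors):

    void* p = malloc(10);
    p = realloc(p, 0);
    /* default: p now points to a unique minimal-size chunk;
       with REALLOC_ZERO_BYTES_FREES: the old block is freed
       and realloc returns 0 */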

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page.  However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size. (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all.  The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T.  Note that the
  trick some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request.  Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap. (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system.  A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems.  You can
  disable mmap by setting to MAX_SIZE_T.

*/

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif  /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif  /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */
574 | 574 | ||
/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect.  But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/** Non-default helenos customizations */
#define LACKS_FCNTL_H
#define LACKS_SYS_MMAN_H
#define LACKS_SYS_PARAM_H
#undef HAVE_MMAP
#define HAVE_MMAP 0
#define LACKS_ERRNO_H
/* Set errno? */
#undef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */

struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);
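
/*
  A minimal usage sketch (illustrative only, not part of the original
  documentation; die() is a stand-in for your own error handling):

    void* p = dlmalloc(100);
    if (p == 0)
      die();        // no space; errno is ENOMEM on ANSI C systems
    // ... at least 100 bytes at p may be used ...
    dlfree(p);
*/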

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);
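
/*
  Because a failed realloc leaves p allocated, a common pattern (a
  sketch, not from the original documentation) assigns through a
  temporary so the old block is neither leaked nor lost:

    void* q = dlrealloc(p, newsize);
    if (q == 0) {
      // p is still valid and still owned by the caller
      handle_out_of_memory();   // hypothetical error handler
    }
    else
      p = q;                    // overwrite p only on success
*/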

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);
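
/*
  For example (an illustrative sketch; 64 is just a typical cache-line
  size, not something this malloc prescribes):

    void* buf = dlmemalign(64, 1024);     // 1024 bytes, 64-byte aligned
    if (buf != 0) {
      assert(((size_t)buf & 63) == 0);    // 64 is a power of two
      dlfree(buf);
    }
*/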

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters.  The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
int dlmallopt(int, int);
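
/*
  For instance, to trim more eagerly and route more requests to mmap
  (the values are illustrative, not recommendations):

    dlmallopt(M_TRIM_THRESHOLD, 128*1024);  // trim when 128k is free at top
    dlmallopt(M_MMAP_THRESHOLD, 64*1024);   // use mmap for requests >= 64k
    // each call returns 1 on success, 0 if the value was rejected
*/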

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);
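
/*
  A monitoring sketch (illustrative; fprintf comes from <stdio.h>,
  which this file already includes):

    fprintf(stderr, "footprint: current=%lu peak=%lu\n",
            (unsigned long)dlmalloc_footprint(),
            (unsigned long)dlmalloc_max_footprint());
*/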

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
struct mallinfo dlmallinfo(void);
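
/*
  For example (a sketch; the fields have type MALLINFO_FIELD_TYPE and
  are cast here only for portable printing):

    struct mallinfo mi = dlmallinfo();
    fprintf(stderr, "allocated=%lu free=%lu releasable=%lu\n",
            (unsigned long)mi.uordblks,
            (unsigned long)mi.fordblks,
            (unsigned long)mi.keepcost);
*/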
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int i;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... };
  struct Foot { ... };

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
void*  dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
int  dlmalloc_trim(size_t);
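
/*
  For example, after a program drops a large working set (a sketch;
  free_large_buffers() is a hypothetical application routine):

    free_large_buffers();
    int trimmed = dlmalloc_trim(0);  // 1 if any memory went back to the system
*/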

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
void  dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

#if MSPACES

/*
  mspace is an opaque type representing an independent
  region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size.  It
  returns null if there is no system memory available to create the
  space.  If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests.  You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
*/
mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
size_t destroy_mspace(mspace msp);

/*
  create_mspace_with_base uses the memory supplied as the initial base
  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
  space is used for bookkeeping, so the capacity must be at least this
  large. (Otherwise 0 is returned.) When this initial space is
  exhausted, additional memory will be obtained from the system.
  Destroying this space will deallocate all additionally allocated
  space (if possible) but not the initial base.
*/
mspace create_mspace_with_base(void* base, size_t capacity, int locked);

/*
  mspace_malloc behaves as malloc, but operates within
  the given space.
*/
void* mspace_malloc(mspace msp, size_t bytes);

/*
  mspace_free behaves as free, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_free is not actually needed.
  free may be called instead of mspace_free because freed chunks from
  any space are handled by their originating spaces.
*/
void mspace_free(mspace msp, void* mem);

/*
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed.  realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.
*/
void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
  mspace_calloc behaves as calloc, but operates within
  the given space.
*/
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
  mspace_memalign behaves as memalign, but operates within
  the given space.
*/
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);

/*
  mspace_independent_calloc behaves as independent_calloc, but
  operates within the given space.
*/
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]);

/*
  mspace_independent_comalloc behaves as independent_comalloc, but
  operates within the given space.
*/
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]);

/*
  mspace_footprint() returns the number of bytes obtained from the
  system for this space.
*/
size_t mspace_footprint(mspace msp);

/*
  mspace_max_footprint() returns the peak number of bytes obtained from the
  system for this space.
*/
size_t mspace_max_footprint(mspace msp);


#if !NO_MALLINFO
/*
  mspace_mallinfo behaves as mallinfo, but reports properties of
  the given space.
*/
struct mallinfo mspace_mallinfo(mspace msp);
#endif /* NO_MALLINFO */

/*
  mspace_malloc_stats behaves as malloc_stats, but reports
  properties of the given space.
*/
void mspace_malloc_stats(mspace msp);

/*
  mspace_trim behaves as malloc_trim, but
  operates within the given space.
*/
int mspace_trim(mspace msp, size_t pad);

/*
  An alias for mallopt.
*/
int mspace_mallopt(int, int);
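
/*
  A typical mspace lifecycle, as a sketch (the capacity and request
  size are illustrative):

    mspace msp = create_mspace(0, 0);   // default capacity, no locking
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      // ... use p ...
      mspace_free(msp, p);              // optional if destroying anyway
      destroy_mspace(msp);              // returns all of its memory at once
    }
*/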

#endif /* MSPACES */

#ifdef __cplusplus
};  /* end of extern "C" */
#endif /* __cplusplus */

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef WIN32
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* WIN32 */

#include <stdio.h>       /* for printing in malloc_stats */

#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include <time.h>        /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#define assert(x)
#endif /* DEBUG */
#ifndef LACKS_STRING_H
#include <string.h>      /* for memset etc */
#endif /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>    /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#if HAVE_MORECORE
#ifndef LACKS_UNISTD_H
#include <unistd.h>      /* for sbrk */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void*     sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */
#endif /* HAVE_MORECORE */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)
#define SIZE_T_ONE          ((size_t)1)
#define SIZE_T_TWO          ((size_t)2)
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))

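/*
  A worked example of the two macros above, assuming an illustrative
  MALLOC_ALIGNMENT of 8 (so CHUNK_ALIGN_MASK is 7): for A == 20,
  is_aligned(A) is false since 20 & 7 == 4, and align_offset(A) is
  (8 - 4) & 7 == 4, because 20 + 4 == 24 is the next 8-byte boundary.
*/
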
/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so compiler optimizer can delete code rather than
   using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if !HAVE_MMAP
#define IS_MMAPPED_BIT       (SIZE_T_ZERO)
#define USE_MMAP_BIT         (SIZE_T_ZERO)
#define CALL_MMAP(s)         MFAIL
#define CALL_MUNMAP(a, s)    (-1)
#define DIRECT_MMAP(s)       MFAIL

#else /* HAVE_MMAP */
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
#define USE_MMAP_BIT         (SIZE_T_ONE)

#ifndef WIN32
#define CALL_MUNMAP(a, s)    munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP(s)       CALL_MMAP(s)
#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}

#define CALL_MMAP(s)         win32mmap(s)
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
#define DIRECT_MMAP(s)       win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */

#if HAVE_MMAP && HAVE_MREMAP
#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#else  /* HAVE_MMAP && HAVE_MREMAP */
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

#if HAVE_MORECORE
#define CALL_MORECORE(S)     MORECORE(S)
#else  /* HAVE_MORECORE */
#define CALL_MORECORE(S)     MFAIL
#endif /* HAVE_MORECORE */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

#if USE_LOCKS

/*
  When locks are defined, there are up to two global locks:

  * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
    MORECORE.  In many cases sys_alloc requires two calls, that should
    not be interleaved with calls by other threads.  This does not
    protect against direct calls to MORECORE by other threads not
    using this lock, so there is still code to cope as best we can with
    interference.

  * magic_init_mutex ensures that mparams.magic and other
    unique mparams values are initialized only once.
*/

#ifndef WIN32
/* By default use posix locks */
#include <pthread.h>
#define MLOCK_T pthread_mutex_t
#define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
#define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
#define RELEASE_LOCK(l)      pthread_mutex_unlock(l)

#if HAVE_MORECORE
static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
#endif /* HAVE_MORECORE */

static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;

#else /* WIN32 */
/*
   Because lock-protected regions have bounded times, and there
   are no recursive lock calls, we can use simple spinlocks.
*/

#define MLOCK_T long
static int win32_acquire_lock (MLOCK_T *sl) {
  for (;;) {
#ifdef InterlockedCompareExchangePointer
    if (!InterlockedCompareExchange(sl, 1, 0))
      return 0;
#else  /* Use older void* version */
    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
      return 0;
#endif /* InterlockedCompareExchangePointer */
    Sleep (0);
  }
}

static void win32_release_lock (MLOCK_T *sl) {
  InterlockedExchange (sl, 0);
}

#define INITIAL_LOCK(l)      *(l)=0
#define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
#define RELEASE_LOCK(l)      win32_release_lock(l)
#if HAVE_MORECORE
static MLOCK_T morecore_mutex;
#endif /* HAVE_MORECORE */
static MLOCK_T magic_init_mutex;
#endif /* WIN32 */

#define USE_LOCK_BIT               (2U)
#else  /* USE_LOCKS */
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)
#endif /* USE_LOCKS */

#if USE_LOCKS && HAVE_MORECORE
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
#else /* USE_LOCKS && HAVE_MORECORE */
#define ACQUIRE_MORECORE_LOCK()
#define RELEASE_MORECORE_LOCK()
#endif /* USE_LOCKS && HAVE_MORECORE */

#if USE_LOCKS
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
#else  /* USE_LOCKS */
#define ACQUIRE_MAGIC_INIT_LOCK()
#define RELEASE_MAGIC_INIT_LOCK()
#endif /* USE_LOCKS */


/* ----------------------- Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 1)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)      |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size, redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

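/*
  Illustrative sketch (not part of the allocator, compiled out): the
  chunk2mem/mem2chunk conversions above are exact inverses; the user
  pointer sits exactly two size_t words past the chunk base, skipping
  prev_foot and head.  The example_* name is ours, and assert would
  need <assert.h> if this were enabled.
*/
#if 0
static void example_chunk_mem_roundtrip(mchunkptr p) {
  void* mem = chunk2mem(p);                            /* skip 2 words */
  assert((char*)mem == (char*)p + 2 * sizeof(size_t));
  assert(mem2chunk(mem) == p);                         /* exact inverse */
}
#endif /* 0 */
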
/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))


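/*
  Illustrative check (not part of the allocator, compiled out):
  assuming 4-byte size_t, 8-byte alignment, and no FOOTERS (so
  CHUNK_OVERHEAD == 4), a 13-byte request pads to
  (13 + 4 + 7) & ~7 == 24 bytes, while any request below MIN_REQUEST
  is bumped up to MIN_CHUNK_SIZE (16 here).  example_* is our name;
  assert would need <assert.h> if enabled.
*/
#if 0
static void example_request2size_check(void) {
  assert(request2size(13) == 24);             /* padded and aligned */
  assert(request2size(0)  == MIN_CHUNK_SIZE); /* minimum enforced   */
}
#endif /* 0 */
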
/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set; its remaining bits hold the offset from the base
  of the mmapped region to the base of the chunk.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

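/*
  Illustrative sketch (not part of the allocator, compiled out):
  boundary tags give O(1) hops between physical neighbors.  next_chunk
  is always valid; prev_chunk may be used only when pinuse(p) is
  clear, since only then does prev_foot hold the previous chunk's
  size.  example_* is our name.
*/
#if 0
static mchunkptr example_prev_neighbor(mchunkptr p) {
  return pinuse(p)? (mchunkptr)0 : prev_chunk(p);
}
#endif /* 0 */
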
/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

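/*
  Illustrative sketch (not part of the allocator, compiled out): after
  marking p free with size s, the boundary-tag state described earlier
  holds: p's pinuse stays set (a free chunk's previous neighbor is
  always in use), its foot -- stored as the next chunk's prev_foot --
  equals s, and the next chunk's pinuse bit is cleared.  example_* is
  our name; assert would need <assert.h> if enabled.
*/
#if 0
static void example_mark_free(mchunkptr p, size_t s) {
  mchunkptr n = chunk_plus_offset(p, s);        /* physical successor */
  set_free_with_pinuse(p, s, n);
  assert(chunksize(p) == s && pinuse(p) && !cinuse(p));
  assert(n->prev_foot == s && !pinuse(n));
}
#endif /* 0 */
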
#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */

/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small" chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/

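/*
  Illustrative sketch (not part of the allocator, compiled out): for
  the smallest treebin range described above (0x100 <= x < 0x180), the
  top-level split at 0x140 comes down to inspecting one bit -- sizes
  with bit 0x40 clear fall in the left half, sizes with it set in the
  right.  example_* is our name.
*/
#if 0
static int example_goes_right(size_t x) {
  return (x & 0x40) != 0;  /* 0x100..0x13F -> left, 0x140..0x17F -> right */
}
#endif /* 0 */
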
struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])

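/*
  Illustrative sketch (not part of the allocator, compiled out): the
  leftmost-leaf walk described in the comments above.  Because
  leftmost_child may take a right branch whenever the left child is
  null, the minimum must be tracked along the whole path rather than
  read off the final leaf.  example_* is our name.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr least = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(least))
      least = t;                  /* smallest may be anywhere on path */
  }
  return least;
}
#endif /* 0 */
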
/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.
    (A sketch of this trick appears after the malloc_state declaration
    below.)

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
    larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on a 64-bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
};

typedef struct malloc_state*    mstate;

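/*
  Illustrative sketch (not part of the allocator, compiled out): the
  smallbin "repositioning trick" promised in the malloc_state comment
  above.  Casting into the smallbins array makes the fd/bk fields of
  the resulting fake chunk overlay the two slots reserved for bin i,
  while its prev_foot/head fields fall on neighboring slots that are
  never touched through this view -- hence the (NSMALLBINS+1)*2 array
  size.  This mirrors the smallbin_at macro defined later in this
  file; example_* is our name.
*/
#if 0
static sbinptr example_smallbin_at(mstate m, bindex_t i) {
  return (sbinptr)((char*)&(m->smallbins[i << 1]));
}
#endif /* 0 */
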
/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)
#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))

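/*
  Illustrative check (not part of the allocator, compiled out):
  page_align adds a full page and then masks, so with a 4096-byte
  page, page_align(1) == 4096 and page_align(5000) == 8192; note that
  an already-aligned size is bumped to the next page
  (page_align(4096) == 8192).  example_* is our name; assert would
  need <assert.h> if enabled.
*/
#if 0
static void example_page_align_check(void) {
  /* assumes mparams.page_size == 4096 */
  assert(page_align(1)    == 4096);
  assert(page_align(5000) == 8192);
  assert(page_align(4096) == 8192);
}
#endif /* 0 */
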
#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)

/* True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

2083 | #ifndef MORECORE_CANNOT_TRIM |
1550 | #ifndef MORECORE_CANNOT_TRIM |
2084 | #define should_trim(M,s) ((s) > (M)->trim_check) |
1551 | #define should_trim(M,s) ((s) > (M)->trim_check) |
2085 | #else /* MORECORE_CANNOT_TRIM */ |
1552 | #else /* MORECORE_CANNOT_TRIM */ |
2086 | #define should_trim(M,s) (0) |
1553 | #define should_trim(M,s) (0) |
2087 | #endif /* MORECORE_CANNOT_TRIM */ |
1554 | #endif /* MORECORE_CANNOT_TRIM */ |
2088 | 1555 | ||
2089 | /* |
1556 | /* |
2090 | TOP_FOOT_SIZE is padding at the end of a segment, including space |
1557 | TOP_FOOT_SIZE is padding at the end of a segment, including space |
2091 | that may be needed to place segment records and fenceposts when new |
1558 | that may be needed to place segment records and fenceposts when new |
2092 | noncontiguous segments are added. |
1559 | noncontiguous segments are added. |
2093 | */ |
1560 | */ |
2094 | #define TOP_FOOT_SIZE\ |
1561 | #define TOP_FOOT_SIZE\ |
2095 | (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) |
1562 | (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) |
2096 | 1563 | ||
2097 | 1564 | ||
2098 | /* ------------------------------- Hooks -------------------------------- */ |
1565 | /* ------------------------------- Hooks -------------------------------- */ |
2099 | 1566 | ||
2100 | /* |
1567 | /* |
2101 | PREACTION should be defined to return 0 on success, and nonzero on |
1568 | PREACTION should be defined to return 0 on success, and nonzero on |
2102 | failure. If you are not using locking, you can redefine these to do |
1569 | failure. If you are not using locking, you can redefine these to do |
2103 | anything you like. |
1570 | anything you like. |
2104 | */ |
1571 | */ |
2105 | 1572 | ||
2106 | #if USE_LOCKS |
1573 | #if USE_LOCKS |
2107 | 1574 | ||
2108 | /* Ensure locks are initialized */ |
1575 | /* Ensure locks are initialized */ |
2109 | #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams()) |
1576 | #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams()) |
2110 | 1577 | ||
2111 | #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0) |
1578 | #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0) |
2112 | #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); } |
1579 | #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); } |
2113 | #else /* USE_LOCKS */ |
1580 | #else /* USE_LOCKS */ |
2114 | 1581 | ||
2115 | #ifndef PREACTION |
1582 | #ifndef PREACTION |
2116 | #define PREACTION(M) (0) |
1583 | #define PREACTION(M) (0) |
2117 | #endif /* PREACTION */ |
1584 | #endif /* PREACTION */ |
2118 | 1585 | ||
2119 | #ifndef POSTACTION |
1586 | #ifndef POSTACTION |
2120 | #define POSTACTION(M) |
1587 | #define POSTACTION(M) |
2121 | #endif /* POSTACTION */ |
1588 | #endif /* POSTACTION */ |
2122 | 1589 | ||
2123 | #endif /* USE_LOCKS */ |
1590 | #endif /* USE_LOCKS */ |
2124 | 1591 | ||
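/*
  Illustrative sketch (an assumption, not part of this malloc): with
  USE_LOCKS off, PREACTION/POSTACTION could be predefined to instrument
  entry and exit instead of locking. The only contract is that
  PREACTION evaluates to 0 on success and nonzero to make the
  operation fail.
*/
#if 0
static int malloc_call_depth = 0;              /* hypothetical counter */
#define PREACTION(M)  (++malloc_call_depth, 0) /* 0 == proceed */
#define POSTACTION(M) { --malloc_call_depth; }
#endif /* 0 */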
/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */

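/*
  Hedged example: a custom USAGE_ERROR_ACTION, defined before this
  point, might report the offending address instead of aborting
  (the exact output below is illustrative only).
*/
#if 0
#define USAGE_ERROR_ACTION(m,p)\
  fprintf(stderr, "malloc usage error near %p\n", (void*)(p))
#endif /* 0 */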
/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

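/*
  Worked example (with dlmalloc's usual SMALLBIN_SHIFT of 3 and
  NSMALLBINS of 32): small_index(40) == 5, small_index2size(5) == 40,
  and is_small(s) holds exactly for sizes below 256.
*/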
/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))

/* assign tree index for size S to variable I */
#if defined(__GNUC__) && defined(i386)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
    I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */

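/*
  Worked example (with dlmalloc's usual TREEBIN_SHIFT of 8): for
  S == 768, X == 3, so the shift cascade computes K == 1, i.e.
  floor(log2(3)), and then
    I == (1 << 1) + ((768 >> 8) & 1) == 3.
  Each power-of-two range thus splits into two bins; bin 3 holds
  sizes in [768, 1023] (compare minsize_for_tree_index below).
*/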
/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))

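/*
  Illustrative values (again assuming TREEBIN_SHIFT == 8):
    minsize_for_tree_index(0) == 256
    minsize_for_tree_index(1) == 384
    minsize_for_tree_index(2) == 512
    minsize_for_tree_index(3) == 768
  Even indices start a power-of-two range; odd indices add half of it.
*/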

/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* index corresponding to given bit */

#if defined(__GNUC__) && defined(i386)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#else /* GNUC */
#if USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else /* USE_BUILTIN_FFS */
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* USE_BUILTIN_FFS */
#endif /* GNUC */
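/*
  Worked example: compute_bit2idx(0x10, I) leaves I == 4. For a
  single-bit input the portable version runs a five-step binary
  search on Y == X-1 (here 0x0F), accumulating the bit position in N.
*/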

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
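/*
  Worked examples on the low 8 bits (the masks extend with set bits
  above them): for x == 0x14 (00010100),
    least_bit(x)         == 0x04  (00000100)
    left_bits(x)         == ...11111000
    same_or_left_bits(x) == ...11111100
*/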


/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offset from other embedded data
  structures. These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to
  guarantee that the mstate controlling malloc/free is intact. This
  is a streamlined version of the approach described by William
  Robertson et al in "Run-time Detection of Heap-based Overflows"
  LISA'03, http://www.usenix.org/events/lisa03/tech/robertson.html.
  The footer of an inuse chunk holds the xor of its mstate and a
  random seed, which is checked upon calls to free() and realloc().
  This is (probabilistically) unguessable from outside the program,
  but can be computed by any code successfully malloc'ing any chunk,
  so does not itself provide protection against code that has already
  broken security through some other means. Unlike Robertson et al,
  we always dynamically check addresses of all offset chunks
  (previous, next, etc). This turns out to be cheaper than relying on
  hashes.
*/

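/*
  Sketch of the FOOTERS round trip (illustrative, using the macros
  defined below): mark_inuse_foot stores (size_t)M ^ mparams.magic in
  the footer, so get_mstate_for recovers M because
    ((size_t)M ^ magic) ^ magic == (size_t)M.
  A chunk whose footer fails this round trip did not come from this
  mstate, or has been overwritten.
*/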
#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */


/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */

/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {
  if (mparams.page_size == 0) {
    size_t s;

    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if (FOOTERS && !INSECURE)
    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        s = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
        s = (size_t)(time(0) ^ (size_t)0x55555555U);

      s |= (size_t)8U;    /* ensure nonzero */
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */

    }
#else /* (FOOTERS && !INSECURE) */
    s = (size_t)0x58585858U;
#endif /* (FOOTERS && !INSECURE) */
    ACQUIRE_MAGIC_INIT_LOCK();
    if (mparams.magic == 0) {
      mparams.magic = s;
      /* Set up lock for main malloc area */
      INITIAL_LOCK(&gm->mutex);
      gm->mflags = mparams.default_mflags;
    }
    RELEASE_MAGIC_INIT_LOCK();

#ifndef WIN32
    mparams.page_size = malloc_getpagesize;
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
                           DEFAULT_GRANULARITY : mparams.page_size);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      mparams.page_size = system_info.dwPageSize;
      mparams.granularity = system_info.dwAllocationGranularity;
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
      ABORT;
  }
  return 0;
}

/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (size_t)value;
  init_mparams();
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}

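/*
  Usage sketch (assuming a 4096-byte page and the standard mallopt
  parameter names): granularity must be a power of two no smaller
  than the page size, so
    mallopt(M_GRANULARITY, 64 * 1024);   returns 1 (accepted)
    mallopt(M_GRANULARITY, 3000);        returns 0 (rejected)
*/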
#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t sz = chunksize(p);
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!next_pinuse(p));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert (!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert (next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}

/* Check a tree and its subtrees. */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/* Check all the chunks in a treebin. */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/* Check all the chunks in a smallbin. */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}

/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}

/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    assert(m->topsize == chunksize(m->top));
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */

/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */

static void internal_malloc_stats(mstate m) {
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}
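
/*
  Usage sketch (illustrative, not in the original): internal_malloc_stats
  is normally reached through the public wrapper (malloc_stats, or
  dlmalloc_stats when USE_DL_PREFIX is set), e.g.

     void* p = dlmalloc(1000);
     dlmalloc_stats();   /* writes the three lines above to stderr */
     dlfree(p);

  The "in use bytes" figure computed by subtraction here corresponds to
  the uordblks field filled in by internal_mallinfo above.
*/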

/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}
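
/*
  Sketch of the resulting links (illustrative only): with bin header B
  and previous front chunk F, insert_small_chunk splices P in constant
  time so that afterwards

     B->fd == P,  P->bk == B,  P->fd == F,  F->bk == P

  i.e. the bin remains a circular doubly-linked list with B acting as a
  sentinel node.  When the bin was empty, F is B itself and the same
  four stores still leave the two-node ring well-formed.
*/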

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}

/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}
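
/*
  Navigation sketch (illustrative, not from the original sources): each
  treebin is a bitwise digital tree keyed on the size bits below the
  bin's prefix.  K = S << leftshift_for_tree_index(I) pre-shifts S so
  that the next undecided bit sits at the top of the word; each level
  then reads that top bit,

     child_index = (K >> (SIZE_T_BITSIZE - 1)) & 1;  /* 0: left, 1: right */
     K <<= 1;                                        /* expose next bit */

  so equal-sized chunks land on the same node (and are chained through
  its fd/bk links), while unequal sizes diverge after at most one bit
  per level.
*/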

/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendants
     correspond properly to bit masks.  We use the rightmost descendant
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links) relink
     x's parent and children to x's replacement (or null if none).
*/

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F))) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}

/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */

/* ----------------------- Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk.  This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).  There is also enough space
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
  the PINUSE bit so frees can be checked.
*/
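
/*
  Worked example (illustrative; assumes 32-bit size_t, 8-byte chunk
  alignment, and a 4K granularity -- none of these are requirements):
  a request with nb == 100000 leads mmap_alloc below to ask for

     mmsize = granularity_align(100000 + 6*4 + 7)  ==  102400

  The chunk header is then placed at mm + offset, where offset is
  whatever align_offset needs to make chunk2mem(p) 8-byte aligned, and
  psize = mmsize - offset - MMAP_FOOT_PAD.  The two trailing head words
  (FENCEPOST_HEAD, then 0) form the "fake next chunk" described above,
  so a later free can validate the region like any other chunk.
*/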

/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset | IS_MMAPPED_BIT;
      (p)->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
  size_t oldsize = chunksize(oldp);
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
                                         CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = (psize|CINUSE_BIT);
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}
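
/*
  Numeric sketch of the keep-or-remap rule above (illustrative; assumes
  a 64K granularity): for oldsize == 256K and a new request nb == 200K,
  oldsize >= nb + SIZE_T_SIZE holds and the slack oldsize - nb == 56K
  does not exceed granularity << 1 == 128K, so the old chunk is reused
  in place.  Had nb been 64K, the 192K of slack would force the
  CALL_MREMAP path (or a 0 return, letting realloc fall back on
  allocate-copy-free).
*/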

/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}

#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallbins = m->treebins = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!cinuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}


/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;

  init_mparams();

  /* Directly map large chunks */
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)
  */
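
  /*
    Condensed control flow of the code below (a paraphrase of the rules
    above, not additional behavior):

       if (use_mmap(m) && nb >= mmap_threshold)  try mmap_alloc (done above)
       if contiguous MORECORE still enabled      try to extend the heap
       if (HAVE_MMAP && nothing obtained yet)    try a fresh CALL_MMAP
       if (HAVE_MORECORE && still nothing)       try noncontiguous MORECORE
       then merge, append, or prepend the new region into the segment
       list and carve the request out of the (possibly new) top chunk.
  */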

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    size_t asize = 0;
    ACQUIRE_MORECORE_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          asize += (page_align((size_t)base) - (size_t)base);
        /* Can't call MORECORE if size is negative when treated as signed */
        if (asize < HALF_MAX_SIZE_T &&
            (br = (char*)(CALL_MORECORE(asize))) == base) {
          tbase = base;
          tsize = asize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
      /* Use mem here only if it did continuously extend old space */
      if (asize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
        tbase = br;
        tsize = asize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (asize < HALF_MAX_SIZE_T &&
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              asize += esize;
            else {            /* Can't use; try to release */
              CALL_MORECORE(-asize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = asize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MORECORE_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
    size_t rsize = granularity_align(req);
    if (rsize > nb) { /* Fail if wraps around zero */
      char* mp = (char*)(CALL_MMAP(rsize));
      if (mp != CMFAIL) {
        tbase = mp;
        tsize = rsize;
        mmap_flag = IS_MMAPPED_BIT;
      }
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MORECORE_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MORECORE_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      m->seg.base = m->least_addr = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      init_bins(m);
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

  MALLOC_FAILURE_ACTION;
  return 0;
}

/* ----------------------- system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    pred = sp;
    sp = next;
  }
  return released;
}

static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MORECORE_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MORECORE_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}
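
/*
  Trim arithmetic sketch (illustrative; assumes unit == 64K): with
  m->topsize == 300K and the pad contribution ignored for round
  numbers, the expression above yields

     extra = ((300K + 64K - 1) / 64K - 1) * 64K  ==  4 * 64K  ==  256K

  i.e. top is shrunk by whole granularity units only, and the "- 1"
  guarantees that part of one unit (here 44K) always remains as top,
  so trimming can never hand back the entire heap.
*/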

/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);

  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }

  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /* If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}
3692 | 3159 | ||
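/*
  Illustrative sketch (not dlmalloc code): the search above walks a
  bitwise trie keyed on chunk size.  At each level one more bit of the
  request size selects child[0] or child[1], while the best fit seen so
  far and the deepest untaken right subtree are remembered; if the
  descent dead-ends, the leftmost path of that right subtree holds the
  smallest size that still fits.  The toy node type and helper below are
  hypothetical stand-ins showing the same two-phase descent.
*/
#if 0 /* illustrative only */
struct toy_node {
  size_t size;
  struct toy_node* child[2];
};

static struct toy_node* toy_best_fit(struct toy_node* t, size_t nb,
                                     unsigned start_shift) {
  struct toy_node* best = 0;
  size_t best_rem = (size_t)-1;
  if (t != 0) {
    size_t sizebits = nb << start_shift;  /* as in tmalloc_large */
    struct toy_node* rst = 0;             /* deepest untaken right subtree */
    for (;;) {
      struct toy_node* rt = t->child[1];
      if (t->size >= nb && t->size - nb < best_rem) {
        best = t;
        if ((best_rem = t->size - nb) == 0)
          return best;                    /* exact fit */
      }
      /* the current top bit of sizebits picks the next child */
      t = t->child[(sizebits >> (sizeof(size_t)*8 - 1)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst;                          /* least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
    while (t != 0) {          /* leftmost path holds the smallest size */
      if (t->size - nb < best_rem) {
        best_rem = t->size - nb;
        best = t;
      }
      t = t->child[0];
    }
  }
  return best;
}
#endif /* illustrative only */
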
/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);

  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}

/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize |PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}

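/*
  Usage sketch (illustrative, not part of dlmalloc): whichever of the
  in-place strategies above applies (mmap resize, shrink with the tail
  freed, or expansion into top), the caller-visible contract is the
  usual realloc one -- the leading min(oldsize, newsize) bytes are
  preserved and the returned pointer replaces the old one.  The demo
  assumes only the dlmalloc/dlrealloc/dlfree entry points in this file.
*/
#if 0 /* illustrative only */
#include <string.h>
#include <assert.h>
static void realloc_demo(void) {
  char* p = (char*)dlmalloc(64);
  if (p == 0) return;
  memset(p, 'x', 64);
  p = (char*)dlrealloc(p, 4096);   /* grow: extend into top, or copy */
  if (p == 0) return;              /* demo ignores the still-live old block */
  assert(p[0] == 'x' && p[63] == 'x');
  p = (char*)dlrealloc(p, 16);     /* shrink in place; remainder is freed */
  if (p != 0)
    dlfree(p);
}
#endif /* illustrative only */
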
/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0) { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert (chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}

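/*
  Worked example (illustrative): for a power-of-two A, the expression
  ((addr + A - 1) & -A) used above rounds addr up to the next multiple
  of A.  E.g. with A = 16: 0x1001 -> 0x1010, while 0x1010 is already
  aligned and stays put.  The helper below is a hypothetical stand-in
  that restates the identity and checks dlmemalign's result meets it.
*/
#if 0 /* illustrative only */
#include <assert.h>
static size_t round_up_pow2(size_t addr, size_t a) {
  return (addr + a - 1) & ~(a - 1);   /* a must be a power of two */
}
static void memalign_round_demo(void) {
  void* p;
  assert(round_up_pow2(0x1001, 16) == 0x1010);
  assert(round_up_pow2(0x1010, 16) == 0x1010);
  /* dlmemalign returns memory meeting the same condition: */
  p = dlmemalign(64, 100);
  if (p != 0) {
    assert(((size_t)p & 63) == 0);
    dlfree(p);
  }
}
#endif /* illustrative only */
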
/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}

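/*
  Usage sketch (illustrative): ialloc is the engine behind
  dlindependent_calloc (opts 3: same size, zeroed) and
  dlindependent_comalloc (opts 0: per-element sizes) defined below.
  The fragment shows the comalloc form carving a fixed header plus a
  variable body out of one aggregate chunk; all names in the demo
  itself are hypothetical.
*/
#if 0 /* illustrative only */
static void comalloc_demo(void) {
  size_t sizes[2];
  void* parts[2];
  sizes[0] = sizeof(int) * 4;   /* header */
  sizes[1] = 1000;              /* body   */
  if (dlindependent_comalloc(2, sizes, parts) != 0) {
    int* hdr = (int*)parts[0];
    char* body = (char*)parts[1];
    hdr[0] = 42;
    body[0] = 'b';
    /* elements may be freed independently, in any order */
    dlfree(parts[1]);
    dlfree(parts[0]);
  }
}
#endif /* illustrative only */
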
/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}

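/*
  Worked example (illustrative): in the smallbin fast path above,
  smallbits = smallmap >> idx, so bit 0 says "bin idx nonempty" and
  bit 1 says "bin idx+1 nonempty".  The test (smallbits & 0x3U) accepts
  either, and idx += ~smallbits & 1 bumps idx exactly when bit 0 is
  clear.  A hypothetical restatement with concrete bit patterns:
*/
#if 0 /* illustrative only */
#include <assert.h>
static void smallbin_index_demo(void) {
  unsigned idx = 5, smallbits;
  smallbits = 0x2;            /* bin 5 empty, bin 6 nonempty: ...10 */
  assert((smallbits & 0x3U) != 0);
  idx += ~smallbits & 1;      /* ~...10 ends in 1: take the next bin */
  assert(idx == 6);
  idx = 5;
  smallbits = 0x1;            /* bin 5 nonempty: ...01 */
  idx += ~smallbits & 1;      /* ~...01 ends in 0: idx unchanged */
  assert(idx == 5);
}
#endif /* illustrative only */
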
void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

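/*
  Worked example (illustrative): the overflow guard above first tests
  ((n_elements | elem_size) & ~0xffff) -- if both operands fit in 16
  bits their product fits in 32, so on the (at least 32-bit) size_t
  this file assumes, the division is skipped on the common path and
  only large operands pay for the req / n_elements != elem_size check.
  A hypothetical restatement of the same predicate:
*/
#if 0 /* illustrative only */
static int mul_would_overflow(size_t a, size_t b) {
  size_t prod = a * b;
  if (((a | b) & ~(size_t)0xffff) == 0)
    return 0;                      /* both small: cannot overflow */
  return a != 0 && prod / a != b;  /* division detects wraparound */
}
#endif /* illustrative only */
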
void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

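/*
  Worked example (illustrative): dlpvalloc rounds the request up to a
  whole number of pages before delegating to dlmemalign.  With a 4096
  byte page, (bytes + pagesz - 1) & ~(pagesz - 1) maps 1..4096 to 4096
  and 4097 to 8192.  A hypothetical check:
*/
#if 0 /* illustrative only */
#include <assert.h>
static void pvalloc_round_demo(void) {
  size_t pagesz = 4096;
  assert(((1    + pagesz - 1) & ~(pagesz - 1)) == 4096);
  assert(((4096 + pagesz - 1) & ~(pagesz - 1)) == 4096);
  assert(((4097 + pagesz - 1) & ~(pagesz - 1)) == 8192);
}
#endif /* illustrative only */
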
int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

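/*
  Usage sketch (illustrative): after freeing a large amount of memory,
  a program can ask the allocator to return unused top-of-heap space to
  the system, keeping at most `pad` bytes of slack; the return value is
  1 if any memory was released.
*/
#if 0 /* illustrative only */
static void trim_demo(void) {
  void* big = dlmalloc(8u << 20);  /* 8 MB */
  dlfree(big);
  (void)dlmalloc_trim(64 * 1024);  /* keep ~64K of slack */
}
#endif /* illustrative only */
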
size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

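/*
  Usage sketch (illustrative): the usable size is the request rounded up
  to chunk granularity minus bookkeeping overhead, so it is always at
  least the requested size and may legitimately exceed it.
*/
#if 0 /* illustrative only */
#include <assert.h>
static void usable_size_demo(void) {
  void* p = dlmalloc(100);
  if (p != 0) {
    assert(dlmalloc_usable_size(p) >= 100);
    dlfree(p);
  }
}
#endif /* illustrative only */
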
int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = IS_MMAPPED_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

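/*
  Usage sketch (illustrative): an mspace can be carved out of
  caller-supplied storage; everything allocated from it then stays
  inside that buffer.  The static buffer and its size below are
  hypothetical choices, not values suggested by this file.
*/
#if 0 /* illustrative only */
static char arena[256 * 1024];   /* hypothetical backing store */
static void mspace_demo(void) {
  mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
  if (msp != 0) {
    void* p = mspace_malloc(msp, 128);
    mspace_free(msp, p);
    destroy_mspace(msp);         /* EXTERN segments are not unmapped */
  }
}
#endif /* illustrative only */
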
size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}

/*
  mspace versions of routines are near-clones of the global
  versions.  This is not so nice but better than the alternatives.
*/

void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
4015 | check_malloced_chunk(ms, mem, nb); |
4549 | goto postaction; |
4016 | goto postaction; |
4550 | } |
4017 | } |
4551 | 4018 | ||
4552 | mem = sys_alloc(ms, nb); |
4019 | mem = sys_alloc(ms, nb); |
4553 | 4020 | ||
4554 | postaction: |
4021 | postaction: |
4555 | POSTACTION(ms); |
4022 | POSTACTION(ms); |
4556 | return mem; |
4023 | return mem; |
4557 | } |
4024 | } |
4558 | 4025 | ||
4559 | return 0; |
4026 | return 0; |
4560 | } |
4027 | } |
4561 | 4028 | ||
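/*
  Editorial usage sketch (not part of the original source): the mspace
  entry points above mirror the normal malloc/free API but operate on an
  explicitly created space. A minimal round trip, assuming this file is
  compiled with MSPACES=1 (create_mspace and destroy_mspace are defined
  earlier in this file):

    mspace msp = create_mspace(0, 0);   // default capacity, no locking
    void* p = mspace_malloc(msp, 128);
    if (p != 0)
      mspace_free(msp, p);
    destroy_mspace(msp);                // releases the whole space at once
*/
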
void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

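/*
  Editorial note on the overflow guard in mspace_calloc above (not part
  of the original source): n_elements * elem_size can wrap around size_t,
  so the product is validated by division -- but only when the cheap mask
  test shows that at least one operand exceeds 16 bits, since the product
  of two values that both fit in 16 bits cannot overflow even a 32-bit
  size_t. The same idea as a standalone sketch (checked_mul is a
  hypothetical helper):

    size_t checked_mul(size_t n, size_t s) {
      size_t r = n * s;
      if (n != 0 && ((n | s) & ~(size_t)0xffff) && r / n != s)
        return MAX_SIZE_T;  // poison value; allocation will then fail
      return r;
    }
*/
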
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

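/*
  Editorial usage sketch (not part of the original source): as with C
  realloc, the old block remains valid when mspace_realloc fails, so the
  usual grow-or-bail pattern applies (handle_oom is a hypothetical error
  handler):

    void* grown = mspace_realloc(msp, buf, newsize);
    if (grown != 0)
      buf = grown;        // success: the block may have moved
    else
      handle_oom(buf);    // buf is still valid and must still be freed

  Note that the meaning of a zero-byte request depends on whether
  REALLOC_ZERO_BYTES_FREES is defined, per the #ifdef above.
*/
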
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

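/*
  Editorial usage sketch (not part of the original source):
  mspace_independent_comalloc carves out several chunks in one shot,
  which tends to place them near one another. For example, to allocate
  two arrays that will live and die together:

    size_t sizes[2];
    void*  chunks[2];
    sizes[0] = 100 * sizeof(int);
    sizes[1] = 100 * sizeof(double);
    if (mspace_independent_comalloc(msp, 2, sizes, chunks) != 0) {
      int*    ia = (int*)    chunks[0];   // 100 ints
      double* da = (double*) chunks[1];   // 100 doubles
      // ... use the arrays; eventually mspace_free each one separately
    }

  mspace_independent_calloc is the same-size, zero-filled variant; the
  opts arguments passed to ialloc above (3 vs 0) select between the
  "all same size" and "zero the memory" behaviors.
*/
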
int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else { /* was unconditional, returning an uninitialized result on bad msp */
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else { /* same fix as mspace_footprint */
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
    but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
    instead return one past the end address of memory from the previous
    nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
    arguments should return increasing addresses, indicating that
    space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
    addresses, it must be OK for malloc'ed chunks to span multiple
    regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
    just return MFAIL when given negative arguments.
    Negative arguments are always multiples of pagesize. MORECORE
    must not misinterpret negative args as large positive unsigned
    args. You can suppress all such calls from even occurring by defining
    MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit. (A second, minimal arena-based sketch
  follows this comment block.)

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


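/*
  Editorial sketch (not part of the original source): a second, minimal
  MORECORE over a static arena, illustrating the guidelines above -- a
  zero argument returns the current break, a positive argument extends
  it, and negative (trim) requests are refused with MFAIL. It assumes a
  single-threaded program; arena_morecore and ARENA_SIZE are made-up
  names used only for this illustration:

    #define ARENA_SIZE (1 << 20)    // 1 MiB; size to suit the application
    static char   arena[ARENA_SIZE];
    static size_t arena_brk = 0;

    void* arena_morecore(int size) {
      char* p;
      if (size == 0)
        return arena + arena_brk;          // one past end of last block
      if (size < 0 || (size_t)size > ARENA_SIZE - arena_brk)
        return (void*)MFAIL;               // no trimming; arena exhausted
      p = arena + arena_brk;
      arena_brk += (size_t)size;
      return p;                            // contiguous extension
    }

  To try it, define MORECORE=arena_morecore (and, plausibly,
  MORECORE_CONTIGUOUS=1, MORECORE_CANNOT_TRIM, and HAVE_MMAP=0)
  before compiling this file.
*/
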
/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun 8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec 5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Lu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from H.J. Lu

    V2.6.2 Tue Dec 5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec 2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov 4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov 1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr 5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug 7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/
