|
1 /* |
|
2 This is a version (aka dlmalloc) of malloc/free/realloc written by |
|
3 Doug Lea and released to the public domain. Use, modify, and |
|
4 redistribute this code without permission or acknowledgement in any |
|
5 way you wish. Send questions, comments, complaints, performance |
|
6 data, etc to dl@cs.oswego.edu |
|
7 |
|
8 * VERSION 2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee) |
|
9 |
|
10 Note: There may be an updated version of this malloc obtainable at |
|
11 ftp://gee.cs.oswego.edu/pub/misc/malloc.c |
|
12 Check before installing! |
|
13 |
|
14 * Quickstart |
|
15 |
|
16 This library is all in one file to simplify the most common usage: |
|
17 ftp it, compile it (-O), and link it into another program. All |
|
18 of the compile-time options default to reasonable values for use on |
|
19 most unix platforms. Compile -DWIN32 for reasonable defaults on windows. |
|
20 You might later want to step through various compile-time and dynamic |
|
21 tuning options. |
|
22 |
|
23 For convenience, an include file for code using this malloc is at: |
|
24 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h |
|
25 You don't really need this .h file unless you call functions not |
|
26 defined in your system include files. The .h file contains only the |
|
27 excerpts from this file needed for using this malloc on ANSI C/C++ |
|
28 systems, so long as you haven't changed compile-time options about |
|
29 naming and tuning parameters. If you do, then you can create your |
|
30 own malloc.h that does include all settings by cutting at the point |
|
31 indicated below. |
|
32 |
|
33 * Why use this malloc? |
|
34 |
|
35 This is not the fastest, most space-conserving, most portable, or |
|
36 most tunable malloc ever written. However it is among the fastest |
|
37 while also being among the most space-conserving, portable and tunable. |
|
38 Consistent balance across these factors results in a good general-purpose |
|
39 allocator for malloc-intensive programs. |
|
40 |
|
41 The main properties of the algorithms are: |
|
42 * For large (>= 512 bytes) requests, it is a pure best-fit allocator, |
|
43 with ties normally decided via FIFO (i.e. least recently used). |
|
44 * For small (<= 64 bytes by default) requests, it is a caching |
|
45 allocator that maintains pools of quickly recycled chunks. |
|
46 * In between, and for combinations of large and small requests, it does |
|
47 the best it can trying to meet both goals at once. |
|
48 * For very large requests (>= 128KB by default), it relies on system |
|
49 memory mapping facilities, if supported. |
|
50 |
|
51 For a longer but slightly out of date high-level description, see |
|
52 http://gee.cs.oswego.edu/dl/html/malloc.html |
|
53 |
|
54 You may already by default be using a C library containing a malloc |
|
55 that is based on some version of this malloc (for example in |
|
56 linux). You might still want to use the one in this file in order to |
|
57 customize settings or to avoid overheads associated with library |
|
58 versions. |
|
59 |
|
60 * Contents, described in more detail in "description of public routines" below. |
|
61 |
|
62 Standard (ANSI/SVID/...) functions: |
|
63 malloc(size_t n); |
|
64 calloc(size_t n_elements, size_t element_size); |
|
65 free(Void_t* p); |
|
66 realloc(Void_t* p, size_t n); |
|
67 memalign(size_t alignment, size_t n); |
|
68 valloc(size_t n); |
|
69 mallinfo() |
|
70 mallopt(int parameter_number, int parameter_value) |
|
71 |
|
72 Additional functions: |
|
73 independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]); |
|
74 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]); |
|
75 pvalloc(size_t n); |
|
76 cfree(Void_t* p); |
|
77 malloc_trim(size_t pad); |
|
78 malloc_usable_size(Void_t* p); |
|
79 malloc_stats(); |
|
80 |
|
81 * Vital statistics: |
|
82 |
|
83 Supported pointer representation: 4 or 8 bytes |
|
84 Supported size_t representation: 4 or 8 bytes |
|
85 Note that size_t is allowed to be 4 bytes even if pointers are 8. |
|
86 You can adjust this by defining INTERNAL_SIZE_T |
|
87 |
|
88 Alignment: 2 * sizeof(size_t) (default) |
|
89 (i.e., 8-byte alignment with 4-byte size_t). This suffices for |
|
90 nearly all current machines and C compilers. However, you can |
|
91 define MALLOC_ALIGNMENT to be wider than this if necessary. |
|
92 |
|
93 Minimum overhead per allocated chunk: 4 or 8 bytes |
|
94 Each malloced chunk has a hidden word of overhead holding size |
|
95 and status information. |
|
96 |
|
97 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead) |
|
98 8-byte ptrs: 24/32 bytes (including 4/8 overhead) |
|
99 |
|
100 When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte |
|
101 ptrs but 4-byte size) or 24 (for 8/8) additional bytes are |
|
102 needed; 4 (8) for a trailing size field and 8 (16) bytes for |
|
103 free list pointers. Thus, the minimum allocatable size is |
|
104 16/24/32 bytes. |
|
105 |
|
106 Even a request for zero bytes (i.e., malloc(0)) returns a |
|
107 pointer to something of the minimum allocatable size. |
|
108 |
|
109 The maximum overhead wastage (i.e., number of extra bytes |
|
110 allocated than were requested in malloc) is less than or equal |
|
111 to the minimum size, except for requests >= mmap_threshold that |
|
112 are serviced via mmap(), where the worst case wastage is 2 * |
|
113 sizeof(size_t) bytes plus the remainder from a system page (the |
|
114 minimal mmap unit); typically 4096 or 8192 bytes. |
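As an illustrative example (an added note, assuming 4-byte size_t and
the default 8-byte alignment): a request for 13 bytes is padded by the
4-byte size/status word and rounded up to a multiple of 8, so it
occupies a 24-byte chunk, of which 20 bytes are usable by the caller.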
|
115 |
|
116 Maximum allocated size: 4-byte size_t: 2^32 minus about two pages |
|
117 8-byte size_t: 2^64 minus about two pages |
|
118 |
|
119 It is assumed that (possibly signed) size_t values suffice to |
|
120 represent chunk sizes. `Possibly signed' is due to the fact |
|
121 that `size_t' may be defined on a system as either a signed or |
|
122 an unsigned type. The ISO C standard says that it must be |
|
123 unsigned, but a few systems are known not to adhere to this. |
|
124 Additionally, even when size_t is unsigned, sbrk (which is by |
|
125 default used to obtain memory from system) accepts signed |
|
126 arguments, and may not be able to handle size_t-wide arguments |
|
127 with negative sign bit. Generally, values that would |
|
128 appear as negative after accounting for overhead and alignment |
|
129 are supported only via mmap(), which does not have this |
|
130 limitation. |
|
131 |
|
132 Requests for sizes outside the allowed range will perform an optional |
|
133 failure action and then return null. (Requests may also |
|
134 fail because the system is out of memory.) |
|
135 |
|
136 Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined |
|
137 |
|
138 When USE_MALLOC_LOCK is defined, wrappers are created to |
|
139 surround every public call with either a pthread mutex or |
|
140 a win32 spinlock (depending on WIN32). This is not |
|
141 especially fast, and can be a major bottleneck. |
|
142 It is designed only to provide minimal protection |
|
143 in concurrent environments, and to provide a basis for |
|
144 extensions. If you are using malloc in a concurrent program, |
|
145 you would be far better off obtaining ptmalloc, which is |
|
146 derived from a version of this malloc, and is well-tuned for |
|
147 concurrent programs. (See http://www.malloc.de) Note that |
|
148 even when USE_MALLOC_LOCK is defined, you can guarantee |
|
149 full thread-safety only if no threads acquire memory through |
|
150 direct calls to MORECORE or other system-level allocators. |
|
151 |
|
152 Compliance: I believe it is compliant with the 1997 Single Unix Specification |
|
153 (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably |
|
154 others as well. |
|
155 |
|
156 * Synopsis of compile-time options: |
|
157 |
|
158 People have reported using previous versions of this malloc on all |
|
159 versions of Unix, sometimes by tweaking some of the defines |
|
160 below. It has been tested most extensively on Solaris and |
|
161 Linux. It is also reported to work on WIN32 platforms. |
|
162 People also report using it in stand-alone embedded systems. |
|
163 |
|
164 The implementation is in straight, hand-tuned ANSI C. It is not |
|
165 at all modular. (Sorry!) It uses a lot of macros. To be at all |
|
166 usable, this code should be compiled using an optimizing compiler |
|
167 (for example gcc -O3) that can simplify expressions and control |
|
168 paths. (FAQ: some macros import variables as arguments rather than |
|
169 declare locals because people reported that some debuggers |
|
170 otherwise get confused.) |
|
171 |
|
172 OPTION DEFAULT VALUE |
|
173 |
|
174 Compilation Environment options: |
|
175 |
|
176 __STD_C derived from C compiler defines |
|
177 WIN32 NOT defined |
|
178 HAVE_MEMCPY defined |
|
179 USE_MEMCPY 1 if HAVE_MEMCPY is defined |
|
180 HAVE_MMAP defined as 1 |
|
181 MMAP_CLEARS 1 |
|
182 HAVE_MREMAP 0 unless linux defined |
|
183 malloc_getpagesize derived from system #includes, or 4096 if not |
|
184 HAVE_USR_INCLUDE_MALLOC_H NOT defined |
|
185 LACKS_UNISTD_H NOT defined unless WIN32 |
|
186 LACKS_SYS_PARAM_H NOT defined unless WIN32 |
|
187 LACKS_SYS_MMAN_H NOT defined unless WIN32 |
|
188 LACKS_FCNTL_H NOT defined |
|
189 |
|
190 Changing default word sizes: |
|
191 |
|
192 INTERNAL_SIZE_T size_t |
|
193 MALLOC_ALIGNMENT 2 * sizeof(INTERNAL_SIZE_T) |
|
194 PTR_UINT unsigned long |
|
195 CHUNK_SIZE_T unsigned long |
|
196 |
|
197 Configuration and functionality options: |
|
198 |
|
199 USE_DL_PREFIX NOT defined |
|
200 USE_PUBLIC_MALLOC_WRAPPERS NOT defined |
|
201 USE_MALLOC_LOCK NOT defined |
|
202 DEBUG NOT defined |
|
203 REALLOC_ZERO_BYTES_FREES NOT defined |
|
204 MALLOC_FAILURE_ACTION errno = ENOMEM, if __STD_C defined, else no-op |
|
205 TRIM_FASTBINS 0 |
|
206 FIRST_SORTED_BIN_SIZE 512 |
|
207 |
|
208 Options for customizing MORECORE: |
|
209 |
|
210 MORECORE sbrk |
|
211 MORECORE_CONTIGUOUS 1 |
|
212 MORECORE_CANNOT_TRIM NOT defined |
|
213 MMAP_AS_MORECORE_SIZE (1024 * 1024) |
|
214 |
|
215 Tuning options that are also dynamically changeable via mallopt: |
|
216 |
|
217 DEFAULT_MXFAST 64 |
|
218 DEFAULT_TRIM_THRESHOLD 256 * 1024 |
|
219 DEFAULT_TOP_PAD 0 |
|
220 DEFAULT_MMAP_THRESHOLD 256 * 1024 |
|
221 DEFAULT_MMAP_MAX 65536 |
|
222 |
|
223 There are several other #defined constants and macros that you |
|
224 probably don't want to touch unless you are extending or adapting malloc. |
|
225 */ |
|
226 |
|
227 /* |
|
228 WIN32 sets up defaults for MS environment and compilers. |
|
229 Otherwise defaults are for unix. |
|
230 */ |
|
231 |
|
232 /* #define WIN32 */ |
|
233 |
|
234 #ifdef WIN32 |
|
235 |
|
236 #define WIN32_LEAN_AND_MEAN |
|
237 #include <windows.h> |
|
238 |
|
239 /* Win32 doesn't supply or need the following headers */ |
|
240 #define LACKS_UNISTD_H |
|
241 #define LACKS_SYS_PARAM_H |
|
242 #define LACKS_SYS_MMAN_H |
|
243 |
|
244 /* Use the supplied emulation of sbrk */ |
|
245 #ifndef MORECORE |
|
246 #define MORECORE sbrk |
|
247 #endif |
|
248 #define MORECORE_CONTIGUOUS 1 |
|
249 #define MORECORE_FAILURE ((void*)(-1)) |
|
250 |
|
251 /* Use the supplied emulation of mmap and munmap */ |
|
252 #ifndef HAVE_MMAP |
|
253 #define HAVE_MMAP 1 |
|
254 #endif |
|
255 #define MUNMAP_FAILURE (-1) |
|
256 #define MMAP_CLEARS 1 |
|
257 |
|
258 /* These values don't really matter in windows mmap emulation */ |
|
259 #define MAP_PRIVATE 1 |
|
260 #define MAP_ANONYMOUS 2 |
|
261 #define PROT_READ 1 |
|
262 #define PROT_WRITE 2 |
|
263 |
|
264 /* Emulation functions defined at the end of this file */ |
|
265 |
|
266 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */ |
|
267 #ifdef USE_MALLOC_LOCK |
|
268 static int slwait(int *sl); |
|
269 static int slrelease(int *sl); |
|
270 #endif |
|
271 |
|
272 static long getpagesize(void); |
|
273 static long getregionsize(void); |
|
274 static void *sbrk(long size); |
|
275 static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg); |
|
276 static long munmap(void *ptr, long size); |
|
277 |
|
278 static void vminfo (unsigned long*free, unsigned long*reserved, unsigned long*committed); |
|
279 static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user); |
|
280 |
|
281 #endif |
|
282 |
|
283 /* |
|
284 __STD_C should be nonzero if using ANSI-standard C compiler, a C++ |
|
285 compiler, or a C compiler sufficiently close to ANSI to get away |
|
286 with it. |
|
287 */ |
|
288 |
|
289 #ifndef __STD_C |
|
290 #if defined(__STDC__) || defined(__cplusplus) |
|
291 #define __STD_C 1 |
|
292 #else |
|
293 #define __STD_C 0 |
|
294 #endif |
|
295 #endif /*__STD_C*/ |
|
296 |
|
297 |
|
298 /* |
|
299 Void_t* is the pointer type that malloc should say it returns |
|
300 */ |
|
301 |
|
302 #ifndef Void_t |
|
303 #if (__STD_C || defined(WIN32)) |
|
304 #define Void_t void |
|
305 #else |
|
306 #define Void_t char |
|
307 #endif |
|
308 #endif /*Void_t*/ |
|
309 |
|
310 #if __STD_C |
|
311 #include <stddef.h> /* for size_t */ |
|
312 #else |
|
313 #include <sys/types.h> |
|
314 #endif |
|
315 |
|
316 #ifdef __cplusplus |
|
317 extern "C" { |
|
318 #endif |
|
319 |
|
320 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */ |
|
321 |
|
322 /* #define LACKS_UNISTD_H */ |
|
323 |
|
324 #ifndef LACKS_UNISTD_H |
|
325 #include <unistd.h> |
|
326 #endif |
|
327 |
|
328 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */ |
|
329 |
|
330 /* #define LACKS_SYS_PARAM_H */ |
|
331 |
|
332 |
|
333 #include <stdio.h> /* needed for malloc_stats */ |
|
334 |
|
335 #ifndef LACKS_ERRNO_H |
|
336 #include <errno.h> /* needed for optional MALLOC_FAILURE_ACTION */ |
|
337 #endif |
|
338 |
|
339 /* |
|
340 Debugging: |
|
341 |
|
342 Because freed chunks may be overwritten with bookkeeping fields, this |
|
343 malloc will often die when freed memory is overwritten by user |
|
344 programs. This can be very effective (albeit in an annoying way) |
|
345 in helping track down dangling pointers. |
|
346 |
|
347 If you compile with -DDEBUG, a number of assertion checks are |
|
348 enabled that will catch more memory errors. You probably won't be |
|
349 able to make much sense of the actual assertion errors, but they |
|
350 should help you locate incorrectly overwritten memory. The |
|
351 checking is fairly extensive, and will slow down execution |
|
352 noticeably. Calling malloc_stats or mallinfo with DEBUG set will |
|
353 attempt to check every non-mmapped allocated and free chunk in the |
|
354 course of computing the summaries. (By nature, mmapped regions |
|
355 cannot be checked very much automatically.) |
|
356 |
|
357 Setting DEBUG may also be helpful if you are trying to modify |
|
358 this code. The assertions in the check routines spell out in more |
|
359 detail the assumptions and invariants underlying the algorithms. |
|
360 |
|
361 Setting DEBUG does NOT provide an automated mechanism for checking |
|
362 that all accesses to malloced memory stay within their |
|
363 bounds. However, there are several add-ons and adaptations of this |
|
364 or other mallocs available that do this. |
|
365 */ |
|
366 |
|
367 #if DEBUG |
|
368 #include <assert.h> |
|
369 #else |
|
370 #define assert(x) ((void)0) |
|
371 #endif |
|
372 |
|
373 /* |
|
374 The unsigned integer type used for comparing any two chunk sizes. |
|
375 This should be at least as wide as size_t, but should not be signed. |
|
376 */ |
|
377 |
|
378 #ifndef CHUNK_SIZE_T |
|
379 #define CHUNK_SIZE_T unsigned long |
|
380 #endif |
|
381 |
|
382 /* |
|
383 The unsigned integer type used to hold addresses when they are |
|
384 manipulated as integers. Except that it is not defined on all |
|
385 systems, intptr_t would suffice. |
|
386 */ |
|
387 #ifndef PTR_UINT |
|
388 #define PTR_UINT unsigned long |
|
389 #endif |
|
390 |
|
391 |
|
392 /* |
|
393 INTERNAL_SIZE_T is the word-size used for internal bookkeeping |
|
394 of chunk sizes. |
|
395 |
|
396 The default version is the same as size_t. |
|
397 |
|
398 While not strictly necessary, it is best to define this as an |
|
399 unsigned type, even if size_t is a signed type. This may avoid some |
|
400 artificial size limitations on some systems. |
|
401 |
|
402 On a 64-bit machine, you may be able to reduce malloc overhead by |
|
403 defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the |
|
404 expense of not being able to handle more than 2^32 bytes of malloced |
|
405 space. If this limitation is acceptable, you are encouraged to set |
|
406 this unless you are on a platform requiring 16-byte alignments. In |
|
407 this case the alignment requirements turn out to negate any |
|
408 potential advantages of decreasing size_t word size. |
|
409 |
|
410 Implementors: Beware of the possible combinations of: |
|
411 - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits, |
|
412 and might be the same width as int or as long |
|
413 - size_t might differ in width and signedness from INTERNAL_SIZE_T |
|
414 - int and long might be 32 or 64 bits, and might be the same width |
|
415 To deal with this, most comparisons and difference computations |
|
416 among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being |
|
417 aware of the fact that casting an unsigned int to a wider long does |
|
418 not sign-extend. (This also makes checking for negative numbers |
|
419 awkward.) Some of these casts result in harmless compiler warnings |
|
420 on some systems. |
|
421 */ |
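
/*
  For example (an illustrative sketch of the casting idiom described
  above; the variable names nb and size are hypothetical): a size
  comparison between two INTERNAL_SIZE_T values is safest written as

    if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) ...

  so that the comparison happens in an unsigned type at least as wide
  as size_t, regardless of how INTERNAL_SIZE_T is configured.
*/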
|
422 |
|
423 #ifndef INTERNAL_SIZE_T |
|
424 #define INTERNAL_SIZE_T size_t |
|
425 #endif |
|
426 |
|
427 /* The corresponding word size */ |
|
428 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T)) |
|
429 |
|
430 |
|
431 |
|
432 /* |
|
433 MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks. |
|
434 It must be a power of two at least 2 * SIZE_SZ, even on machines |
|
435 for which smaller alignments would suffice. It may be defined as |
|
436 larger than this though. Note however that code and data structures |
|
437 are optimized for the case of 8-byte alignment. |
|
438 */ |
|
439 |
|
440 |
|
441 #ifndef MALLOC_ALIGNMENT |
|
442 #define MALLOC_ALIGNMENT (2 * SIZE_SZ) |
|
443 #endif |
|
444 |
|
445 /* The corresponding bit mask value */ |
|
446 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1) |
|
447 |
|
448 |
|
449 |
|
450 /* |
|
451 REALLOC_ZERO_BYTES_FREES should be set if a call to |
|
452 realloc with zero bytes should be the same as a call to free. |
|
453 Some people think it should. Otherwise, since this malloc |
|
454 returns a unique pointer for malloc(0), so does realloc(p, 0). |
|
455 */ |
|
456 |
|
457 /* #define REALLOC_ZERO_BYTES_FREES */ |
|
458 |
|
459 /* |
|
460 TRIM_FASTBINS controls whether free() of a very small chunk can |
|
461 immediately lead to trimming. Setting to true (1) can reduce memory |
|
462 footprint, but will almost always slow down programs that use a lot |
|
463 of small chunks. |
|
464 |
|
465 Define this only if you are willing to give up some speed to more |
|
466 aggressively reduce system-level memory footprint when releasing |
|
467 memory in programs that use many small chunks. You can get |
|
468 essentially the same effect by setting MXFAST to 0, but this can |
|
469 lead to even greater slowdowns in programs using many small chunks. |
|
470 TRIM_FASTBINS is an in-between compile-time option that prevents |
|
471 only those chunks bordering topmost memory from being placed in |
|
472 fastbins. |
|
473 */ |
|
474 |
|
475 #ifndef TRIM_FASTBINS |
|
476 #define TRIM_FASTBINS 0 |
|
477 #endif |
|
478 |
|
479 |
|
480 /* |
|
481 USE_DL_PREFIX will prefix all public routines with the string 'dl'. |
|
482 This is necessary when you only want to use this malloc in one part |
|
483 of a program, using your regular system malloc elsewhere. |
|
484 */ |
|
485 |
|
486 /* #define USE_DL_PREFIX */ |
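
/*
  For instance (an illustrative usage sketch, assuming the file is
  compiled with -DUSE_DL_PREFIX): the prefixed entry points can then
  coexist with the regular system allocator:

    void* p = dlmalloc(100);   // served by this malloc
    void* q = malloc(100);     // served by the system malloc
    dlfree(p);
    free(q);
*/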
|
487 |
|
488 |
|
489 /* |
|
490 USE_MALLOC_LOCK causes wrapper functions to surround each |
|
491 callable routine with pthread mutex lock/unlock. |
|
492 |
|
493 USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined |
|
494 */ |
|
495 |
|
496 |
|
497 /* #define USE_MALLOC_LOCK */ |
|
498 |
|
499 |
|
500 /* |
|
501 If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is |
|
502 actually a wrapper function that first calls MALLOC_PREACTION, then |
|
503 calls the internal routine, and follows it with |
|
504 MALLOC_POSTACTION. This is needed for locking, but you can also use |
|
505 this, without USE_MALLOC_LOCK, for purposes of interception, |
|
506 instrumentation, etc. It is a sad fact that using wrappers often |
|
507 noticeably degrades performance of malloc-intensive programs. |
|
508 */ |
|
509 |
|
510 #ifdef USE_MALLOC_LOCK |
|
511 #define USE_PUBLIC_MALLOC_WRAPPERS |
|
512 #else |
|
513 /* #define USE_PUBLIC_MALLOC_WRAPPERS */ |
|
514 #endif |
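
/*
  A minimal sketch of the wrapper scheme described above (illustrative
  only; the real MALLOC_PREACTION and MALLOC_POSTACTION definitions
  depend on the locking choice and appear with the wrappers themselves):

    Void_t* public_mALLOc(size_t bytes) {
      Void_t* m;
      if (MALLOC_PREACTION != 0)    // e.g. acquire a lock; nonzero means failure
        return 0;
      m = mALLOc(bytes);            // call the internal (mangled) routine
      if (MALLOC_POSTACTION != 0) { // e.g. release the lock
      }
      return m;
    }
*/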
|
515 |
|
516 |
|
517 /* |
|
518 Two-phase name translation. |
|
519 All of the actual routines are given mangled names. |
|
520 When wrappers are used, they become the public callable versions. |
|
521 When DL_PREFIX is used, the callable names are prefixed. |
|
522 */ |
|
523 |
|
524 #ifndef USE_PUBLIC_MALLOC_WRAPPERS |
|
525 #define cALLOc public_cALLOc |
|
526 #define fREe public_fREe |
|
527 #define cFREe public_cFREe |
|
528 #define mALLOc public_mALLOc |
|
529 #define mEMALIGn public_mEMALIGn |
|
530 #define rEALLOc public_rEALLOc |
|
531 #define vALLOc public_vALLOc |
|
532 #define pVALLOc public_pVALLOc |
|
533 #define mALLINFo public_mALLINFo |
|
534 #define mALLOPt public_mALLOPt |
|
535 #define mTRIm public_mTRIm |
|
536 #define mSTATs public_mSTATs |
|
537 #define mUSABLe public_mUSABLe |
|
538 #define iCALLOc public_iCALLOc |
|
539 #define iCOMALLOc public_iCOMALLOc |
|
540 #endif |
|
541 |
|
542 #ifdef USE_DL_PREFIX |
|
543 #define public_cALLOc dlcalloc |
|
544 #define public_fREe dlfree |
|
545 #define public_cFREe dlcfree |
|
546 #define public_mALLOc dlmalloc |
|
547 #define public_mEMALIGn dlmemalign |
|
548 #define public_rEALLOc dlrealloc |
|
549 #define public_vALLOc dlvalloc |
|
550 #define public_pVALLOc dlpvalloc |
|
551 #define public_mALLINFo dlmallinfo |
|
552 #define public_mALLOPt dlmallopt |
|
553 #define public_mTRIm dlmalloc_trim |
|
554 #define public_mSTATs dlmalloc_stats |
|
555 #define public_mUSABLe dlmalloc_usable_size |
|
556 #define public_iCALLOc dlindependent_calloc |
|
557 #define public_iCOMALLOc dlindependent_comalloc |
|
558 #else /* USE_DL_PREFIX */ |
|
559 #define public_cALLOc calloc |
|
560 #define public_fREe free |
|
561 #define public_cFREe cfree |
|
562 #define public_mALLOc malloc |
|
563 #define public_mEMALIGn memalign |
|
564 #define public_rEALLOc realloc |
|
565 #define public_vALLOc valloc |
|
566 #define public_pVALLOc pvalloc |
|
567 #define public_mALLINFo mallinfo |
|
568 #define public_mALLOPt mallopt |
|
569 #define public_mTRIm malloc_trim |
|
570 #define public_mSTATs malloc_stats |
|
571 #define public_mUSABLe malloc_usable_size |
|
572 #define public_iCALLOc independent_calloc |
|
573 #define public_iCOMALLOc independent_comalloc |
|
574 #endif /* USE_DL_PREFIX */ |
|
575 |
|
576 |
|
577 /* |
|
578 HAVE_MEMCPY should be defined if you are not otherwise using |
|
579 ANSI STD C, but still have memcpy and memset in your C library |
|
580 and want to use them in calloc and realloc. Otherwise simple |
|
581 macro versions are defined below. |
|
582 |
|
583 USE_MEMCPY should be defined as 1 if you actually want to |
|
584 have memset and memcpy called. People report that the macro |
|
585 versions are faster than libc versions on some systems. |
|
586 |
|
587 Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks |
|
588 (of <= 36 bytes) are manually unrolled in realloc and calloc. |
|
589 */ |
|
590 |
|
591 #define HAVE_MEMCPY |
|
592 |
|
593 #ifndef USE_MEMCPY |
|
594 #ifdef HAVE_MEMCPY |
|
595 #define USE_MEMCPY 1 |
|
596 #else |
|
597 #define USE_MEMCPY 0 |
|
598 #endif |
|
599 #endif |
|
600 |
|
601 |
|
602 #if (__STD_C || defined(HAVE_MEMCPY)) |
|
603 |
|
604 #ifdef WIN32 |
|
605 /* On Win32 memset and memcpy are already declared in windows.h */ |
|
606 #else |
|
607 #if __STD_C |
|
608 void* memset(void*, int, size_t); |
|
609 void* memcpy(void*, const void*, size_t); |
|
610 #else |
|
611 Void_t* memset(); |
|
612 Void_t* memcpy(); |
|
613 #endif |
|
614 #endif |
|
615 #endif |
|
616 |
|
617 /* |
|
618 MALLOC_FAILURE_ACTION is the action to take before "return 0" when |
|
619 malloc fails to be able to return memory, either because memory is |
|
620 exhausted or because of illegal arguments. |
|
621 |
|
622 By default, sets errno if running on STD_C platform, else does nothing. |
|
623 */ |
|
624 |
|
625 #ifndef MALLOC_FAILURE_ACTION |
|
626 #if __STD_C |
|
627 #define MALLOC_FAILURE_ACTION \ |
|
628 errno = ENOMEM; |
|
629 |
|
630 #else |
|
631 #define MALLOC_FAILURE_ACTION |
|
632 #endif |
|
633 #endif |
|
634 |
|
635 /* |
|
636 MORECORE-related declarations. By default, rely on sbrk |
|
637 */ |
|
638 |
|
639 |
|
640 #ifdef LACKS_UNISTD_H |
|
641 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__) |
|
642 #if __STD_C |
|
643 extern Void_t* sbrk(ptrdiff_t); |
|
644 #else |
|
645 extern Void_t* sbrk(); |
|
646 #endif |
|
647 #endif |
|
648 #endif |
|
649 |
|
650 /* |
|
651 MORECORE is the name of the routine to call to obtain more memory |
|
652 from the system. See below for general guidance on writing |
|
653 alternative MORECORE functions, as well as a version for WIN32 and a |
|
654 sample version for pre-OSX macos. |
|
655 */ |
|
656 |
|
657 #ifndef MORECORE |
|
658 #define MORECORE sbrk |
|
659 #endif |
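
/*
  A minimal sketch of an alternative MORECORE for a system without
  sbrk (illustrative only; my_morecore, my_heap and the 16MB size are
  hypothetical, and fuller guidance on MORECORE replacements appears
  further below in this file):

    #define MORECORE my_morecore
    #define MORECORE_CANNOT_TRIM

    static char   my_heap[16 * 1024 * 1024];   // statically reserved arena
    static size_t my_brk = 0;

    void* my_morecore(ptrdiff_t increment) {
      if (increment <= 0)
        return my_heap + my_brk;               // report current break; ignore shrink requests
      if (my_brk + increment > sizeof(my_heap))
        return (void*) MORECORE_FAILURE;       // out of reserved space
      my_brk += increment;
      return my_heap + (my_brk - increment);   // contiguous, increasing addresses
    }
*/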
|
660 |
|
661 /* |
|
662 MORECORE_FAILURE is the value returned upon failure of MORECORE |
|
663 as well as mmap. Since it cannot be an otherwise valid memory address, |
|
664 and must reflect values of standard sys calls, you probably ought not |
|
665 try to redefine it. |
|
666 */ |
|
667 |
|
668 #ifndef MORECORE_FAILURE |
|
669 #define MORECORE_FAILURE (-1) |
|
670 #endif |
|
671 |
|
672 /* |
|
673 If MORECORE_CONTIGUOUS is true, take advantage of fact that |
|
674 consecutive calls to MORECORE with positive arguments always return |
|
675 contiguous increasing addresses. This is true of unix sbrk. Even |
|
676 if not defined, when regions happen to be contiguous, malloc will |
|
677 permit allocations spanning regions obtained from different |
|
678 calls. But defining this when applicable enables some stronger |
|
679 consistency checks and space efficiencies. |
|
680 */ |
|
681 |
|
682 #ifndef MORECORE_CONTIGUOUS |
|
683 #define MORECORE_CONTIGUOUS 1 |
|
684 #endif |
|
685 |
|
686 /* |
|
687 Define MORECORE_CANNOT_TRIM if your version of MORECORE |
|
688 cannot release space back to the system when given negative |
|
689 arguments. This is generally necessary only if you are using |
|
690 a hand-crafted MORECORE function that cannot handle negative arguments. |
|
691 */ |
|
692 |
|
693 /* #define MORECORE_CANNOT_TRIM */ |
|
694 |
|
695 |
|
696 /* |
|
697 Define HAVE_MMAP as true to optionally make malloc() use mmap() to |
|
698 allocate very large blocks. These will be returned to the |
|
699 operating system immediately after a free(). Also, if mmap |
|
700 is available, it is used as a backup strategy in cases where |
|
701 MORECORE fails to provide space from system. |
|
702 |
|
703 This malloc is best tuned to work with mmap for large requests. |
|
704 If you do not have mmap, operations involving very large chunks (1MB |
|
705 or so) may be slower than you'd like. |
|
706 */ |
|
707 |
|
708 #ifndef HAVE_MMAP |
|
709 #define HAVE_MMAP 1 |
|
710 #endif |
|
711 |
|
712 #if HAVE_MMAP |
|
713 /* |
|
714 Standard unix mmap using /dev/zero clears memory so calloc doesn't |
|
715 need to. |
|
716 */ |
|
717 |
|
718 #ifndef MMAP_CLEARS |
|
719 #define MMAP_CLEARS 1 |
|
720 #endif |
|
721 |
|
722 #else /* no mmap */ |
|
723 #ifndef MMAP_CLEARS |
|
724 #define MMAP_CLEARS 0 |
|
725 #endif |
|
726 #endif |
|
727 |
|
728 |
|
729 /* |
|
730 MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if |
|
731 sbrk fails, and mmap is used as a backup (which is done only if |
|
732 HAVE_MMAP). The value must be a multiple of page size. This |
|
733 backup strategy generally applies only when systems have "holes" in |
|
734 address space, so sbrk cannot perform contiguous expansion, but |
|
735 there is still space available on system. On systems for which |
|
736 this is known to be useful (i.e. most linux kernels), this occurs |
|
737 only when programs allocate huge amounts of memory. Between this, |
|
738 and the fact that mmap regions tend to be limited, the size should |
|
739 be large, to avoid too many mmap calls and thus avoid running out |
|
740 of kernel resources. |
|
741 */ |
|
742 |
|
743 #ifndef MMAP_AS_MORECORE_SIZE |
|
744 #define MMAP_AS_MORECORE_SIZE (1024 * 1024) |
|
745 #endif |
|
746 |
|
747 /* |
|
748 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate |
|
749 large blocks. This is currently only possible on Linux with |
|
750 kernel versions newer than 1.3.77. |
|
751 */ |
|
752 |
|
753 #ifndef HAVE_MREMAP |
|
754 #ifdef linux |
|
755 #define HAVE_MREMAP 1 |
|
756 #else |
|
757 #define HAVE_MREMAP 0 |
|
758 #endif |
|
759 |
|
760 #endif /* HAVE_MREMAP */ |
|
761 |
|
762 |
|
763 /* |
|
764 The system page size. To the extent possible, this malloc manages |
|
765 memory from the system in page-size units. Note that this value is |
|
766 cached during initialization into a field of malloc_state. So even |
|
767 if malloc_getpagesize is a function, it is only called once. |
|
768 |
|
769 The following mechanics for getpagesize were adapted from bsd/gnu |
|
770 getpagesize.h. If none of the system-probes here apply, a value of |
|
771 4096 is used, which should be OK: If they don't apply, then using |
|
772 the actual value probably doesn't impact performance. |
|
773 */ |
|
774 |
|
775 |
|
776 #ifndef malloc_getpagesize |
|
777 |
|
778 #ifndef LACKS_UNISTD_H |
|
779 # include <unistd.h> |
|
780 #endif |
|
781 |
|
782 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */ |
|
783 # ifndef _SC_PAGE_SIZE |
|
784 # define _SC_PAGE_SIZE _SC_PAGESIZE |
|
785 # endif |
|
786 # endif |
|
787 |
|
788 # ifdef _SC_PAGE_SIZE |
|
789 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE) |
|
790 # else |
|
791 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) |
|
792 extern size_t getpagesize(); |
|
793 # define malloc_getpagesize getpagesize() |
|
794 # else |
|
795 # ifdef WIN32 /* use supplied emulation of getpagesize */ |
|
796 # define malloc_getpagesize getpagesize() |
|
797 # else |
|
798 # ifndef LACKS_SYS_PARAM_H |
|
799 # include <sys/param.h> |
|
800 # endif |
|
801 # ifdef EXEC_PAGESIZE |
|
802 # define malloc_getpagesize EXEC_PAGESIZE |
|
803 # else |
|
804 # ifdef NBPG |
|
805 # ifndef CLSIZE |
|
806 # define malloc_getpagesize NBPG |
|
807 # else |
|
808 # define malloc_getpagesize (NBPG * CLSIZE) |
|
809 # endif |
|
810 # else |
|
811 # ifdef NBPC |
|
812 # define malloc_getpagesize NBPC |
|
813 # else |
|
814 # ifdef PAGESIZE |
|
815 # define malloc_getpagesize PAGESIZE |
|
816 # else /* just guess */ |
|
817 # define malloc_getpagesize (4096) |
|
818 # endif |
|
819 # endif |
|
820 # endif |
|
821 # endif |
|
822 # endif |
|
823 # endif |
|
824 # endif |
|
825 #endif |
|
826 |
|
827 /* |
|
828 This version of malloc supports the standard SVID/XPG mallinfo |
|
829 routine that returns a struct containing usage properties and |
|
830 statistics. It should work on any SVID/XPG compliant system that has |
|
831 a /usr/include/malloc.h defining struct mallinfo. (If you'd like to |
|
832 install such a thing yourself, cut out the preliminary declarations |
|
833 as described above and below and save them in a malloc.h file. But |
|
834 there's no compelling reason to bother to do this.) |
|
835 |
|
836 The main declaration needed is the mallinfo struct that is returned |
|
837 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a |
|
838 bunch of fields that are not even meaningful in this version of |
|
839 malloc. These fields are instead filled by mallinfo() with |
|
840 other numbers that might be of interest. |
|
841 |
|
842 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a |
|
843 /usr/include/malloc.h file that includes a declaration of struct |
|
844 mallinfo. If so, it is included; else an SVID2/XPG2 compliant |
|
845 version is declared below. These must be precisely the same for |
|
846 mallinfo() to work. The original SVID version of this struct, |
|
847 defined on most systems with mallinfo, declares all fields as |
|
848 ints. But some others define them as unsigned long. If your system |
|
849 defines the fields using a type of different width than listed here, |
|
850 you must #include your system version and #define |
|
851 HAVE_USR_INCLUDE_MALLOC_H. |
|
852 */ |
|
853 |
|
854 /* #define HAVE_USR_INCLUDE_MALLOC_H */ |
|
855 |
|
856 #ifdef HAVE_USR_INCLUDE_MALLOC_H |
|
857 #include "/usr/include/malloc.h" |
|
858 #else |
|
859 |
|
860 /* SVID2/XPG mallinfo structure */ |
|
861 |
|
862 struct mallinfo { |
|
863 int arena; /* non-mmapped space allocated from system */ |
|
864 int ordblks; /* number of free chunks */ |
|
865 int smblks; /* number of fastbin blocks */ |
|
866 int hblks; /* number of mmapped regions */ |
|
867 int hblkhd; /* space in mmapped regions */ |
|
868 int usmblks; /* maximum total allocated space */ |
|
869 int fsmblks; /* space available in freed fastbin blocks */ |
|
870 int uordblks; /* total allocated space */ |
|
871 int fordblks; /* total free space */ |
|
872 int keepcost; /* top-most, releasable (via malloc_trim) space */ |
|
873 }; |
|
874 |
|
875 /* |
|
876 SVID/XPG defines four standard parameter numbers for mallopt, |
|
877 normally defined in malloc.h. Only one of these (M_MXFAST) is used |
|
878 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply, |
|
879 so setting them has no effect. But this malloc also supports other |
|
880 options in mallopt described below. |
|
881 */ |
|
882 #endif |
|
883 |
|
884 |
|
885 /* ---------- description of public routines ------------ */ |
|
886 |
|
887 /* |
|
888 malloc(size_t n) |
|
889 Returns a pointer to a newly allocated chunk of at least n bytes, or null |
|
890 if no space is available. Additionally, on failure, errno is |
|
891 set to ENOMEM on ANSI C systems. |
|
892 |
|
893 If n is zero, malloc returns a minimum-sized chunk. (The minimum |
|
894 size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit |
|
895 systems.) On most systems, size_t is an unsigned type, so calls |
|
896 with negative arguments are interpreted as requests for huge amounts |
|
897 of space, which will often fail. The maximum supported value of n |
|
898 differs across systems, but is in all cases less than the maximum |
|
899 representable value of a size_t. |
|
900 */ |
|
901 #if __STD_C |
|
902 Void_t* public_mALLOc(size_t); |
|
903 #else |
|
904 Void_t* public_mALLOc(); |
|
905 #endif |
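
/*
  For example (an illustrative usage sketch, not part of the original
  documentation):

    char* buf = (char*) malloc(1024);
    if (buf == 0) {
      // On ANSI C systems, errno has been set to ENOMEM.
      perror("malloc");
      exit(1);
    }
*/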
|
906 |
|
907 /* |
|
908 free(Void_t* p) |
|
909 Releases the chunk of memory pointed to by p, that had been previously |
|
910 allocated using malloc or a related routine such as realloc. |
|
911 It has no effect if p is null. It can have arbitrary (i.e., bad!) |
|
912 effects if p has already been freed. |
|
913 |
|
914 Unless disabled (using mallopt), freeing very large spaces will, |
|
915 when possible, automatically trigger operations that give |
|
916 back unused memory to the system, thus reducing program footprint. |
|
917 */ |
|
918 #if __STD_C |
|
919 void public_fREe(Void_t*); |
|
920 #else |
|
921 void public_fREe(); |
|
922 #endif |
|
923 |
|
924 /* |
|
925 calloc(size_t n_elements, size_t element_size); |
|
926 Returns a pointer to n_elements * element_size bytes, with all locations |
|
927 set to zero. |
|
928 */ |
|
929 #if __STD_C |
|
930 Void_t* public_cALLOc(size_t, size_t); |
|
931 #else |
|
932 Void_t* public_cALLOc(); |
|
933 #endif |
|
934 |
|
935 /* |
|
936 realloc(Void_t* p, size_t n) |
|
937 Returns a pointer to a chunk of size n that contains the same data |
|
938 as does chunk p up to the minimum of (n, p's size) bytes, or null |
|
939 if no space is available. |
|
940 |
|
941 The returned pointer may or may not be the same as p. The algorithm |
|
942 prefers extending p when possible, otherwise it employs the |
|
943 equivalent of a malloc-copy-free sequence. |
|
944 |
|
945 If p is null, realloc is equivalent to malloc. |
|
946 |
|
947 If space is not available, realloc returns null, errno is set (if on |
|
948 ANSI) and p is NOT freed. |
|
949 |
|
950 If n is for fewer bytes than already held by p, the newly unused |
|
951 space is lopped off and freed if possible. Unless the #define |
|
952 REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of |
|
953 zero (re)allocates a minimum-sized chunk. |
|
954 |
|
955 Large chunks that were internally obtained via mmap will always |
|
956 be reallocated using malloc-copy-free sequences unless |
|
957 the system supports MREMAP (currently only linux). |
|
958 |
|
959 The old unix realloc convention of allowing the last-free'd chunk |
|
960 to be used as an argument to realloc is not supported. |
|
961 */ |
|
962 #if __STD_C |
|
963 Void_t* public_rEALLOc(Void_t*, size_t); |
|
964 #else |
|
965 Void_t* public_rEALLOc(); |
|
966 #endif |
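
/*
  Because p is NOT freed when realloc fails, the usual idiom is to
  assign the result to a temporary first (an illustrative sketch;
  handle_oom is a hypothetical error handler):

    void* bigger = realloc(p, 2 * n);
    if (bigger != 0)
      p = bigger;        // success: p now refers to the resized chunk
    else
      handle_oom(p);     // failure: p is still valid and must still be freed
*/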
|
967 |
|
968 /* |
|
969 memalign(size_t alignment, size_t n); |
|
970 Returns a pointer to a newly allocated chunk of n bytes, aligned |
|
971 in accord with the alignment argument. |
|
972 |
|
973 The alignment argument should be a power of two. If the argument is |
|
974 not a power of two, the nearest greater power is used. |
|
975 8-byte alignment is guaranteed by normal malloc calls, so don't |
|
976 bother calling memalign with an argument of 8 or less. |
|
977 |
|
978 Overreliance on memalign is a sure way to fragment space. |
|
979 */ |
|
980 #if __STD_C |
|
981 Void_t* public_mEMALIGn(size_t, size_t); |
|
982 #else |
|
983 Void_t* public_mEMALIGn(); |
|
984 #endif |
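
/*
  For example (an illustrative sketch): to obtain a 4096-byte buffer
  whose address is a multiple of 64:

    void* p = memalign(64, 4096);
    if (p != 0)
      assert(((PTR_UINT) p % 64) == 0);   // 64 is a power of two, so it is used as-is
*/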
|
985 |
|
986 /* |
|
987 valloc(size_t n); |
|
988 Equivalent to memalign(pagesize, n), where pagesize is the page |
|
989 size of the system. If the pagesize is unknown, 4096 is used. |
|
990 */ |
|
991 #if __STD_C |
|
992 Void_t* public_vALLOc(size_t); |
|
993 #else |
|
994 Void_t* public_vALLOc(); |
|
995 #endif |
|
996 |
|
997 |
|
998 |
|
999 /* |
|
1000 mallopt(int parameter_number, int parameter_value) |
|
1001 Sets tunable parameters. The format is to provide a |
|
1002 (parameter-number, parameter-value) pair. mallopt then sets the |
|
1003 corresponding parameter to the argument value if it can (i.e., so |
|
1004 long as the value is meaningful), and returns 1 if successful else |
|
1005 0. SVID/XPG/ANSI defines four standard param numbers for mallopt, |
|
1006 normally defined in malloc.h. Only one of these (M_MXFAST) is used |
|
1007 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply, |
|
1008 so setting them has no effect. But this malloc also supports four |
|
1009 other options in mallopt. See below for details. Briefly, supported |
|
1010 parameters are as follows (listed defaults are for "typical" |
|
1011 configurations). |
|
1012 |
|
1013 Symbol param # default allowed param values |
|
1014 M_MXFAST 1 64 0-80 (0 disables fastbins) |
|
1015 M_TRIM_THRESHOLD -1 256*1024 any (-1U disables trimming) |
|
1016 M_TOP_PAD -2 0 any |
|
1017 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support) |
|
1018 M_MMAP_MAX -4 65536 any (0 disables use of mmap) |
|
1019 */ |
|
1020 #if __STD_C |
|
1021 int public_mALLOPt(int, int); |
|
1022 #else |
|
1023 int public_mALLOPt(); |
|
1024 #endif |
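
/*
  For example (an illustrative sketch using the parameter numbers
  tabulated above):

    mallopt(M_MXFAST, 32);                 // only requests <= 32 bytes use fastbins
    mallopt(M_TRIM_THRESHOLD, 128*1024);   // trim when 128K of top-most memory is unused
    mallopt(M_MMAP_MAX, 0);                // never service requests via mmap

  Each call returns 1 if the setting was accepted, else 0.
*/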
|
1025 |
|
1026 |
|
1027 /* |
|
1028 mallinfo() |
|
1029 Returns (by copy) a struct containing various summary statistics: |
|
1030 |
|
1031 arena: current total non-mmapped bytes allocated from system |
|
1032 ordblks: the number of free chunks |
|
1033 smblks: the number of fastbin blocks (i.e., small chunks that |
|
1034 have been freed but not yet reused or consolidated) |
|
1035 hblks: current number of mmapped regions |
|
1036 hblkhd: total bytes held in mmapped regions |
|
1037 usmblks: the maximum total allocated space. This will be greater |
|
1038 than current total if trimming has occurred. |
|
1039 fsmblks: total bytes held in fastbin blocks |
|
1040 uordblks: current total allocated space (normal or mmapped) |
|
1041 fordblks: total free space |
|
1042 keepcost: the maximum number of bytes that could ideally be released |
|
1043 back to system via malloc_trim. ("ideally" means that |
|
1044 it ignores page restrictions etc.) |
|
1045 |
|
1046 Because these fields are ints, but internal bookkeeping may |
|
1047 be kept as longs, the reported values may wrap around zero and |
|
1048 thus be inaccurate. |
|
1049 */ |
|
1050 #if __STD_C |
|
1051 struct mallinfo public_mALLINFo(void); |
|
1052 #else |
|
1053 struct mallinfo public_mALLINFo(); |
|
1054 #endif |
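
/*
  For example (an illustrative sketch printing a few of the fields
  described above):

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "total allocated space:    %d\n", mi.uordblks);
    fprintf(stderr, "total free space:         %d\n", mi.fordblks);
    fprintf(stderr, "bytes in mmapped regions: %d\n", mi.hblkhd);
*/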
|
1055 |
|
1056 /* |
|
1057 independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]); |
|
1058 |
|
1059 independent_calloc is similar to calloc, but instead of returning a |
|
1060 single cleared space, it returns an array of pointers to n_elements |
|
1061 independent elements that can hold contents of size elem_size, each |
|
1062 of which starts out cleared, and can be independently freed, |
|
1063 realloc'ed etc. The elements are guaranteed to be adjacently |
|
1064 allocated (this is not guaranteed to occur with multiple callocs or |
|
1065 mallocs), which may also improve cache locality in some |
|
1066 applications. |
|
1067 |
|
1068 The "chunks" argument is optional (i.e., may be null, which is |
|
1069 probably the most typical usage). If it is null, the returned array |
|
1070 is itself dynamically allocated and should also be freed when it is |
|
1071 no longer needed. Otherwise, the chunks array must be of at least |
|
1072 n_elements in length. It is filled in with the pointers to the |
|
1073 chunks. |
|
1074 |
|
1075 In either case, independent_calloc returns this pointer array, or |
|
1076 null if the allocation failed. If n_elements is zero and "chunks" |
|
1077 is null, it returns a chunk representing an array with zero elements |
|
1078 (which should be freed if not wanted). |
|
1079 |
|
1080 Each element must be individually freed when it is no longer |
|
1081 needed. If you'd like to instead be able to free all at once, you |
|
1082 should instead use regular calloc and assign pointers into this |
|
1083 space to represent elements. (In this case though, you cannot |
|
1084 independently free elements.) |
|
1085 |
|
1086 independent_calloc simplifies and speeds up implementations of many |
|
1087 kinds of pools. It may also be useful when constructing large data |
|
1088 structures that initially have a fixed number of fixed-sized nodes, |
|
1089 but the number is not known at compile time, and some of the nodes |
|
1090 may later need to be freed. For example: |
|
1091 |
|
1092 struct Node { int item; struct Node* next; }; |
|
1093 |
|
1094 struct Node* build_list() { |
|
1095 struct Node** pool; |
|
1096 int n = read_number_of_nodes_needed(); |
|
1097 if (n <= 0) return 0; |
|
1098 pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0); |
|
1099 if (pool == 0) die(); |
|
1100 // organize into a linked list... |
|
1101 struct Node* first = pool[0]; |
|
1102 for (i = 0; i < n-1; ++i) |
|
1103 pool[i]->next = pool[i+1]; |
|
1104 free(pool); // Can now free the array (or not, if it is needed later) |
|
1105 return first; |
|
1106 } |
|
1107 */ |
|
1108 #if __STD_C |
|
1109 Void_t** public_iCALLOc(size_t, size_t, Void_t**); |
|
1110 #else |
|
1111 Void_t** public_iCALLOc(); |
|
1112 #endif |
|
1113 |
|
1114 /* |
|
1115 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]); |
|
1116 |
|
1117 independent_comalloc allocates, all at once, a set of n_elements |
|
1118 chunks with sizes indicated in the "sizes" array. It returns |
|
1119 an array of pointers to these elements, each of which can be |
|
1120 independently freed, realloc'ed etc. The elements are guaranteed to |
|
1121 be adjacently allocated (this is not guaranteed to occur with |
|
1122 multiple callocs or mallocs), which may also improve cache locality |
|
1123 in some applications. |
|
1124 |
|
1125 The "chunks" argument is optional (i.e., may be null). If it is null |
|
1126 the returned array is itself dynamically allocated and should also |
|
1127 be freed when it is no longer needed. Otherwise, the chunks array |
|
1128 must be of at least n_elements in length. It is filled in with the |
|
1129 pointers to the chunks. |
|
1130 |
|
1131 In either case, independent_comalloc returns this pointer array, or |
|
1132 null if the allocation failed. If n_elements is zero and chunks is |
|
1133 null, it returns a chunk representing an array with zero elements |
|
1134 (which should be freed if not wanted). |
|
1135 |
|
1136 Each element must be individually freed when it is no longer |
|
1137 needed. If you'd like to instead be able to free all at once, you |
|
1138 should instead use a single regular malloc, and assign pointers at |
|
1139 particular offsets in the aggregate space. (In this case though, you |
|
1140 cannot independently free elements.) |
|
1141 |
|
1142 independent_comalloc differs from independent_calloc in that each |
|
1143 element may have a different size, and also that it does not |
|
1144 automatically clear elements. |
|
1145 |
|
1146 independent_comalloc can be used to speed up allocation in cases |
|
1147 where several structs or objects must always be allocated at the |
|
1148 same time. For example: |
|
1149 |
|
1150 struct Head { ... } |
|
1151 struct Foot { ... } |
|
1152 |
|
1153 void send_message(char* msg) { |
|
1154 int msglen = strlen(msg); |
|
1155 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) }; |
|
1156 void* chunks[3]; |
|
1157 if (independent_comalloc(3, sizes, chunks) == 0) |
|
1158 die(); |
|
1159 struct Head* head = (struct Head*)(chunks[0]); |
|
1160 char* body = (char*)(chunks[1]); |
|
1161 struct Foot* foot = (struct Foot*)(chunks[2]); |
|
1162 // ... |
|
1163 } |
|
1164 |
|
1165 In general though, independent_comalloc is worth using only for |
|
1166 larger values of n_elements. For small values, you probably won't |
|
1167 detect enough difference from a series of malloc calls to bother. |
|
1168 |
|
1169 Overuse of independent_comalloc can increase overall memory usage, |
|
1170 since it cannot reuse existing noncontiguous small chunks that |
|
1171 might be available for some of the elements. |
|
1172 */ |
|
1173 #if __STD_C |
|
1174 Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**); |
|
1175 #else |
|
1176 Void_t** public_iCOMALLOc(); |
|
1177 #endif |
|
1178 |
|
1179 |
|
1180 /* |
|
1181 pvalloc(size_t n); |
|
1182 Equivalent to valloc(minimum-page-that-holds(n)), that is, |
|
1183 round up n to nearest pagesize. |
|
1184 */ |
|
1185 #if __STD_C |
|
1186 Void_t* public_pVALLOc(size_t); |
|
1187 #else |
|
1188 Void_t* public_pVALLOc(); |
|
1189 #endif |
|
1190 |
|
1191 /* |
|
1192 cfree(Void_t* p); |
|
1193 Equivalent to free(p). |
|
1194 |
|
1195 cfree is needed/defined on some systems that pair it with calloc, |
|
1196 for odd historical reasons (such as: cfree is used in example |
|
1197 code in the first edition of K&R). |
|
1198 */ |
|
1199 #if __STD_C |
|
1200 void public_cFREe(Void_t*); |
|
1201 #else |
|
1202 void public_cFREe(); |
|
1203 #endif |
|
1204 |
|
1205 /* |
|
1206 malloc_trim(size_t pad); |
|
1207 |
|
1208 If possible, gives memory back to the system (via negative |
|
1209 arguments to sbrk) if there is unused memory at the `high' end of |
|
1210 the malloc pool. You can call this after freeing large blocks of |
|
1211 memory to potentially reduce the system-level memory requirements |
|
1212 of a program. However, it cannot guarantee to reduce memory. Under |
|
1213 some allocation patterns, some large free blocks of memory will be |
|
1214 locked between two used chunks, so they cannot be given back to |
|
1215 the system. |
|
1216 |
|
1217 The `pad' argument to malloc_trim represents the amount of free |
|
1218 trailing space to leave untrimmed. If this argument is zero, |
|
1219 only the minimum amount of memory to maintain internal data |
|
1220 structures will be left (one page or less). Non-zero arguments |
|
1221 can be supplied to maintain enough trailing space to service |
|
1222 future expected allocations without having to re-obtain memory |
|
1223 from the system. |
|
1224 |
|
1225 Malloc_trim returns 1 if it actually released any memory, else 0. |
|
1226 On systems that do not support "negative sbrks", it will always |
|
1227 return 0. |
|
1228 */ |
|
1229 #if __STD_C |
|
1230 int public_mTRIm(size_t); |
|
1231 #else |
|
1232 int public_mTRIm(); |
|
1233 #endif |
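
/*
  For example (an illustrative sketch; big_table is a hypothetical
  large block): after releasing a large structure, a long-lived
  program might try to shrink its footprint:

    free(big_table);
    if (malloc_trim(64 * 1024)) {
      // Some memory was returned to the system; 64K of trailing
      // space was kept to service future requests without new sbrks.
    }
*/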
|
1234 |
|
1235 /* |
|
1236 malloc_usable_size(Void_t* p); |
|
1237 |
|
1238 Returns the number of bytes you can actually use in |
|
1239 an allocated chunk, which may be more than you requested (although |
|
1240 often not) due to alignment and minimum size constraints. |
|
1241 You can use this many bytes without worrying about |
|
1242 overwriting other allocated objects. This is not a particularly great |
|
1243 programming practice. malloc_usable_size can be more useful in |
|
1244 debugging and assertions, for example: |
|
1245 |
|
1246 p = malloc(n); |
|
1247 assert(malloc_usable_size(p) >= 256); |
|
1248 |
|
1249 */ |
|
1250 #if __STD_C |
|
1251 size_t public_mUSABLe(Void_t*); |
|
1252 #else |
|
1253 size_t public_mUSABLe(); |
|
1254 #endif |
|
1255 |
|
1256 /* |
|
1257 malloc_stats(); |
|
1258 Prints on stderr the amount of space obtained from the system (both |
|
1259 via sbrk and mmap), the maximum amount (which may be more than |
|
1260 current if malloc_trim and/or munmap got called), and the current |
|
1261 number of bytes allocated via malloc (or realloc, etc) but not yet |
|
1262 freed. Note that this is the number of bytes allocated, not the |
|
1263 number requested. It will be larger than the number requested |
|
1264 because of alignment and bookkeeping overhead. Because it includes |
|
1265 alignment wastage as being in use, this figure may be greater than |
|
1266 zero even when no user-level chunks are allocated. |
|
1267 |
|
1268 The reported current and maximum system memory can be inaccurate if |
|
1269 a program makes other calls to system memory allocation functions |
|
1270 (normally sbrk) outside of malloc. |
|
1271 |
|
1272 malloc_stats prints only the most commonly interesting statistics. |
|
1273 More information can be obtained by calling mallinfo. |
|
1274 |
|
1275 */ |
|
1276 #if __STD_C |
|
1277 void public_mSTATs(void); |
|
1278 #else |
|
1279 void public_mSTATs(); |
|
1280 #endif |
|
1281 |
|
1282 /* mallopt tuning options */ |
|
1283 |
|
1284 /* |
|
1285 M_MXFAST is the maximum request size used for "fastbins", special bins |
|
1286 that hold returned chunks without consolidating their spaces. This |
|
1287 enables future requests for chunks of the same size to be handled |
|
1288 very quickly, but can increase fragmentation, and thus increase the |
|
1289 overall memory footprint of a program. |
|
1290 |
|
1291 This malloc manages fastbins very conservatively yet still |
|
1292 efficiently, so fragmentation is rarely a problem for values less |
|
1293 than or equal to the default. The maximum supported value of MXFAST |
|
1294 is 80. You wouldn't want it any higher than this anyway. Fastbins |
|
1295 are designed especially for use with many small structs, objects or |
|
1296 strings -- the default handles structs/objects/arrays with sizes up |
|
1297 to 16 4byte fields, or small strings representing words, tokens, |
|
1298 etc. Using fastbins for larger objects normally worsens |
|
1299 fragmentation without improving speed. |
|
1300 |
|
1301 M_MXFAST is set in REQUEST size units. It is internally used in |
|
1302 chunksize units, which adds padding and alignment. You can reduce |
|
1303 M_MXFAST to 0 to disable all use of fastbins. This causes the malloc |
|
1304 algorithm to be a closer approximation of fifo-best-fit in all cases, |
|
1305 not just for larger requests, but will generally cause it to be |
|
1306 slower. |
|
1307 */ |
|
1308 |
|
1309 |
|
1310 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */ |
|
1311 #ifndef M_MXFAST |
|
1312 #define M_MXFAST 1 |
|
1313 #endif |
|
1314 |
|
1315 #ifndef DEFAULT_MXFAST |
|
1316 #define DEFAULT_MXFAST 64 |
|
1317 #endif |
|
1318 |
|
1319 |
|
1320 /* |
|
1321 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory |
|
1322 to keep before releasing via malloc_trim in free(). |
|
1323 |
|
1324 Automatic trimming is mainly useful in long-lived programs. |
|
1325 Because trimming via sbrk can be slow on some systems, and can |
|
1326 sometimes be wasteful (in cases where programs immediately |
|
1327 afterward allocate more large chunks) the value should be high |
|
1328 enough so that your overall system performance would improve by |
|
1329 releasing this much memory. |
|
1330 |
|
1331 The trim threshold and the mmap control parameters (see below) |
|
1332 can be traded off with one another. Trimming and mmapping are |
|
1333 two different ways of releasing unused memory back to the |
|
1334 system. Between these two, it is often possible to keep |
|
1335 system-level demands of a long-lived program down to a bare |
|
1336 minimum. For example, in one test suite of sessions measuring |
|
1337 the XF86 X server on Linux, using a trim threshold of 128K and a |
|
1338 mmap threshold of 192K led to near-minimal long term resource |
|
1339 consumption. |
|
1340 |
|
1341 If you are using this malloc in a long-lived program, it should |
|
1342 pay to experiment with these values. As a rough guide, you |
|
1343 might set to a value close to the average size of a process |
|
1344 (program) running on your system. Releasing this much memory |
|
1345 would allow such a process to run in memory. Generally, it's |
|
1346 worth it to tune for trimming rather than memory mapping when a |
|
1347 program undergoes phases where several large chunks are |
|
1348 allocated and released in ways that can reuse each other's |
|
1349 storage, perhaps mixed with phases where there are no such |
|
1350 chunks at all. And in well-behaved long-lived programs, |
|
1351 controlling release of large blocks via trimming versus mapping |
|
1352 is usually faster. |
|
1353 |
|
1354 However, in most programs, these parameters serve mainly as |
|
1355 protection against the system-level effects of carrying around |
|
1356 massive amounts of unneeded memory. Since frequent calls to |
|
1357 sbrk, mmap, and munmap otherwise degrade performance, the default |
|
1358 parameters are set to relatively high values that serve only as |
|
1359 safeguards. |
|
1360 |
|
1361 The trim value must be greater than page size to have any useful |
|
1362 effect. To disable trimming completely, you can set it to |
|
1363 (unsigned long)(-1) |
|
1364 |
|
1365 Trim settings interact with fastbin (MXFAST) settings: Unless |
|
1366 TRIM_FASTBINS is defined, automatic trimming never takes place upon |
|
1367 freeing a chunk with size less than or equal to MXFAST. Trimming is |
|
1368 instead delayed until subsequent freeing of larger chunks. However, |
|
1369 you can still force an attempted trim by calling malloc_trim. |
|
1370 |
|
1371 Also, trimming is not generally possible in cases where |
|
1372 the main arena is obtained via mmap. |
|
1373 |
|
1374 Note that the trick some people use of mallocing a huge space and |
|
1375 then freeing it at program startup, in an attempt to reserve system |
|
1376 memory, doesn't have the intended effect under automatic trimming, |
|
1377 since that memory will immediately be returned to the system. |
|
1378 */ |
|
1379 |
|
1380 #define M_TRIM_THRESHOLD -1 |
|
1381 |
|
1382 #ifndef DEFAULT_TRIM_THRESHOLD |
|
1383 #define DEFAULT_TRIM_THRESHOLD (256 * 1024) |
|
1384 #endif |
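
/*
  For example (an illustrative sketch mirroring the X-server tuning
  mentioned above):

    mallopt(M_TRIM_THRESHOLD, 128 * 1024);  // release top-most memory beyond 128K
    mallopt(M_MMAP_THRESHOLD, 192 * 1024);  // service requests >= 192K via mmap
*/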
|
1385 |
|
1386 /* |
|
1387 M_TOP_PAD is the amount of extra `padding' space to allocate or |
|
1388 retain whenever sbrk is called. It is used in two ways internally: |
|
1389 |
|
1390 * When sbrk is called to extend the top of the arena to satisfy |
|
1391 a new malloc request, this much padding is added to the sbrk |
|
1392 request. |
|
1393 |
|
1394 * When malloc_trim is called automatically from free(), |
|
1395 it is used as the `pad' argument. |
|
1396 |
|
1397 In both cases, the actual amount of padding is rounded |
|
1398 so that the end of the arena is always a system page boundary. |
|
1399 |
|
1400 The main reason for using padding is to avoid calling sbrk so |
|
1401 often. Having even a small pad greatly reduces the likelihood |
|
1402 that nearly every malloc request during program start-up (or |
|
1403 after trimming) will invoke sbrk, which needlessly wastes |
|
1404 time. |
|
1405 |
|
1406 Automatic rounding-up to page-size units is normally sufficient |
|
1407 to avoid measurable overhead, so the default is 0. However, in |
|
1408 systems where sbrk is relatively slow, it can pay to increase |
|
1409 this value, at the expense of carrying around more memory than |
|
1410 the program needs. |
|
1411 */ |
|
1412 |
|
1413 #define M_TOP_PAD -2 |
|
1414 |
|
1415 #ifndef DEFAULT_TOP_PAD |
|
1416 #define DEFAULT_TOP_PAD (0) |
|
1417 #endif |
|
1418 |
|
1419 /* |
|
1420 M_MMAP_THRESHOLD is the request size threshold for using mmap() |
|
1421 to service a request. Requests of at least this size that cannot |
|
1422 be allocated using already-existing space will be serviced via mmap. |
|
1423 (If enough normal freed space already exists it is used instead.) |
|
1424 |
|
1425 Using mmap segregates relatively large chunks of memory so that |
|
1426 they can be individually obtained and released from the host |
|
1427 system. A request serviced through mmap is never reused by any |
|
1428 other request (at least not directly; the system may just so |
|
1429 happen to remap successive requests to the same locations). |
|
1430 |
|
1431 Segregating space in this way has the benefits that: |
|
1432 |
|
1433 1. Mmapped space can ALWAYS be individually released back |
|
1434 to the system, which helps keep the system level memory |
|
1435 demands of a long-lived program low. |
|
1436 2. Mapped memory can never become `locked' between |
|
1437 other chunks, as can happen with normally allocated chunks, which |
|
1438 means that even trimming via malloc_trim would not release them. |
|
1439 3. On some systems with "holes" in address spaces, mmap can obtain |
|
1440 memory that sbrk cannot. |
|
1441 |
|
1442 However, it has the disadvantages that: |
|
1443 |
|
1444 1. The space cannot be reclaimed, consolidated, and then |
|
1445 used to service later requests, as happens with normal chunks. |
|
1446 2. It can lead to more wastage because of mmap page alignment |
|
1447       requirements. |
|
1448 3. It causes malloc performance to be more dependent on host |
|
1449 system memory management support routines which may vary in |
|
1450 implementation quality and may impose arbitrary |
|
1451 limitations. Generally, servicing a request via normal |
|
1452 malloc steps is faster than going through a system's mmap. |
|
1453 |
|
1454 The advantages of mmap nearly always outweigh disadvantages for |
|
1455 "large" chunks, but the value of "large" varies across systems. The |
|
1456 default is an empirically derived value that works well in most |
|
1457 systems. |
|
1458 */ |
|
1459 |
|
1460 #define M_MMAP_THRESHOLD -3 |
|
1461 |
|
1462 #ifndef DEFAULT_MMAP_THRESHOLD |
|
1463 #define DEFAULT_MMAP_THRESHOLD (256 * 1024) |
|
1464 #endif |
|
1465 |
|
1466 /* |
|
1467 M_MMAP_MAX is the maximum number of requests to simultaneously |
|
1468 service using mmap. This parameter exists because |
|
1469      some systems have a limited number of internal tables for |
|
1470 use by mmap, and using more than a few of them may degrade |
|
1471 performance. |
|
1472 |
|
1473 The default is set to a value that serves only as a safeguard. |
|
1474 Setting to 0 disables use of mmap for servicing large requests. If |
|
1475 HAVE_MMAP is not set, the default value is 0, and attempts to set it |
|
1476 to non-zero values in mallopt will fail. |
|
1477 */ |
|
1478 |
|
1479 #define M_MMAP_MAX -4 |
|
1480 |
|
1481 #ifndef DEFAULT_MMAP_MAX |
|
1482 #if HAVE_MMAP |
|
1483 #define DEFAULT_MMAP_MAX (65536) |
|
1484 #else |
|
1485 #define DEFAULT_MMAP_MAX (0) |
|
1486 #endif |
|
1487 #endif |
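/*
  Illustrative sketch only (assuming the default public names): directing
  more or fewer large requests to mmap, and padding sbrk calls, via mallopt.
  The values shown are arbitrary examples, not recommendations.

    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);   only mmap requests of 1MB or more
    mallopt(M_MMAP_MAX, 0);                   or disable mmap use entirely
    mallopt(M_TOP_PAD, 64 * 1024);            pad each sbrk request by 64K
*/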
|
1488 |
|
1489 #ifdef __cplusplus |
|
1490 }; /* end of extern "C" */ |
|
1491 #endif |
|
1492 |
|
1493 /* |
|
1494 ======================================================================== |
|
1495 To make a fully customizable malloc.h header file, cut everything |
|
1496 above this line, put into file malloc.h, edit to suit, and #include it |
|
1497 on the next line, as well as in programs that use this malloc. |
|
1498 ======================================================================== |
|
1499 */ |
|
1500 |
|
1501 /* #include "malloc.h" */ |
|
1502 |
|
1503 /* --------------------- public wrappers ---------------------- */ |
|
1504 |
|
1505 #ifdef USE_PUBLIC_MALLOC_WRAPPERS |
|
1506 |
|
1507 /* Declare all routines as internal */ |
|
1508 #if __STD_C |
|
1509 static Void_t* mALLOc(size_t); |
|
1510 static void fREe(Void_t*); |
|
1511 static Void_t* rEALLOc(Void_t*, size_t); |
|
1512 static Void_t* mEMALIGn(size_t, size_t); |
|
1513 static Void_t* vALLOc(size_t); |
|
1514 static Void_t* pVALLOc(size_t); |
|
1515 static Void_t* cALLOc(size_t, size_t); |
|
1516 static Void_t** iCALLOc(size_t, size_t, Void_t**); |
|
1517 static Void_t** iCOMALLOc(size_t, size_t*, Void_t**); |
|
1518 static void cFREe(Void_t*); |
|
1519 static int mTRIm(size_t); |
|
1520 static size_t mUSABLe(Void_t*); |
|
1521 static void mSTATs(); |
|
1522 static int mALLOPt(int, int); |
|
1523 static struct mallinfo mALLINFo(void); |
|
1524 #else |
|
1525 static Void_t* mALLOc(); |
|
1526 static void fREe(); |
|
1527 static Void_t* rEALLOc(); |
|
1528 static Void_t* mEMALIGn(); |
|
1529 static Void_t* vALLOc(); |
|
1530 static Void_t* pVALLOc(); |
|
1531 static Void_t* cALLOc(); |
|
1532 static Void_t** iCALLOc(); |
|
1533 static Void_t** iCOMALLOc(); |
|
1534 static void cFREe(); |
|
1535 static int mTRIm(); |
|
1536 static size_t mUSABLe(); |
|
1537 static void mSTATs(); |
|
1538 static int mALLOPt(); |
|
1539 static struct mallinfo mALLINFo(); |
|
1540 #endif |
|
1541 |
|
1542 /* |
|
1543 MALLOC_PREACTION and MALLOC_POSTACTION should be |
|
1544 defined to return 0 on success, and nonzero on failure. |
|
1545 The return value of MALLOC_POSTACTION is currently ignored |
|
1546 in wrapper functions since there is no reasonable default |
|
1547 action to take on failure. |
|
1548 */ |
|
1549 |
|
1550 |
|
1551 #ifdef USE_MALLOC_LOCK |
|
1552 |
|
1553 #ifdef WIN32 |
|
1554 |
|
1555 static int mALLOC_MUTEx; |
|
1556 #define MALLOC_PREACTION slwait(&mALLOC_MUTEx) |
|
1557 #define MALLOC_POSTACTION slrelease(&mALLOC_MUTEx) |
|
1558 |
|
1559 #else |
|
1560 |
|
1561 #include <pthread.h> |
|
1562 |
|
1563 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER; |
|
1564 |
|
1565 #define MALLOC_PREACTION pthread_mutex_lock(&mALLOC_MUTEx) |
|
1566 #define MALLOC_POSTACTION pthread_mutex_unlock(&mALLOC_MUTEx) |
|
1567 |
|
1568 #endif /* WIN32 */ |
|
1569 |
|
1570 #else |
|
1571 |
|
1572 /* Substitute anything you like for these */ |
|
1573 |
|
1574 #define MALLOC_PREACTION (0) |
|
1575 #define MALLOC_POSTACTION (0) |
|
1576 |
|
1577 #endif |
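/*
  A minimal sketch of substituting your own pre/post actions, per the note
  above. The lock type and the my_lock_acquire/my_lock_release routines are
  hypothetical, not part of this file; whatever you supply, both macros must
  evaluate to 0 on success and nonzero on failure.

    static my_lock_t malloc_lock;
    #define MALLOC_PREACTION   (my_lock_acquire(&malloc_lock))
    #define MALLOC_POSTACTION  (my_lock_release(&malloc_lock))
*/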
|
1578 |
|
1579 Void_t* public_mALLOc(size_t bytes) { |
|
1580 Void_t* m; |
|
1581 if (MALLOC_PREACTION != 0) { |
|
1582 return 0; |
|
1583 } |
|
1584 m = mALLOc(bytes); |
|
1585 if (MALLOC_POSTACTION != 0) { |
|
1586 } |
|
1587 return m; |
|
1588 } |
|
1589 |
|
1590 void public_fREe(Void_t* m) { |
|
1591 if (MALLOC_PREACTION != 0) { |
|
1592 return; |
|
1593 } |
|
1594 fREe(m); |
|
1595 if (MALLOC_POSTACTION != 0) { |
|
1596 } |
|
1597 } |
|
1598 |
|
1599 Void_t* public_rEALLOc(Void_t* m, size_t bytes) { |
|
1600 if (MALLOC_PREACTION != 0) { |
|
1601 return 0; |
|
1602 } |
|
1603 m = rEALLOc(m, bytes); |
|
1604 if (MALLOC_POSTACTION != 0) { |
|
1605 } |
|
1606 return m; |
|
1607 } |
|
1608 |
|
1609 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) { |
|
1610 Void_t* m; |
|
1611 if (MALLOC_PREACTION != 0) { |
|
1612 return 0; |
|
1613 } |
|
1614 m = mEMALIGn(alignment, bytes); |
|
1615 if (MALLOC_POSTACTION != 0) { |
|
1616 } |
|
1617 return m; |
|
1618 } |
|
1619 |
|
1620 Void_t* public_vALLOc(size_t bytes) { |
|
1621 Void_t* m; |
|
1622 if (MALLOC_PREACTION != 0) { |
|
1623 return 0; |
|
1624 } |
|
1625 m = vALLOc(bytes); |
|
1626 if (MALLOC_POSTACTION != 0) { |
|
1627 } |
|
1628 return m; |
|
1629 } |
|
1630 |
|
1631 Void_t* public_pVALLOc(size_t bytes) { |
|
1632 Void_t* m; |
|
1633 if (MALLOC_PREACTION != 0) { |
|
1634 return 0; |
|
1635 } |
|
1636 m = pVALLOc(bytes); |
|
1637 if (MALLOC_POSTACTION != 0) { |
|
1638 } |
|
1639 return m; |
|
1640 } |
|
1641 |
|
1642 Void_t* public_cALLOc(size_t n, size_t elem_size) { |
|
1643 Void_t* m; |
|
1644 if (MALLOC_PREACTION != 0) { |
|
1645 return 0; |
|
1646 } |
|
1647 m = cALLOc(n, elem_size); |
|
1648 if (MALLOC_POSTACTION != 0) { |
|
1649 } |
|
1650 return m; |
|
1651 } |
|
1652 |
|
1653 |
|
1654 Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) { |
|
1655 Void_t** m; |
|
1656 if (MALLOC_PREACTION != 0) { |
|
1657 return 0; |
|
1658 } |
|
1659 m = iCALLOc(n, elem_size, chunks); |
|
1660 if (MALLOC_POSTACTION != 0) { |
|
1661 } |
|
1662 return m; |
|
1663 } |
|
1664 |
|
1665 Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) { |
|
1666 Void_t** m; |
|
1667 if (MALLOC_PREACTION != 0) { |
|
1668 return 0; |
|
1669 } |
|
1670 m = iCOMALLOc(n, sizes, chunks); |
|
1671 if (MALLOC_POSTACTION != 0) { |
|
1672 } |
|
1673 return m; |
|
1674 } |
|
1675 |
|
1676 void public_cFREe(Void_t* m) { |
|
1677 if (MALLOC_PREACTION != 0) { |
|
1678 return; |
|
1679 } |
|
1680 cFREe(m); |
|
1681 if (MALLOC_POSTACTION != 0) { |
|
1682 } |
|
1683 } |
|
1684 |
|
1685 int public_mTRIm(size_t s) { |
|
1686 int result; |
|
1687 if (MALLOC_PREACTION != 0) { |
|
1688 return 0; |
|
1689 } |
|
1690 result = mTRIm(s); |
|
1691 if (MALLOC_POSTACTION != 0) { |
|
1692 } |
|
1693 return result; |
|
1694 } |
|
1695 |
|
1696 size_t public_mUSABLe(Void_t* m) { |
|
1697 size_t result; |
|
1698 if (MALLOC_PREACTION != 0) { |
|
1699 return 0; |
|
1700 } |
|
1701 result = mUSABLe(m); |
|
1702 if (MALLOC_POSTACTION != 0) { |
|
1703 } |
|
1704 return result; |
|
1705 } |
|
1706 |
|
1707 void public_mSTATs() { |
|
1708 if (MALLOC_PREACTION != 0) { |
|
1709 return; |
|
1710 } |
|
1711 mSTATs(); |
|
1712 if (MALLOC_POSTACTION != 0) { |
|
1713 } |
|
1714 } |
|
1715 |
|
1716 struct mallinfo public_mALLINFo() { |
|
1717 struct mallinfo m; |
|
1718 if (MALLOC_PREACTION != 0) { |
|
1719 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; |
|
1720 return nm; |
|
1721 } |
|
1722 m = mALLINFo(); |
|
1723 if (MALLOC_POSTACTION != 0) { |
|
1724 } |
|
1725 return m; |
|
1726 } |
|
1727 |
|
1728 int public_mALLOPt(int p, int v) { |
|
1729 int result; |
|
1730 if (MALLOC_PREACTION != 0) { |
|
1731 return 0; |
|
1732 } |
|
1733 result = mALLOPt(p, v); |
|
1734 if (MALLOC_POSTACTION != 0) { |
|
1735 } |
|
1736 return result; |
|
1737 } |
|
1738 |
|
1739 #endif |
|
1740 |
|
1741 |
|
1742 |
|
1743 /* ------------- Optional versions of memcopy ---------------- */ |
|
1744 |
|
1745 |
|
1746 #if USE_MEMCPY |
|
1747 |
|
1748 /* |
|
1749 Note: memcpy is ONLY invoked with non-overlapping regions, |
|
1750 so the (usually slower) memmove is not needed. |
|
1751 */ |
|
1752 |
|
1753 #define MALLOC_COPY(dest, src, nbytes) memcpy(dest, src, nbytes) |
|
1754 #define MALLOC_ZERO(dest, nbytes) memset(dest, 0, nbytes) |
|
1755 |
|
1756 #else /* !USE_MEMCPY */ |
|
1757 |
|
1758 /* Use Duff's device for good zeroing/copying performance. */ |
|
1759 |
|
1760 #define MALLOC_ZERO(charp, nbytes) \ |
|
1761 do { \ |
|
1762 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \ |
|
1763 CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \ |
|
1764 long mcn; \ |
|
1765 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \ |
|
1766 switch (mctmp) { \ |
|
1767 case 0: for(;;) { *mzp++ = 0; \ |
|
1768 case 7: *mzp++ = 0; \ |
|
1769 case 6: *mzp++ = 0; \ |
|
1770 case 5: *mzp++ = 0; \ |
|
1771 case 4: *mzp++ = 0; \ |
|
1772 case 3: *mzp++ = 0; \ |
|
1773 case 2: *mzp++ = 0; \ |
|
1774 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \ |
|
1775 } \ |
|
1776 } while(0) |
|
1777 |
|
1778 #define MALLOC_COPY(dest,src,nbytes) \ |
|
1779 do { \ |
|
1780 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \ |
|
1781 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \ |
|
1782 CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \ |
|
1783 long mcn; \ |
|
1784 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \ |
|
1785 switch (mctmp) { \ |
|
1786 case 0: for(;;) { *mcdst++ = *mcsrc++; \ |
|
1787 case 7: *mcdst++ = *mcsrc++; \ |
|
1788 case 6: *mcdst++ = *mcsrc++; \ |
|
1789 case 5: *mcdst++ = *mcsrc++; \ |
|
1790 case 4: *mcdst++ = *mcsrc++; \ |
|
1791 case 3: *mcdst++ = *mcsrc++; \ |
|
1792 case 2: *mcdst++ = *mcsrc++; \ |
|
1793 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \ |
|
1794 } \ |
|
1795 } while(0) |
|
1796 |
|
1797 #endif |
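/*
  Usage sketch: these macros are intended for word-aligned regions whose
  length is a multiple of sizeof(INTERNAL_SIZE_T); the Duff's-device
  versions above silently ignore any trailing partial word, and MALLOC_COPY
  (like memcpy) assumes the regions do not overlap. For example:

    INTERNAL_SIZE_T a[16], b[16];
    MALLOC_ZERO(a, sizeof(a));
    MALLOC_COPY(b, a, sizeof(a));
*/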
|
1798 |
|
1799 /* ------------------ MMAP support ------------------ */ |
|
1800 |
|
1801 |
|
1802 #if HAVE_MMAP |
|
1803 |
|
1804 #ifndef LACKS_FCNTL_H |
|
1805 #include <fcntl.h> |
|
1806 #endif |
|
1807 |
|
1808 #ifndef LACKS_SYS_MMAN_H |
|
1809 #include <sys/mman.h> |
|
1810 #endif |
|
1811 |
|
1812 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) |
|
1813 #define MAP_ANONYMOUS MAP_ANON |
|
1814 #endif |
|
1815 |
|
1816 /* |
|
1817 Nearly all versions of mmap support MAP_ANONYMOUS, |
|
1818 so the following is unlikely to be needed, but is |
|
1819 supplied just in case. |
|
1820 */ |
|
1821 |
|
1822 #ifndef MAP_ANONYMOUS |
|
1823 |
|
1824 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ |
|
1825 |
|
1826 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \ |
|
1827 (dev_zero_fd = open("/dev/zero", O_RDWR), \ |
|
1828 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \ |
|
1829 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) |
|
1830 |
|
1831 #else |
|
1832 |
|
1833 #define MMAP(addr, size, prot, flags) \ |
|
1834 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0)) |
|
1835 |
|
1836 #endif |
|
1837 |
|
1838 |
|
1839 #endif /* HAVE_MMAP */ |
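/*
  For reference, the MMAP macro is invoked later in this file roughly as

    char* mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
    if (mm != (char*)(MORECORE_FAILURE)) { ... }

  i.e., callers pass only address, size, protection, and flags; the macro
  supplies MAP_ANONYMOUS (or the cached /dev/zero descriptor) together with
  the fd and offset arguments.
*/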
|
1840 |
|
1841 |
|
1842 /* |
|
1843 ----------------------- Chunk representations ----------------------- |
|
1844 */ |
|
1845 |
|
1846 |
|
1847 /* |
|
1848 This struct declaration is misleading (but accurate and necessary). |
|
1849 It declares a "view" into memory allowing access to necessary |
|
1850 fields at known offsets from a given base. See explanation below. |
|
1851 */ |
|
1852 |
|
1853 struct malloc_chunk { |
|
1854 |
|
1855 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */ |
|
1856 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */ |
|
1857 |
|
1858 struct malloc_chunk* fd; /* double links -- used only if free. */ |
|
1859 struct malloc_chunk* bk; |
|
1860 }; |
|
1861 |
|
1862 |
|
1863 typedef struct malloc_chunk* mchunkptr; |
|
1864 |
|
1865 /* |
|
1866 malloc_chunk details: |
|
1867 |
|
1868 (The following includes lightly edited explanations by Colin Plumb.) |
|
1869 |
|
1870 Chunks of memory are maintained using a `boundary tag' method as |
|
1871 described in e.g., Knuth or Standish. (See the paper by Paul |
|
1872 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a |
|
1873 survey of such techniques.) Sizes of free chunks are stored both |
|
1874 in the front of each chunk and at the end. This makes |
|
1875 consolidating fragmented chunks into bigger chunks very fast. The |
|
1876 size fields also hold bits representing whether chunks are free or |
|
1877 in use. |
|
1878 |
|
1879 An allocated chunk looks like this: |
|
1880 |
|
1881 |
|
1882 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1883 | Size of previous chunk, if allocated | | |
|
1884 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1885 | Size of chunk, in bytes |P| |
|
1886 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1887 | User data starts here... . |
|
1888 . . |
|
1889 . (malloc_usable_space() bytes) . |
|
1890 . | |
|
1891 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1892 | Size of chunk | |
|
1893 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1894 |
|
1895 |
|
1896 Where "chunk" is the front of the chunk for the purpose of most of |
|
1897 the malloc code, but "mem" is the pointer that is returned to the |
|
1898 user. "Nextchunk" is the beginning of the next contiguous chunk. |
|
1899 |
|
1900     Chunks always begin on even word boundaries, so the mem portion |
|
1901 (which is returned to the user) is also on an even word boundary, and |
|
1902 thus at least double-word aligned. |
|
1903 |
|
1904 Free chunks are stored in circular doubly-linked lists, and look like this: |
|
1905 |
|
1906 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1907 | Size of previous chunk | |
|
1908 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1909 `head:' | Size of chunk, in bytes |P| |
|
1910 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1911 | Forward pointer to next chunk in list | |
|
1912 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1913 | Back pointer to previous chunk in list | |
|
1914 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1915 | Unused space (may be 0 bytes long) . |
|
1916 . . |
|
1917 . | |
|
1918 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1919 `foot:' | Size of chunk, in bytes | |
|
1920 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
1921 |
|
1922 The P (PREV_INUSE) bit, stored in the unused low-order bit of the |
|
1923 chunk size (which is always a multiple of two words), is an in-use |
|
1924 bit for the *previous* chunk. If that bit is *clear*, then the |
|
1925 word before the current chunk size contains the previous chunk |
|
1926 size, and can be used to find the front of the previous chunk. |
|
1927 The very first chunk allocated always has this bit set, |
|
1928 preventing access to non-existent (or non-owned) memory. If |
|
1929 prev_inuse is set for any given chunk, then you CANNOT determine |
|
1930 the size of the previous chunk, and might even get a memory |
|
1931 addressing fault when trying to do so. |
|
1932 |
|
1933 Note that the `foot' of the current chunk is actually represented |
|
1934 as the prev_size of the NEXT chunk. This makes it easier to |
|
1935 deal with alignments etc but can be very confusing when trying |
|
1936 to extend or adapt this code. |
|
1937 |
|
1938 The two exceptions to all this are |
|
1939 |
|
1940 1. The special chunk `top' doesn't bother using the |
|
1941 trailing size field since there is no next contiguous chunk |
|
1942 that would have to index off it. After initialization, `top' |
|
1943 is forced to always exist. If it would become less than |
|
1944 MINSIZE bytes long, it is replenished. |
|
1945 |
|
1946 2. Chunks allocated via mmap, which have the second-lowest-order |
|
1947 bit (IS_MMAPPED) set in their size fields. Because they are |
|
1948 allocated one-by-one, each must contain its own trailing size field. |
|
1949 |
|
1950 */ |
|
1951 |
|
1952 /* |
|
1953 ---------- Size and alignment checks and conversions ---------- |
|
1954 */ |
|
1955 |
|
1956 /* conversion from malloc headers to user pointers, and back */ |
|
1957 |
|
1958 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ)) |
|
1959 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ)) |
|
1960 |
|
1961 /* The smallest possible chunk */ |
|
1962 #define MIN_CHUNK_SIZE (sizeof(struct malloc_chunk)) |
|
1963 |
|
1964 /* The smallest size we can malloc is an aligned minimal chunk */ |
|
1965 |
|
1966 #define MINSIZE \ |
|
1967 (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)) |
|
1968 |
|
1969 /* Check if m has acceptable alignment */ |
|
1970 |
|
1971 #define aligned_OK(m) (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0) |
|
1972 |
|
1973 |
|
1974 /* |
|
1975 Check if a request is so large that it would wrap around zero when |
|
1976 padded and aligned. To simplify some other code, the bound is made |
|
1977   low enough so that adding MINSIZE will also not wrap around zero. |
|
1978 */ |
|
1979 |
|
1980 #define REQUEST_OUT_OF_RANGE(req) \ |
|
1981 ((CHUNK_SIZE_T)(req) >= \ |
|
1982 (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE)) |
|
1983 |
|
1984 /* pad request bytes into a usable size -- internal version */ |
|
1985 |
|
1986 #define request2size(req) \ |
|
1987 (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \ |
|
1988 MINSIZE : \ |
|
1989 ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK) |
|
1990 |
|
1991 /* Same, except also perform argument check */ |
|
1992 |
|
1993 #define checked_request2size(req, sz) \ |
|
1994 if (REQUEST_OUT_OF_RANGE(req)) { \ |
|
1995 MALLOC_FAILURE_ACTION; \ |
|
1996 return 0; \ |
|
1997 } \ |
|
1998 (sz) = request2size(req); |
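/*
  Worked example (assuming 4-byte SIZE_SZ and 8-byte MALLOC_ALIGNMENT, so
  MALLOC_ALIGN_MASK == 7 and MINSIZE == 16): request2size(25) is
  (25 + 4 + 7) & ~7 == 32, while request2size(1) falls below MINSIZE and is
  padded up to 16.
*/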
|
1999 |
|
2000 /* |
|
2001 --------------- Physical chunk operations --------------- |
|
2002 */ |
|
2003 |
|
2004 |
|
2005 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */ |
|
2006 #define PREV_INUSE 0x1 |
|
2007 |
|
2008 /* extract inuse bit of previous chunk */ |
|
2009 #define prev_inuse(p) ((p)->size & PREV_INUSE) |
|
2010 |
|
2011 |
|
2012 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */ |
|
2013 #define IS_MMAPPED 0x2 |
|
2014 |
|
2015 /* check for mmap()'ed chunk */ |
|
2016 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED) |
|
2017 |
|
2018 /* |
|
2019 Bits to mask off when extracting size |
|
2020 |
|
2021 Note: IS_MMAPPED is intentionally not masked off from size field in |
|
2022 macros for which mmapped chunks should never be seen. This should |
|
2023 cause helpful core dumps to occur if it is tried by accident by |
|
2024 people extending or adapting this malloc. |
|
2025 */ |
|
2026 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED) |
|
2027 |
|
2028 /* Get size, ignoring use bits */ |
|
2029 #define chunksize(p) ((p)->size & ~(SIZE_BITS)) |
|
2030 |
|
2031 |
|
2032 /* Ptr to next physical malloc_chunk. */ |
|
2033 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) )) |
|
2034 |
|
2035 /* Ptr to previous physical malloc_chunk */ |
|
2036 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) )) |
|
2037 |
|
2038 /* Treat space at ptr + offset as a chunk */ |
|
2039 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s))) |
|
2040 |
|
2041 /* extract p's inuse bit */ |
|
2042 #define inuse(p)\ |
|
2043 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE) |
|
2044 |
|
2045 /* set/clear chunk as being inuse without otherwise disturbing */ |
|
2046 #define set_inuse(p)\ |
|
2047 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE |
|
2048 |
|
2049 #define clear_inuse(p)\ |
|
2050 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE) |
|
2051 |
|
2052 |
|
2053 /* check/set/clear inuse bits in known places */ |
|
2054 #define inuse_bit_at_offset(p, s)\ |
|
2055 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE) |
|
2056 |
|
2057 #define set_inuse_bit_at_offset(p, s)\ |
|
2058 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE) |
|
2059 |
|
2060 #define clear_inuse_bit_at_offset(p, s)\ |
|
2061 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE)) |
|
2062 |
|
2063 |
|
2064 /* Set size at head, without disturbing its use bit */ |
|
2065 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s))) |
|
2066 |
|
2067 /* Set size/use field */ |
|
2068 #define set_head(p, s) ((p)->size = (s)) |
|
2069 |
|
2070 /* Set size at footer (only when chunk is not in use) */ |
|
2071 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s)) |
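/*
  Illustrative sketch (not code used by this malloc): the macros above are
  enough to walk forward through contiguous chunks, e.g. starting from some
  chunk p in the main arena and stopping at the top chunk:

    while (p != av->top) {
      if (inuse(p)) {
        ... p is allocated; chunksize(p) bytes including overhead ...
      }
      p = next_chunk(p);
    }

  This is essentially what the debugging checks further below do.
*/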
|
2072 |
|
2073 |
|
2074 /* |
|
2075 -------------------- Internal data structures -------------------- |
|
2076 |
|
2077 All internal state is held in an instance of malloc_state defined |
|
2078 below. There are no other static variables, except in two optional |
|
2079 cases: |
|
2080 * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above. |
|
2081 * If HAVE_MMAP is true, but mmap doesn't support |
|
2082 MAP_ANONYMOUS, a dummy file descriptor for mmap. |
|
2083 |
|
2084 Beware of lots of tricks that minimize the total bookkeeping space |
|
2085   requirements. The result is a little over 1K bytes (for 4-byte |
|
2086 pointers and size_t.) |
|
2087 */ |
|
2088 |
|
2089 /* |
|
2090 Bins |
|
2091 |
|
2092 An array of bin headers for free chunks. Each bin is doubly |
|
2093 linked. The bins are approximately proportionally (log) spaced. |
|
2094 There are a lot of these bins (128). This may look excessive, but |
|
2095 works very well in practice. Most bins hold sizes that are |
|
2096 unusual as malloc request sizes, but are more usual for fragments |
|
2097 and consolidated sets of chunks, which is what these bins hold, so |
|
2098 they can be found quickly. All procedures maintain the invariant |
|
2099 that no consolidated chunk physically borders another one, so each |
|
2100     chunk in a list is known to be preceded and followed by either |
|
2101 inuse chunks or the ends of memory. |
|
2102 |
|
2103 Chunks in bins are kept in size order, with ties going to the |
|
2104 approximately least recently used chunk. Ordering isn't needed |
|
2105 for the small bins, which all contain the same-sized chunks, but |
|
2106 facilitates best-fit allocation for larger chunks. These lists |
|
2107 are just sequential. Keeping them in order almost never requires |
|
2108 enough traversal to warrant using fancier ordered data |
|
2109 structures. |
|
2110 |
|
2111 Chunks of the same size are linked with the most |
|
2112 recently freed at the front, and allocations are taken from the |
|
2113 back. This results in LRU (FIFO) allocation order, which tends |
|
2114 to give each chunk an equal opportunity to be consolidated with |
|
2115 adjacent freed chunks, resulting in larger free chunks and less |
|
2116 fragmentation. |
|
2117 |
|
2118 To simplify use in double-linked lists, each bin header acts |
|
2119 as a malloc_chunk. This avoids special-casing for headers. |
|
2120 But to conserve space and improve locality, we allocate |
|
2121 only the fd/bk pointers of bins, and then use repositioning tricks |
|
2122 to treat these as the fields of a malloc_chunk*. |
|
2123 */ |
|
2124 |
|
2125 typedef struct malloc_chunk* mbinptr; |
|
2126 |
|
2127 /* addressing -- note that bin_at(0) does not exist */ |
|
2128 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1))) |
|
2129 |
|
2130 /* analog of ++bin */ |
|
2131 #define next_bin(b) ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1))) |
|
2132 |
|
2133 /* Reminders about list directionality within bins */ |
|
2134 #define first(b) ((b)->fd) |
|
2135 #define last(b) ((b)->bk) |
|
2136 |
|
2137 /* Take a chunk off a bin list */ |
|
2138 #define unlink(P, BK, FD) { \ |
|
2139 FD = P->fd; \ |
|
2140 BK = P->bk; \ |
|
2141 FD->bk = BK; \ |
|
2142 BK->fd = FD; \ |
|
2143 } |
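/*
  Typical usage: the caller passes two scratch pointers that receive the
  neighbors of the removed chunk, e.g.

    mchunkptr bck, fwd;
    unlink(victim, bck, fwd);

  after which victim is off its bin list. Note that unlink does not clear
  victim's own fd/bk fields.
*/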
|
2144 |
|
2145 /* |
|
2146 Indexing |
|
2147 |
|
2148 Bins for sizes < 512 bytes contain chunks of all the same size, spaced |
|
2149 8 bytes apart. Larger bins are approximately logarithmically spaced: |
|
2150 |
|
2151 64 bins of size 8 |
|
2152 32 bins of size 64 |
|
2153 16 bins of size 512 |
|
2154 8 bins of size 4096 |
|
2155 4 bins of size 32768 |
|
2156 2 bins of size 262144 |
|
2157 1 bin of size what's left |
|
2158 |
|
2159 The bins top out around 1MB because we expect to service large |
|
2160 requests via mmap. |
|
2161 */ |
|
2162 |
|
2163 #define NBINS 96 |
|
2164 #define NSMALLBINS 32 |
|
2165 #define SMALLBIN_WIDTH 8 |
|
2166 #define MIN_LARGE_SIZE 256 |
|
2167 |
|
2168 #define in_smallbin_range(sz) \ |
|
2169 ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE) |
|
2170 |
|
2171 #define smallbin_index(sz) (((unsigned)(sz)) >> 3) |
|
2172 |
|
2173 /* |
|
2174 Compute index for size. We expect this to be inlined when |
|
2175 compiled with optimization, else not, which works out well. |
|
2176 */ |
|
2177 static int largebin_index(unsigned int sz) { |
|
2178 unsigned int x = sz >> SMALLBIN_WIDTH; |
|
2179   unsigned int m;            /* bit position of highest set bit of x */ |
|
2180 |
|
2181 if (x >= 0x10000) return NBINS-1; |
|
2182 |
|
2183 /* On intel, use BSRL instruction to find highest bit */ |
|
2184 #if defined(__GNUC__) && defined(i386) |
|
2185 |
|
2186 __asm__("bsrl %1,%0\n\t" |
|
2187 : "=r" (m) |
|
2188 : "g" (x)); |
|
2189 |
|
2190 #else |
|
2191 { |
|
2192 /* |
|
2193 Based on branch-free nlz algorithm in chapter 5 of Henry |
|
2194 S. Warren Jr's book "Hacker's Delight". |
|
2195 */ |
|
2196 |
|
2197 unsigned int n = ((x - 0x100) >> 16) & 8; |
|
2198 x <<= n; |
|
2199 m = ((x - 0x1000) >> 16) & 4; |
|
2200 n += m; |
|
2201 x <<= m; |
|
2202 m = ((x - 0x4000) >> 16) & 2; |
|
2203 n += m; |
|
2204 x = (x << m) >> 14; |
|
2205 m = 13 - n + (x & ~(x>>1)); |
|
2206 } |
|
2207 #endif |
|
2208 |
|
2209 /* Use next 2 bits to create finer-granularity bins */ |
|
2210 return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3); |
|
2211 } |
|
2212 |
|
2213 #define bin_index(sz) \ |
|
2214 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz)) |
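/*
  Worked examples: a 64-byte chunk is in smallbin range, so bin_index(64)
  == smallbin_index(64) == 64>>3 == 8. A 1024-byte chunk is "large": with
  x == 1024>>8 == 4, the highest set bit is at position m == 2, giving
  largebin_index(1024) == NSMALLBINS + (2<<2) + ((1024>>8) & 3) == 40.
*/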
|
2215 |
|
2216 /* |
|
2217 FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the |
|
2218 first bin that is maintained in sorted order. This must |
|
2219 be the smallest size corresponding to a given bin. |
|
2220 |
|
2221 Normally, this should be MIN_LARGE_SIZE. But you can weaken |
|
2222     best fit guarantees to sometimes speed up malloc by increasing this value. |
|
2223 Doing this means that malloc may choose a chunk that is |
|
2224 non-best-fitting by up to the width of the bin. |
|
2225 |
|
2226 Some useful cutoff values: |
|
2227 512 - all bins sorted |
|
2228 2560 - leaves bins <= 64 bytes wide unsorted |
|
2229 12288 - leaves bins <= 512 bytes wide unsorted |
|
2230 65536 - leaves bins <= 4096 bytes wide unsorted |
|
2231 262144 - leaves bins <= 32768 bytes wide unsorted |
|
2232 -1 - no bins sorted (not recommended!) |
|
2233 */ |
|
2234 |
|
2235 #define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE |
|
2236 /* #define FIRST_SORTED_BIN_SIZE 65536 */ |
|
2237 |
|
2238 /* |
|
2239 Unsorted chunks |
|
2240 |
|
2241 All remainders from chunk splits, as well as all returned chunks, |
|
2242 are first placed in the "unsorted" bin. They are then placed |
|
2243 in regular bins after malloc gives them ONE chance to be used before |
|
2244 binning. So, basically, the unsorted_chunks list acts as a queue, |
|
2245 with chunks being placed on it in free (and malloc_consolidate), |
|
2246 and taken off (to be either used or placed in bins) in malloc. |
|
2247 */ |
|
2248 |
|
2249 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */ |
|
2250 #define unsorted_chunks(M) (bin_at(M, 1)) |
|
2251 |
|
2252 /* |
|
2253 Top |
|
2254 |
|
2255 The top-most available chunk (i.e., the one bordering the end of |
|
2256 available memory) is treated specially. It is never included in |
|
2257 any bin, is used only if no other chunk is available, and is |
|
2258 released back to the system if it is very large (see |
|
2259 M_TRIM_THRESHOLD). Because top initially |
|
2260 points to its own bin with initial zero size, thus forcing |
|
2261 extension on the first malloc request, we avoid having any special |
|
2262 code in malloc to check whether it even exists yet. But we still |
|
2263 need to do so when getting memory from system, so we make |
|
2264 initial_top treat the bin as a legal but unusable chunk during the |
|
2265 interval between initialization and the first call to |
|
2266 sYSMALLOc. (This is somewhat delicate, since it relies on |
|
2267 the 2 preceding words to be zero during this interval as well.) |
|
2268 */ |
|
2269 |
|
2270 /* Conveniently, the unsorted bin can be used as dummy top on first call */ |
|
2271 #define initial_top(M) (unsorted_chunks(M)) |
|
2272 |
|
2273 /* |
|
2274 Binmap |
|
2275 |
|
2276 To help compensate for the large number of bins, a one-level index |
|
2277 structure is used for bin-by-bin searching. `binmap' is a |
|
2278 bitvector recording whether bins are definitely empty so they can |
|
2279     be skipped over during traversals. The bits are NOT always |
|
2280 cleared as soon as bins are empty, but instead only |
|
2281 when they are noticed to be empty during traversal in malloc. |
|
2282 */ |
|
2283 |
|
2284 /* Conservatively use 32 bits per map word, even if on 64bit system */ |
|
2285 #define BINMAPSHIFT 5 |
|
2286 #define BITSPERMAP (1U << BINMAPSHIFT) |
|
2287 #define BINMAPSIZE (NBINS / BITSPERMAP) |
|
2288 |
|
2289 #define idx2block(i) ((i) >> BINMAPSHIFT) |
|
2290 #define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT)-1)))) |
|
2291 |
|
2292 #define mark_bin(m,i) ((m)->binmap[idx2block(i)] |= idx2bit(i)) |
|
2293 #define unmark_bin(m,i) ((m)->binmap[idx2block(i)] &= ~(idx2bit(i))) |
|
2294 #define get_binmap(m,i) ((m)->binmap[idx2block(i)] & idx2bit(i)) |
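/*
  Example: bin 37 lives in map word idx2block(37) == 37>>5 == 1, under bit
  idx2bit(37) == 1 << (37 & 31) == 1<<5; mark_bin(m,37) sets that bit in
  binmap[1] and get_binmap(m,37) tests it.
*/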
|
2295 |
|
2296 /* |
|
2297 Fastbins |
|
2298 |
|
2299 An array of lists holding recently freed small chunks. Fastbins |
|
2300 are not doubly linked. It is faster to single-link them, and |
|
2301 since chunks are never removed from the middles of these lists, |
|
2302 double linking is not necessary. Also, unlike regular bins, they |
|
2303 are not even processed in FIFO order (they use faster LIFO) since |
|
2304 ordering doesn't much matter in the transient contexts in which |
|
2305 fastbins are normally used. |
|
2306 |
|
2307 Chunks in fastbins keep their inuse bit set, so they cannot |
|
2308 be consolidated with other free chunks. malloc_consolidate |
|
2309 releases all chunks in fastbins and consolidates them with |
|
2310 other free chunks. |
|
2311 */ |
|
2312 |
|
2313 typedef struct malloc_chunk* mfastbinptr; |
|
2314 |
|
2315 /* offset 2 to use otherwise unindexable first 2 bins */ |
|
2316 #define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2) |
|
2317 |
|
2318 /* The maximum fastbin request size we support */ |
|
2319 #define MAX_FAST_SIZE 80 |
|
2320 |
|
2321 #define NFASTBINS (fastbin_index(request2size(MAX_FAST_SIZE))+1) |
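/*
  Example (assuming 4-byte SIZE_SZ and 8-byte alignment): the smallest
  chunk size, 16, maps to fastbin_index(16) == (16>>3) - 2 == 0, while an
  80-byte request pads to an 88-byte chunk, so fastbin_index(88) == 9 and
  NFASTBINS == 10.
*/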
|
2322 |
|
2323 /* |
|
2324 FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free() |
|
2325 that triggers automatic consolidation of possibly-surrounding |
|
2326 fastbin chunks. This is a heuristic, so the exact value should not |
|
2327 matter too much. It is defined at half the default trim threshold as a |
|
2328 compromise heuristic to only attempt consolidation if it is likely |
|
2329 to lead to trimming. However, it is not dynamically tunable, since |
|
2330   consolidation reduces fragmentation surrounding large chunks even |
|
2331 if trimming is not used. |
|
2332 */ |
|
2333 |
|
2334 #define FASTBIN_CONSOLIDATION_THRESHOLD \ |
|
2335 ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1) |
|
2336 |
|
2337 /* |
|
2338 Since the lowest 2 bits in max_fast don't matter in size comparisons, |
|
2339 they are used as flags. |
|
2340 */ |
|
2341 |
|
2342 /* |
|
2343 ANYCHUNKS_BIT held in max_fast indicates that there may be any |
|
2344 freed chunks at all. It is set true when entering a chunk into any |
|
2345 bin. |
|
2346 */ |
|
2347 |
|
2348 #define ANYCHUNKS_BIT (1U) |
|
2349 |
|
2350 #define have_anychunks(M) (((M)->max_fast & ANYCHUNKS_BIT)) |
|
2351 #define set_anychunks(M) ((M)->max_fast |= ANYCHUNKS_BIT) |
|
2352 #define clear_anychunks(M) ((M)->max_fast &= ~ANYCHUNKS_BIT) |
|
2353 |
|
2354 /* |
|
2355 FASTCHUNKS_BIT held in max_fast indicates that there are probably |
|
2356 some fastbin chunks. It is set true on entering a chunk into any |
|
2357 fastbin, and cleared only in malloc_consolidate. |
|
2358 */ |
|
2359 |
|
2360 #define FASTCHUNKS_BIT (2U) |
|
2361 |
|
2362 #define have_fastchunks(M) (((M)->max_fast & FASTCHUNKS_BIT)) |
|
2363 #define set_fastchunks(M) ((M)->max_fast |= (FASTCHUNKS_BIT|ANYCHUNKS_BIT)) |
|
2364 #define clear_fastchunks(M) ((M)->max_fast &= ~(FASTCHUNKS_BIT)) |
|
2365 |
|
2366 /* |
|
2367 Set value of max_fast. |
|
2368 Use impossibly small value if 0. |
|
2369 */ |
|
2370 |
|
2371 #define set_max_fast(M, s) \ |
|
2372 (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \ |
|
2373 ((M)->max_fast & (FASTCHUNKS_BIT|ANYCHUNKS_BIT)) |
|
2374 |
|
2375 #define get_max_fast(M) \ |
|
2376 ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT)) |
|
2377 |
|
2378 |
|
2379 /* |
|
2380 morecore_properties is a status word holding dynamically discovered |
|
2381 or controlled properties of the morecore function |
|
2382 */ |
|
2383 |
|
2384 #define MORECORE_CONTIGUOUS_BIT (1U) |
|
2385 |
|
2386 #define contiguous(M) \ |
|
2387 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT)) |
|
2388 #define noncontiguous(M) \ |
|
2389 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT) == 0) |
|
2390 #define set_contiguous(M) \ |
|
2391 ((M)->morecore_properties |= MORECORE_CONTIGUOUS_BIT) |
|
2392 #define set_noncontiguous(M) \ |
|
2393 ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT) |
|
2394 |
|
2395 |
|
2396 /* |
|
2397 ----------- Internal state representation and initialization ----------- |
|
2398 */ |
|
2399 |
|
2400 struct malloc_state { |
|
2401 |
|
2402 /* The maximum chunk size to be eligible for fastbin */ |
|
2403 INTERNAL_SIZE_T max_fast; /* low 2 bits used as flags */ |
|
2404 |
|
2405 /* Fastbins */ |
|
2406 mfastbinptr fastbins[NFASTBINS]; |
|
2407 |
|
2408 /* Base of the topmost chunk -- not otherwise kept in a bin */ |
|
2409 mchunkptr top; |
|
2410 |
|
2411 /* The remainder from the most recent split of a small request */ |
|
2412 mchunkptr last_remainder; |
|
2413 |
|
2414 /* Normal bins packed as described above */ |
|
2415 mchunkptr bins[NBINS * 2]; |
|
2416 |
|
2417 /* Bitmap of bins. Trailing zero map handles cases of largest binned size */ |
|
2418 unsigned int binmap[BINMAPSIZE+1]; |
|
2419 |
|
2420 /* Tunable parameters */ |
|
2421 CHUNK_SIZE_T trim_threshold; |
|
2422 INTERNAL_SIZE_T top_pad; |
|
2423 INTERNAL_SIZE_T mmap_threshold; |
|
2424 |
|
2425 /* Memory map support */ |
|
2426 int n_mmaps; |
|
2427 int n_mmaps_max; |
|
2428 int max_n_mmaps; |
|
2429 |
|
2430 /* Cache malloc_getpagesize */ |
|
2431 unsigned int pagesize; |
|
2432 |
|
2433 /* Track properties of MORECORE */ |
|
2434 unsigned int morecore_properties; |
|
2435 |
|
2436 /* Statistics */ |
|
2437 INTERNAL_SIZE_T mmapped_mem; |
|
2438 INTERNAL_SIZE_T sbrked_mem; |
|
2439 INTERNAL_SIZE_T max_sbrked_mem; |
|
2440 INTERNAL_SIZE_T max_mmapped_mem; |
|
2441 INTERNAL_SIZE_T max_total_mem; |
|
2442 }; |
|
2443 |
|
2444 typedef struct malloc_state *mstate; |
|
2445 |
|
2446 /* |
|
2447 There is exactly one instance of this struct in this malloc. |
|
2448 If you are adapting this malloc in a way that does NOT use a static |
|
2449 malloc_state, you MUST explicitly zero-fill it before using. This |
|
2450 malloc relies on the property that malloc_state is initialized to |
|
2451 all zeroes (as is true of C statics). |
|
2452 */ |
|
2453 |
|
2454 #ifndef get_malloc_state |
|
2455 static struct malloc_state av_; /* never directly referenced */ |
|
2456 |
|
2457 /* |
|
2458 All uses of av_ are via get_malloc_state(). |
|
2459 At most one "call" to get_malloc_state is made per invocation of |
|
2460 the public versions of malloc and free, but other routines |
|
2461   that in turn invoke malloc and/or free may call it more than once. |
|
2462 Also, it is called in check* routines if DEBUG is set. |
|
2463 */ |
|
2464 |
|
2465 #define get_malloc_state() (&(av_)) |
|
2466 #endif |
|
2467 |
|
2468 /* |
|
2469 Initialize a malloc_state struct. |
|
2470 |
|
2471   This is called only from within malloc_consolidate, which needs to |
|
2472 be called in the same contexts anyway. It is never called directly |
|
2473 outside of malloc_consolidate because some optimizing compilers try |
|
2474 to inline it at all call points, which turns out not to be an |
|
2475 optimization at all. (Inlining it in malloc_consolidate is fine though.) |
|
2476 */ |
|
2477 |
|
2478 #if __STD_C |
|
2479 static void malloc_init_state(mstate av) |
|
2480 #else |
|
2481 static void malloc_init_state(av) mstate av; |
|
2482 #endif |
|
2483 { |
|
2484 int i; |
|
2485 mbinptr bin; |
|
2486 |
|
2487 /* Establish circular links for normal bins */ |
|
2488 for (i = 1; i < NBINS; ++i) { |
|
2489 bin = bin_at(av,i); |
|
2490 bin->fd = bin->bk = bin; |
|
2491 } |
|
2492 |
|
2493 av->top_pad = DEFAULT_TOP_PAD; |
|
2494 av->n_mmaps_max = DEFAULT_MMAP_MAX; |
|
2495 av->mmap_threshold = DEFAULT_MMAP_THRESHOLD; |
|
2496 av->trim_threshold = DEFAULT_TRIM_THRESHOLD; |
|
2497 |
|
2498 #if MORECORE_CONTIGUOUS |
|
2499 set_contiguous(av); |
|
2500 #else |
|
2501 set_noncontiguous(av); |
|
2502 #endif |
|
2503 |
|
2504 |
|
2505 set_max_fast(av, DEFAULT_MXFAST); |
|
2506 |
|
2507 av->top = initial_top(av); |
|
2508 av->pagesize = malloc_getpagesize; |
|
2509 } |
|
2510 |
|
2511 /* |
|
2512 Other internal utilities operating on mstates |
|
2513 */ |
|
2514 |
|
2515 #if __STD_C |
|
2516 static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate); |
|
2517 static int sYSTRIm(size_t, mstate); |
|
2518 static void malloc_consolidate(mstate); |
|
2519 static Void_t** iALLOc(size_t, size_t*, int, Void_t**); |
|
2520 #else |
|
2521 static Void_t* sYSMALLOc(); |
|
2522 static int sYSTRIm(); |
|
2523 static void malloc_consolidate(); |
|
2524 static Void_t** iALLOc(); |
|
2525 #endif |
|
2526 |
|
2527 /* |
|
2528 Debugging support |
|
2529 |
|
2530 These routines make a number of assertions about the states |
|
2531 of data structures that should be true at all times. If any |
|
2532 are not true, it's very likely that a user program has somehow |
|
2533 trashed memory. (It's also possible that there is a coding error |
|
2534 in malloc. In which case, please report it!) |
|
2535 */ |
|
2536 |
|
2537 #if ! DEBUG |
|
2538 |
|
2539 #define check_chunk(P) |
|
2540 #define check_free_chunk(P) |
|
2541 #define check_inuse_chunk(P) |
|
2542 #define check_remalloced_chunk(P,N) |
|
2543 #define check_malloced_chunk(P,N) |
|
2544 #define check_malloc_state() |
|
2545 |
|
2546 #else |
|
2547 #define check_chunk(P) do_check_chunk(P) |
|
2548 #define check_free_chunk(P) do_check_free_chunk(P) |
|
2549 #define check_inuse_chunk(P) do_check_inuse_chunk(P) |
|
2550 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N) |
|
2551 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N) |
|
2552 #define check_malloc_state() do_check_malloc_state() |
|
2553 |
|
2554 /* |
|
2555 Properties of all chunks |
|
2556 */ |
|
2557 |
|
2558 #if __STD_C |
|
2559 static void do_check_chunk(mchunkptr p) |
|
2560 #else |
|
2561 static void do_check_chunk(p) mchunkptr p; |
|
2562 #endif |
|
2563 { |
|
2564 mstate av = get_malloc_state(); |
|
2565 CHUNK_SIZE_T sz = chunksize(p); |
|
2566 /* min and max possible addresses assuming contiguous allocation */ |
|
2567 char* max_address = (char*)(av->top) + chunksize(av->top); |
|
2568 char* min_address = max_address - av->sbrked_mem; |
|
2569 |
|
2570 if (!chunk_is_mmapped(p)) { |
|
2571 |
|
2572 /* Has legal address ... */ |
|
2573 if (p != av->top) { |
|
2574 if (contiguous(av)) { |
|
2575 assert(((char*)p) >= min_address); |
|
2576 assert(((char*)p + sz) <= ((char*)(av->top))); |
|
2577 } |
|
2578 } |
|
2579 else { |
|
2580 /* top size is always at least MINSIZE */ |
|
2581 assert((CHUNK_SIZE_T)(sz) >= MINSIZE); |
|
2582 /* top predecessor always marked inuse */ |
|
2583 assert(prev_inuse(p)); |
|
2584 } |
|
2585 |
|
2586 } |
|
2587 else { |
|
2588 #if HAVE_MMAP |
|
2589 /* address is outside main heap */ |
|
2590 if (contiguous(av) && av->top != initial_top(av)) { |
|
2591 assert(((char*)p) < min_address || ((char*)p) > max_address); |
|
2592 } |
|
2593 /* chunk is page-aligned */ |
|
2594 assert(((p->prev_size + sz) & (av->pagesize-1)) == 0); |
|
2595 /* mem is aligned */ |
|
2596 assert(aligned_OK(chunk2mem(p))); |
|
2597 #else |
|
2598 /* force an appropriate assert violation if debug set */ |
|
2599 assert(!chunk_is_mmapped(p)); |
|
2600 #endif |
|
2601 } |
|
2602 } |
|
2603 |
|
2604 /* |
|
2605 Properties of free chunks |
|
2606 */ |
|
2607 |
|
2608 #if __STD_C |
|
2609 static void do_check_free_chunk(mchunkptr p) |
|
2610 #else |
|
2611 static void do_check_free_chunk(p) mchunkptr p; |
|
2612 #endif |
|
2613 { |
|
2614 mstate av = get_malloc_state(); |
|
2615 |
|
2616 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; |
|
2617 mchunkptr next = chunk_at_offset(p, sz); |
|
2618 |
|
2619 do_check_chunk(p); |
|
2620 |
|
2621 /* Chunk must claim to be free ... */ |
|
2622 assert(!inuse(p)); |
|
2623 assert (!chunk_is_mmapped(p)); |
|
2624 |
|
2625 /* Unless a special marker, must have OK fields */ |
|
2626 if ((CHUNK_SIZE_T)(sz) >= MINSIZE) |
|
2627 { |
|
2628 assert((sz & MALLOC_ALIGN_MASK) == 0); |
|
2629 assert(aligned_OK(chunk2mem(p))); |
|
2630 /* ... matching footer field */ |
|
2631 assert(next->prev_size == sz); |
|
2632 /* ... and is fully consolidated */ |
|
2633 assert(prev_inuse(p)); |
|
2634 assert (next == av->top || inuse(next)); |
|
2635 |
|
2636 /* ... and has minimally sane links */ |
|
2637 assert(p->fd->bk == p); |
|
2638 assert(p->bk->fd == p); |
|
2639 } |
|
2640 else /* markers are always of size SIZE_SZ */ |
|
2641 assert(sz == SIZE_SZ); |
|
2642 } |
|
2643 |
|
2644 /* |
|
2645 Properties of inuse chunks |
|
2646 */ |
|
2647 |
|
2648 #if __STD_C |
|
2649 static void do_check_inuse_chunk(mchunkptr p) |
|
2650 #else |
|
2651 static void do_check_inuse_chunk(p) mchunkptr p; |
|
2652 #endif |
|
2653 { |
|
2654 mstate av = get_malloc_state(); |
|
2655 mchunkptr next; |
|
2656 do_check_chunk(p); |
|
2657 |
|
2658 if (chunk_is_mmapped(p)) |
|
2659 return; /* mmapped chunks have no next/prev */ |
|
2660 |
|
2661 /* Check whether it claims to be in use ... */ |
|
2662 assert(inuse(p)); |
|
2663 |
|
2664 next = next_chunk(p); |
|
2665 |
|
2666 /* ... and is surrounded by OK chunks. |
|
2667 Since more things can be checked with free chunks than inuse ones, |
|
2668 if an inuse chunk borders them and debug is on, it's worth doing them. |
|
2669 */ |
|
2670 if (!prev_inuse(p)) { |
|
2671 /* Note that we cannot even look at prev unless it is not inuse */ |
|
2672 mchunkptr prv = prev_chunk(p); |
|
2673 assert(next_chunk(prv) == p); |
|
2674 do_check_free_chunk(prv); |
|
2675 } |
|
2676 |
|
2677 if (next == av->top) { |
|
2678 assert(prev_inuse(next)); |
|
2679 assert(chunksize(next) >= MINSIZE); |
|
2680 } |
|
2681 else if (!inuse(next)) |
|
2682 do_check_free_chunk(next); |
|
2683 } |
|
2684 |
|
2685 /* |
|
2686 Properties of chunks recycled from fastbins |
|
2687 */ |
|
2688 |
|
2689 #if __STD_C |
|
2690 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) |
|
2691 #else |
|
2692 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s; |
|
2693 #endif |
|
2694 { |
|
2695 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; |
|
2696 |
|
2697 do_check_inuse_chunk(p); |
|
2698 |
|
2699 /* Legal size ... */ |
|
2700 assert((sz & MALLOC_ALIGN_MASK) == 0); |
|
2701 assert((CHUNK_SIZE_T)(sz) >= MINSIZE); |
|
2702 /* ... and alignment */ |
|
2703 assert(aligned_OK(chunk2mem(p))); |
|
2704 /* chunk is less than MINSIZE more than request */ |
|
2705 assert((long)(sz) - (long)(s) >= 0); |
|
2706 assert((long)(sz) - (long)(s + MINSIZE) < 0); |
|
2707 } |
|
2708 |
|
2709 /* |
|
2710 Properties of nonrecycled chunks at the point they are malloced |
|
2711 */ |
|
2712 |
|
2713 #if __STD_C |
|
2714 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) |
|
2715 #else |
|
2716 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s; |
|
2717 #endif |
|
2718 { |
|
2719 /* same as recycled case ... */ |
|
2720 do_check_remalloced_chunk(p, s); |
|
2721 |
|
2722 /* |
|
2723 ... plus, must obey implementation invariant that prev_inuse is |
|
2724 always true of any allocated chunk; i.e., that each allocated |
|
2725 chunk borders either a previously allocated and still in-use |
|
2726 chunk, or the base of its memory arena. This is ensured |
|
2727     by making all allocations from the `lowest' part of any found |
|
2728 chunk. This does not necessarily hold however for chunks |
|
2729 recycled via fastbins. |
|
2730 */ |
|
2731 |
|
2732 assert(prev_inuse(p)); |
|
2733 } |
|
2734 |
|
2735 |
|
2736 /* |
|
2737 Properties of malloc_state. |
|
2738 |
|
2739 This may be useful for debugging malloc, as well as detecting user |
|
2740 programmer errors that somehow write into malloc_state. |
|
2741 |
|
2742 If you are extending or experimenting with this malloc, you can |
|
2743 probably figure out how to hack this routine to print out or |
|
2744 display chunk addresses, sizes, bins, and other instrumentation. |
|
2745 */ |
|
2746 |
|
2747 static void do_check_malloc_state() |
|
2748 { |
|
2749 mstate av = get_malloc_state(); |
|
2750 unsigned int i; |
|
2751 mchunkptr p; |
|
2752 mchunkptr q; |
|
2753 mbinptr b; |
|
2754 unsigned int binbit; |
|
2755 int empty; |
|
2756 unsigned int idx; |
|
2757 INTERNAL_SIZE_T size; |
|
2758 CHUNK_SIZE_T total = 0; |
|
2759 int max_fast_bin; |
|
2760 |
|
2761 /* internal size_t must be no wider than pointer type */ |
|
2762 assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*)); |
|
2763 |
|
2764 /* alignment is a power of 2 */ |
|
2765 assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0); |
|
2766 |
|
2767 /* cannot run remaining checks until fully initialized */ |
|
2768 if (av->top == 0 || av->top == initial_top(av)) |
|
2769 return; |
|
2770 |
|
2771 /* pagesize is a power of 2 */ |
|
2772 assert((av->pagesize & (av->pagesize-1)) == 0); |
|
2773 |
|
2774 /* properties of fastbins */ |
|
2775 |
|
2776 /* max_fast is in allowed range */ |
|
2777 assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE)); |
|
2778 |
|
2779 max_fast_bin = fastbin_index(av->max_fast); |
|
2780 |
|
2781 for (i = 0; i < NFASTBINS; ++i) { |
|
2782 p = av->fastbins[i]; |
|
2783 |
|
2784 /* all bins past max_fast are empty */ |
|
2785 if (i > max_fast_bin) |
|
2786 assert(p == 0); |
|
2787 |
|
2788 while (p != 0) { |
|
2789 /* each chunk claims to be inuse */ |
|
2790 do_check_inuse_chunk(p); |
|
2791 total += chunksize(p); |
|
2792 /* chunk belongs in this bin */ |
|
2793 assert(fastbin_index(chunksize(p)) == i); |
|
2794 p = p->fd; |
|
2795 } |
|
2796 } |
|
2797 |
|
2798 if (total != 0) |
|
2799 assert(have_fastchunks(av)); |
|
2800 else if (!have_fastchunks(av)) |
|
2801 assert(total == 0); |
|
2802 |
|
2803 /* check normal bins */ |
|
2804 for (i = 1; i < NBINS; ++i) { |
|
2805 b = bin_at(av,i); |
|
2806 |
|
2807 /* binmap is accurate (except for bin 1 == unsorted_chunks) */ |
|
2808 if (i >= 2) { |
|
2809 binbit = get_binmap(av,i); |
|
2810 empty = last(b) == b; |
|
2811 if (!binbit) |
|
2812 assert(empty); |
|
2813 else if (!empty) |
|
2814 assert(binbit); |
|
2815 } |
|
2816 |
|
2817 for (p = last(b); p != b; p = p->bk) { |
|
2818 /* each chunk claims to be free */ |
|
2819 do_check_free_chunk(p); |
|
2820 size = chunksize(p); |
|
2821 total += size; |
|
2822 if (i >= 2) { |
|
2823 /* chunk belongs in bin */ |
|
2824 idx = bin_index(size); |
|
2825 assert(idx == i); |
|
2826 /* lists are sorted */ |
|
2827 if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) { |
|
2828 assert(p->bk == b || |
|
2829 (CHUNK_SIZE_T)chunksize(p->bk) >= |
|
2830 (CHUNK_SIZE_T)chunksize(p)); |
|
2831 } |
|
2832 } |
|
2833 /* chunk is followed by a legal chain of inuse chunks */ |
|
2834 for (q = next_chunk(p); |
|
2835 (q != av->top && inuse(q) && |
|
2836 (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE); |
|
2837 q = next_chunk(q)) |
|
2838 do_check_inuse_chunk(q); |
|
2839 } |
|
2840 } |
|
2841 |
|
2842 /* top chunk is OK */ |
|
2843 check_chunk(av->top); |
|
2844 |
|
2845 /* sanity checks for statistics */ |
|
2846 |
|
2847 assert(total <= (CHUNK_SIZE_T)(av->max_total_mem)); |
|
2848 assert(av->n_mmaps >= 0); |
|
2849 assert(av->n_mmaps <= av->max_n_mmaps); |
|
2850 |
|
2851 assert((CHUNK_SIZE_T)(av->sbrked_mem) <= |
|
2852 (CHUNK_SIZE_T)(av->max_sbrked_mem)); |
|
2853 |
|
2854 assert((CHUNK_SIZE_T)(av->mmapped_mem) <= |
|
2855 (CHUNK_SIZE_T)(av->max_mmapped_mem)); |
|
2856 |
|
2857 assert((CHUNK_SIZE_T)(av->max_total_mem) >= |
|
2858 (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem)); |
|
2859 } |
|
2860 #endif |
|
2861 |
|
2862 |
|
2863 /* ----------- Routines dealing with system allocation -------------- */ |
|
2864 |
|
2865 /* |
|
2866 sysmalloc handles malloc cases requiring more memory from the system. |
|
2867 On entry, it is assumed that av->top does not have enough |
|
2868 space to service request for nb bytes, thus requiring that av->top |
|
2869 be extended or replaced. |
|
2870 */ |
|
2871 |
|
2872 #if __STD_C |
|
2873 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av) |
|
2874 #else |
|
2875 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av; |
|
2876 #endif |
|
2877 { |
|
2878 mchunkptr old_top; /* incoming value of av->top */ |
|
2879 INTERNAL_SIZE_T old_size; /* its size */ |
|
2880 char* old_end; /* its end address */ |
|
2881 |
|
2882 long size; /* arg to first MORECORE or mmap call */ |
|
2883 char* brk; /* return value from MORECORE */ |
|
2884 |
|
2885 long correction; /* arg to 2nd MORECORE call */ |
|
2886 char* snd_brk; /* 2nd return val */ |
|
2887 |
|
2888 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */ |
|
2889 INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */ |
|
2890 char* aligned_brk; /* aligned offset into brk */ |
|
2891 |
|
2892 mchunkptr p; /* the allocated/returned chunk */ |
|
2893 mchunkptr remainder; /* remainder from allocation */ |
|
2894 CHUNK_SIZE_T remainder_size; /* its size */ |
|
2895 |
|
2896 CHUNK_SIZE_T sum; /* for updating stats */ |
|
2897 |
|
2898 size_t pagemask = av->pagesize - 1; |
|
2899 |
|
2900 /* |
|
2901 If there is space available in fastbins, consolidate and retry |
|
2902 malloc from scratch rather than getting memory from system. This |
|
2903 can occur only if nb is in smallbin range so we didn't consolidate |
|
2904 upon entry to malloc. It is much easier to handle this case here |
|
2905 than in malloc proper. |
|
2906 */ |
|
2907 |
|
2908 if (have_fastchunks(av)) { |
|
2909 assert(in_smallbin_range(nb)); |
|
2910 malloc_consolidate(av); |
|
2911 return mALLOc(nb - MALLOC_ALIGN_MASK); |
|
2912 } |
|
2913 |
|
2914 |
|
2915 #if HAVE_MMAP |
|
2916 |
|
2917 /* |
|
2918 If have mmap, and the request size meets the mmap threshold, and |
|
2919 the system supports mmap, and there are few enough currently |
|
2920 allocated mmapped regions, try to directly map this request |
|
2921 rather than expanding top. |
|
2922 */ |
|
2923 |
|
2924 if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) && |
|
2925 (av->n_mmaps < av->n_mmaps_max)) { |
|
2926 |
|
2927 char* mm; /* return value from mmap call*/ |
|
2928 |
|
2929 /* |
|
2930 Round up size to nearest page. For mmapped chunks, the overhead |
|
2931 is one SIZE_SZ unit larger than for normal chunks, because there |
|
2932 is no following chunk whose prev_size field could be used. |
|
2933 */ |
|
2934 size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask; |
|
2935 |
|
2936 /* Don't try if size wraps around 0 */ |
|
2937 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) { |
|
2938 |
|
2939 mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE)); |
|
2940 |
|
2941 if (mm != (char*)(MORECORE_FAILURE)) { |
|
2942 |
|
2943 /* |
|
2944 The offset to the start of the mmapped region is stored |
|
2945 in the prev_size field of the chunk. This allows us to adjust |
|
2946 returned start address to meet alignment requirements here |
|
2947 and in memalign(), and still be able to compute proper |
|
2948 address argument for later munmap in free() and realloc(). |
|
2949 */ |
|
2950 |
|
2951 front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK; |
|
2952 if (front_misalign > 0) { |
|
2953 correction = MALLOC_ALIGNMENT - front_misalign; |
|
2954 p = (mchunkptr)(mm + correction); |
|
2955 p->prev_size = correction; |
|
2956 set_head(p, (size - correction) |IS_MMAPPED); |
|
2957 } |
|
2958 else { |
|
2959 p = (mchunkptr)mm; |
|
2960 p->prev_size = 0; |
|
2961 set_head(p, size|IS_MMAPPED); |
|
2962 } |
|
2963 |
|
2964 /* update statistics */ |
|
2965 |
|
2966 if (++av->n_mmaps > av->max_n_mmaps) |
|
2967 av->max_n_mmaps = av->n_mmaps; |
|
2968 |
|
2969 sum = av->mmapped_mem += size; |
|
2970 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem)) |
|
2971 av->max_mmapped_mem = sum; |
|
2972 sum += av->sbrked_mem; |
|
2973 if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) |
|
2974 av->max_total_mem = sum; |
|
2975 |
|
2976 check_chunk(p); |
|
2977 |
|
2978 return chunk2mem(p); |
|
2979 } |
|
2980 } |
|
2981 } |
|
2982 #endif |
|
2983 |
|
2984 /* Record incoming configuration of top */ |
|
2985 |
|
2986 old_top = av->top; |
|
2987 old_size = chunksize(old_top); |
|
2988 old_end = (char*)(chunk_at_offset(old_top, old_size)); |
|
2989 |
|
2990 brk = snd_brk = (char*)(MORECORE_FAILURE); |
|
2991 |
|
2992 /* |
|
2993 If not the first time through, we require old_size to be |
|
2994 at least MINSIZE and to have prev_inuse set. |
|
2995 */ |
|
2996 |
|
2997 assert((old_top == initial_top(av) && old_size == 0) || |
|
2998 ((CHUNK_SIZE_T) (old_size) >= MINSIZE && |
|
2999 prev_inuse(old_top))); |
|
3000 |
|
3001 /* Precondition: not enough current space to satisfy nb request */ |
|
3002 assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE)); |
|
3003 |
|
3004 /* Precondition: all fastbins are consolidated */ |
|
3005 assert(!have_fastchunks(av)); |
|
3006 |
|
3007 |
|
3008 /* Request enough space for nb + pad + overhead */ |
|
3009 |
|
3010 size = nb + av->top_pad + MINSIZE; |
|
3011 |
|
3012 /* |
|
3013 If contiguous, we can subtract out existing space that we hope to |
|
3014 combine with new space. We add it back later only if |
|
3015 we don't actually get contiguous space. |
|
3016 */ |
|
3017 |
|
3018 if (contiguous(av)) |
|
3019 size -= old_size; |
|
3020 |
|
3021 /* |
|
3022 Round to a multiple of page size. |
|
3023 If MORECORE is not contiguous, this ensures that we only call it |
|
3024 with whole-page arguments. And if MORECORE is contiguous and |
|
3025 this is not first time through, this preserves page-alignment of |
|
3026 previous calls. Otherwise, we correct to page-align below. |
|
3027 */ |
|
3028 |
|
3029 size = (size + pagemask) & ~pagemask; |
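  /*
    Worked example (illustrative; assumes 4K pages, MINSIZE == 16 and
    the default top_pad of 0): a request with nb == 5000 against a
    contiguous old top of 2000 bytes asks for 5000 + 16 - 2000 == 3016
    bytes, which the masking above rounds up to a single 4096-byte page
    from MORECORE.
  */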
|
3030 |
|
3031 /* |
|
3032 Don't try to call MORECORE if argument is so big as to appear |
|
3033 negative. Note that since mmap takes size_t arg, it may succeed |
|
3034 below even if we cannot call MORECORE. |
|
3035 */ |
|
3036 |
|
3037 if (size > 0) |
|
3038 brk = (char*)(MORECORE(size)); |
|
3039 |
|
3040 /* |
|
3041 If mmap is available, try using it as a backup when MORECORE fails or |
|
3042 cannot be used. This is worth doing on systems that have "holes" in |
|
3043 address space, so sbrk cannot extend to give contiguous space, but |
|
3044 space is available elsewhere. Note that we ignore mmap max count |
|
3045 and threshold limits, since the space will not be used as a |
|
3046 segregated mmap region. |
|
3047 */ |
|
3048 |
|
3049 #if HAVE_MMAP |
|
3050 if (brk == (char*)(MORECORE_FAILURE)) { |
|
3051 |
|
3052 /* Cannot merge with old top, so add its size back in */ |
|
3053 if (contiguous(av)) |
|
3054 size = (size + old_size + pagemask) & ~pagemask; |
|
3055 |
|
3056 /* If we are relying on mmap as backup, then use larger units */ |
|
3057 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE)) |
|
3058 size = MMAP_AS_MORECORE_SIZE; |
|
3059 |
|
3060 /* Don't try if size wraps around 0 */ |
|
3061 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) { |
|
3062 |
|
3063 brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE)); |
|
3064 |
|
3065 if (brk != (char*)(MORECORE_FAILURE)) { |
|
3066 |
|
3067 /* We do not need, and cannot use, another sbrk call to find end */ |
|
3068 snd_brk = brk + size; |
|
3069 |
|
3070 /* |
|
3071 Record that we no longer have a contiguous sbrk region. |
|
3072 After the first time mmap is used as backup, we do not |
|
3073 ever rely on contiguous space since this could incorrectly |
|
3074 bridge regions. |
|
3075 */ |
|
3076 set_noncontiguous(av); |
|
3077 } |
|
3078 } |
|
3079 } |
|
3080 #endif |
|
3081 |
|
3082 if (brk != (char*)(MORECORE_FAILURE)) { |
|
3083 av->sbrked_mem += size; |
|
3084 |
|
3085 /* |
|
3086 If MORECORE extends previous space, we can likewise extend top size. |
|
3087 */ |
|
3088 |
|
3089 if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) { |
|
3090 set_head(old_top, (size + old_size) | PREV_INUSE); |
|
3091 } |
|
3092 |
|
3093 /* |
|
3094 Otherwise, make adjustments: |
|
3095 |
|
3096 * If the first time through or noncontiguous, we need to call sbrk |
|
3097 just to find out where the end of memory lies. |
|
3098 |
|
3099 * We need to ensure that all returned chunks from malloc will meet |
|
3100 MALLOC_ALIGNMENT |
|
3101 |
|
3102 * If there was an intervening foreign sbrk, we need to adjust sbrk |
|
3103 request size to account for the fact that we will not be able to |
|
3104 combine new space with existing space in old_top. |
|
3105 |
|
3106 * Almost all systems internally allocate whole pages at a time, in |
|
3107 which case we might as well use the whole last page of request. |
|
3108 So we allocate enough more memory to hit a page boundary now, |
|
3109 which in turn causes future contiguous calls to page-align. |
|
3110 */ |
|
3111 |
|
3112 else { |
|
3113 front_misalign = 0; |
|
3114 end_misalign = 0; |
|
3115 correction = 0; |
|
3116 aligned_brk = brk; |
|
3117 |
|
3118 /* |
|
3119 If MORECORE returns an address lower than we have seen before, |
|
3120 we know it isn't really contiguous. This and some subsequent |
|
3121 checks help cope with non-conforming MORECORE functions and |
|
3122 the presence of "foreign" calls to MORECORE from outside of |
|
3123 malloc or by other threads. We cannot guarantee to detect |
|
3124 these in all cases, but cope with the ones we do detect. |
|
3125 */ |
|
3126 if (contiguous(av) && old_size != 0 && brk < old_end) { |
|
3127 set_noncontiguous(av); |
|
3128 } |
|
3129 |
|
3130 /* handle contiguous cases */ |
|
3131 if (contiguous(av)) { |
|
3132 |
|
3133 /* |
|
3134 We can tolerate forward non-contiguities here (usually due |
|
3135 to foreign calls) but treat them as part of our space for |
|
3136 stats reporting. |
|
3137 */ |
|
3138 if (old_size != 0) |
|
3139 av->sbrked_mem += brk - old_end; |
|
3140 |
|
3141 /* Guarantee alignment of first new chunk made from this space */ |
|
3142 |
|
3143 front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK; |
|
3144 if (front_misalign > 0) { |
|
3145 |
|
3146 /* |
|
3147 Skip over some bytes to arrive at an aligned position. |
|
3148 We don't need to specially mark these wasted front bytes. |
|
3149 They will never be accessed anyway because |
|
3150 prev_inuse of av->top (and any chunk created from its start) |
|
3151 is always true after initialization. |
|
3152 */ |
|
3153 |
|
3154 correction = MALLOC_ALIGNMENT - front_misalign; |
|
3155 aligned_brk += correction; |
|
3156 } |
|
3157 |
|
3158 /* |
|
3159 If this isn't adjacent to existing space, then we will not |
|
3160 be able to merge with old_top space, so must add to 2nd request. |
|
3161 */ |
|
3162 |
|
3163 correction += old_size; |
|
3164 |
|
3165 /* Extend the end address to hit a page boundary */ |
|
3166 end_misalign = (INTERNAL_SIZE_T)(brk + size + correction); |
|
3167 correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign; |
|
3168 |
|
3169 assert(correction >= 0); |
|
3170 snd_brk = (char*)(MORECORE(correction)); |
|
3171 |
|
3172 if (snd_brk == (char*)(MORECORE_FAILURE)) { |
|
3173 /* |
|
3174 If can't allocate correction, try to at least find out current |
|
3175 brk. It might be enough to proceed without failing. |
|
3176 */ |
|
3177 correction = 0; |
|
3178 snd_brk = (char*)(MORECORE(0)); |
|
3179 } |
|
3180 else if (snd_brk < brk) { |
|
3181 /* |
|
3182 If the second call gives noncontiguous space even though |
|
3183 it says it won't, the only course of action is to ignore |
|
3184 results of second call, and conservatively estimate where |
|
3185 the first call left us. Also set noncontiguous, so this |
|
3186 won't happen again, leaving at most one hole. |
|
3187 |
|
3188 Note that this check is intrinsically incomplete. Because |
|
3189 MORECORE is allowed to give more space than we ask for, |
|
3190 there is no reliable way to detect a noncontiguity |
|
3191 producing a forward gap for the second call. |
|
3192 */ |
|
3193 snd_brk = brk + size; |
|
3194 correction = 0; |
|
3195 set_noncontiguous(av); |
|
3196 } |
|
3197 |
|
3198 } |
|
3199 |
|
3200 /* handle non-contiguous cases */ |
|
3201 else { |
|
3202 /* MORECORE/mmap must correctly align */ |
|
3203 assert(aligned_OK(chunk2mem(brk))); |
|
3204 |
|
3205 /* Find out current end of memory */ |
|
3206 if (snd_brk == (char*)(MORECORE_FAILURE)) { |
|
3207 snd_brk = (char*)(MORECORE(0)); |
|
3208 av->sbrked_mem += snd_brk - brk - size; |
|
3209 } |
|
3210 } |
|
3211 |
|
3212 /* Adjust top based on results of second sbrk */ |
|
3213 if (snd_brk != (char*)(MORECORE_FAILURE)) { |
|
3214 av->top = (mchunkptr)aligned_brk; |
|
3215 set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE); |
|
3216 av->sbrked_mem += correction; |
|
3217 |
|
3218 /* |
|
3219 If not the first time through, we either have a |
|
3220 gap due to foreign sbrk or a non-contiguous region. Insert a |
|
3221 double fencepost at old_top to prevent consolidation with space |
|
3222 we don't own. These fenceposts are artificial chunks that are |
|
3223 marked as inuse and are in any case too small to use. We need |
|
3224 two to make sizes and alignments work out. |
|
3225 */ |
|
3226 |
|
3227 if (old_size != 0) { |
|
3228 /* |
|
3229 Shrink old_top to insert fenceposts, keeping size a |
|
3230 multiple of MALLOC_ALIGNMENT. We know there is at least |
|
3231 enough space in old_top to do this. |
|
3232 */ |
|
3233 old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK; |
|
3234 set_head(old_top, old_size | PREV_INUSE); |
|
3235 |
|
3236 /* |
|
3237 Note that the following assignments completely overwrite |
|
3238 old_top when old_size was previously MINSIZE. This is |
|
3239 intentional. We need the fencepost, even if old_top otherwise gets |
|
3240 lost. |
|
3241 */ |
|
3242 chunk_at_offset(old_top, old_size )->size = |
|
3243 SIZE_SZ|PREV_INUSE; |
|
3244 |
|
3245 chunk_at_offset(old_top, old_size + SIZE_SZ)->size = |
|
3246 SIZE_SZ|PREV_INUSE; |
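  /*
    Resulting layout: old_top now spans old_size bytes, immediately
    followed by two artificial SIZE_SZ-sized fencepost headers, each
    carrying PREV_INUSE.  Later consolidation therefore treats the
    space at and beyond them as in use and never merges across old_end
    into memory this malloc does not own.
  */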
|
3247 |
|
3248 /* |
|
3249 If possible, release the rest, suppressing trimming. |
|
3250 */ |
|
3251 if (old_size >= MINSIZE) { |
|
3252 INTERNAL_SIZE_T tt = av->trim_threshold; |
|
3253 av->trim_threshold = (INTERNAL_SIZE_T)(-1); |
|
3254 fREe(chunk2mem(old_top)); |
|
3255 av->trim_threshold = tt; |
|
3256 } |
|
3257 } |
|
3258 } |
|
3259 } |
|
3260 |
|
3261 /* Update statistics */ |
|
3262 sum = av->sbrked_mem; |
|
3263 if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem)) |
|
3264 av->max_sbrked_mem = sum; |
|
3265 |
|
3266 sum += av->mmapped_mem; |
|
3267 if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) |
|
3268 av->max_total_mem = sum; |
|
3269 |
|
3270 check_malloc_state(); |
|
3271 |
|
3272 /* finally, do the allocation */ |
|
3273 |
|
3274 p = av->top; |
|
3275 size = chunksize(p); |
|
3276 |
|
3277 /* check that one of the above allocation paths succeeded */ |
|
3278 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) { |
|
3279 remainder_size = size - nb; |
|
3280 remainder = chunk_at_offset(p, nb); |
|
3281 av->top = remainder; |
|
3282 set_head(p, nb | PREV_INUSE); |
|
3283 set_head(remainder, remainder_size | PREV_INUSE); |
|
3284 check_malloced_chunk(p, nb); |
|
3285 return chunk2mem(p); |
|
3286 } |
|
3287 |
|
3288 } |
|
3289 |
|
3290 /* catch all failure paths */ |
|
3291 MALLOC_FAILURE_ACTION; |
|
3292 return 0; |
|
3293 } |
|
3294 |
|
3295 |
|
3296 |
|
3297 |
|
3298 /* |
|
3299 sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back |
|
3300 to the system (via negative arguments to sbrk) if there is unused |
|
3301 memory at the `high' end of the malloc pool. It is called |
|
3302 automatically by free() when top space exceeds the trim |
|
3303 threshold. It is also called by the public malloc_trim routine. It |
|
3304 returns 1 if it actually released any memory, else 0. |
|
3305 */ |
|
3306 |
|
3307 #if __STD_C |
|
3308 static int sYSTRIm(size_t pad, mstate av) |
|
3309 #else |
|
3310 static int sYSTRIm(pad, av) size_t pad; mstate av; |
|
3311 #endif |
|
3312 { |
|
3313 long top_size; /* Amount of top-most memory */ |
|
3314 long extra; /* Amount to release */ |
|
3315 long released; /* Amount actually released */ |
|
3316 char* current_brk; /* address returned by pre-check sbrk call */ |
|
3317 char* new_brk; /* address returned by post-check sbrk call */ |
|
3318 size_t pagesz; |
|
3319 |
|
3320 pagesz = av->pagesize; |
|
3321 top_size = chunksize(av->top); |
|
3322 |
|
3323 /* Release in pagesize units, keeping at least one page */ |
|
3324 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz; |
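  /*
    Worked example (illustrative; assumes 4K pages and MINSIZE == 16):
    with top_size == 40960 (ten pages) and pad == 0, the expression is
    (40960 - 16 + 4095)/4096 - 1 == 9, so extra == 36864 bytes; nine
    pages are handed back and roughly one page (plus MINSIZE) stays in
    top.
  */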
|
3325 |
|
3326 if (extra > 0) { |
|
3327 |
|
3328 /* |
|
3329 Only proceed if end of memory is where we last set it. |
|
3330 This avoids problems if there were foreign sbrk calls. |
|
3331 */ |
|
3332 current_brk = (char*)(MORECORE(0)); |
|
3333 if (current_brk == (char*)(av->top) + top_size) { |
|
3334 |
|
3335 /* |
|
3336 Attempt to release memory. We ignore MORECORE return value, |
|
3337 and instead call again to find out where new end of memory is. |
|
3338 This avoids problems if the first call releases less than we asked, |
|
3339 or if failure somehow altered the brk value. (We could still |
|
3340 encounter problems if it altered brk in some very bad way, |
|
3341 but the only thing we can do is adjust anyway, which will cause |
|
3342 some downstream failure.) |
|
3343 */ |
|
3344 |
|
3345 MORECORE(-extra); |
|
3346 new_brk = (char*)(MORECORE(0)); |
|
3347 |
|
3348 if (new_brk != (char*)MORECORE_FAILURE) { |
|
3349 released = (long)(current_brk - new_brk); |
|
3350 |
|
3351 if (released != 0) { |
|
3352 /* Success. Adjust top. */ |
|
3353 av->sbrked_mem -= released; |
|
3354 set_head(av->top, (top_size - released) | PREV_INUSE); |
|
3355 check_malloc_state(); |
|
3356 return 1; |
|
3357 } |
|
3358 } |
|
3359 } |
|
3360 } |
|
3361 return 0; |
|
3362 } |
|
3363 |
|
3364 /* |
|
3365 ------------------------------ malloc ------------------------------ |
|
3366 */ |
|
3367 |
|
3368 |
|
3369 #if __STD_C |
|
3370 Void_t* mALLOc(size_t bytes) |
|
3371 #else |
|
3372 Void_t* mALLOc(bytes) size_t bytes; |
|
3373 #endif |
|
3374 { |
|
3375 mstate av = get_malloc_state(); |
|
3376 |
|
3377 INTERNAL_SIZE_T nb; /* normalized request size */ |
|
3378 unsigned int idx; /* associated bin index */ |
|
3379 mbinptr bin; /* associated bin */ |
|
3380 mfastbinptr* fb; /* associated fastbin */ |
|
3381 |
|
3382 mchunkptr victim; /* inspected/selected chunk */ |
|
3383 INTERNAL_SIZE_T size; /* its size */ |
|
3384 int victim_index; /* its bin index */ |
|
3385 |
|
3386 mchunkptr remainder; /* remainder from a split */ |
|
3387 CHUNK_SIZE_T remainder_size; /* its size */ |
|
3388 |
|
3389 unsigned int block; /* bit map traverser */ |
|
3390 unsigned int bit; /* bit map traverser */ |
|
3391 unsigned int map; /* current word of binmap */ |
|
3392 |
|
3393 mchunkptr fwd; /* misc temp for linking */ |
|
3394 mchunkptr bck; /* misc temp for linking */ |
|
3395 |
|
3396 /* |
|
3397 Convert request size to internal form by adding SIZE_SZ bytes |
|
3398 overhead plus possibly more to obtain necessary alignment and/or |
|
3399 to obtain a size of at least MINSIZE, the smallest allocatable |
|
3400 size. Also, checked_request2size traps (returning 0) request sizes |
|
3401 that are so large that they wrap around zero when padded and |
|
3402 aligned. |
|
3403 */ |
|
3404 |
|
3405 checked_request2size(bytes, nb); |
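  /*
    For example (illustrative; assumes SIZE_SZ == 4, MALLOC_ALIGNMENT == 8
    and MINSIZE == 16): a request of 20 bytes becomes nb == 24 (20 + 4
    rounded up to a multiple of 8), while a request of 5 bytes is bumped
    to the minimum chunk size, nb == 16.
  */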
|
3406 |
|
3407 /* |
|
3408 Bypass search if no frees yet |
|
3409 */ |
|
3410 if (!have_anychunks(av)) { |
|
3411 if (av->max_fast == 0) /* initialization check */ |
|
3412 malloc_consolidate(av); |
|
3413 goto use_top; |
|
3414 } |
|
3415 |
|
3416 /* |
|
3417 If the size qualifies as a fastbin, first check corresponding bin. |
|
3418 */ |
|
3419 |
|
3420 if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) { |
|
3421 fb = &(av->fastbins[(fastbin_index(nb))]); |
|
3422 if ( (victim = *fb) != 0) { |
|
3423 *fb = victim->fd; |
|
3424 check_remalloced_chunk(victim, nb); |
|
3425 return chunk2mem(victim); |
|
3426 } |
|
3427 } |
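  /*
    Note: fastbins are singly linked LIFO lists threaded through fd.
    The pop above is the whole fast path: fastbin chunks keep their
    inuse bits set and are never coalesced, so no size or bin
    bookkeeping is needed here.
  */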
|
3428 |
|
3429 /* |
|
3430 If a small request, check regular bin. Since these "smallbins" |
|
3431 hold one size each, no searching within bins is necessary. |
|
3432 (For a large request, we need to wait until unsorted chunks are |
|
3433 processed to find the best fit. But for small ones, fits are exact |
|
3434 anyway, so we can check now, which is faster.) |
|
3435 */ |
|
3436 |
|
3437 if (in_smallbin_range(nb)) { |
|
3438 idx = smallbin_index(nb); |
|
3439 bin = bin_at(av,idx); |
|
3440 |
|
3441 if ( (victim = last(bin)) != bin) { |
|
3442 bck = victim->bk; |
|
3443 set_inuse_bit_at_offset(victim, nb); |
|
3444 bin->bk = bck; |
|
3445 bck->fd = bin; |
|
3446 |
|
3447 check_malloced_chunk(victim, nb); |
|
3448 return chunk2mem(victim); |
|
3449 } |
|
3450 } |
|
3451 |
|
3452 /* |
|
3453 If this is a large request, consolidate fastbins before continuing. |
|
3454 While it might look excessive to kill all fastbins before |
|
3455 even seeing if there is space available, this avoids |
|
3456 fragmentation problems normally associated with fastbins. |
|
3457 Also, in practice, programs tend to have runs of either small or |
|
3458 large requests, but less often mixtures, so consolidation is not |
|
3459 invoked all that often in most programs. And the programs in |
|
3460 which it is invoked frequently are those that would otherwise fragment. |
|
3461 */ |
|
3462 |
|
3463 else { |
|
3464 idx = largebin_index(nb); |
|
3465 if (have_fastchunks(av)) |
|
3466 malloc_consolidate(av); |
|
3467 } |
|
3468 |
|
3469 /* |
|
3470 Process recently freed or remaindered chunks, taking one only if |
|
3471 it is an exact fit, or, if this is a small request, the chunk is the remainder from |
|
3472 the most recent non-exact fit. Place other traversed chunks in |
|
3473 bins. Note that this step is the only place in any routine where |
|
3474 chunks are placed in bins. |
|
3475 */ |
|
3476 |
|
3477 while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) { |
|
3478 bck = victim->bk; |
|
3479 size = chunksize(victim); |
|
3480 |
|
3481 /* |
|
3482 If a small request, try to use last remainder if it is the |
|
3483 only chunk in unsorted bin. This helps promote locality for |
|
3484 runs of consecutive small requests. This is the only |
|
3485 exception to best-fit, and applies only when there is |
|
3486 no exact fit for a small chunk. |
|
3487 */ |
|
3488 |
|
3489 if (in_smallbin_range(nb) && |
|
3490 bck == unsorted_chunks(av) && |
|
3491 victim == av->last_remainder && |
|
3492 (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) { |
|
3493 |
|
3494 /* split and reattach remainder */ |
|
3495 remainder_size = size - nb; |
|
3496 remainder = chunk_at_offset(victim, nb); |
|
3497 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder; |
|
3498 av->last_remainder = remainder; |
|
3499 remainder->bk = remainder->fd = unsorted_chunks(av); |
|
3500 |
|
3501 set_head(victim, nb | PREV_INUSE); |
|
3502 set_head(remainder, remainder_size | PREV_INUSE); |
|
3503 set_foot(remainder, remainder_size); |
|
3504 |
|
3505 check_malloced_chunk(victim, nb); |
|
3506 return chunk2mem(victim); |
|
3507 } |
|
3508 |
|
3509 /* remove from unsorted list */ |
|
3510 unsorted_chunks(av)->bk = bck; |
|
3511 bck->fd = unsorted_chunks(av); |
|
3512 |
|
3513 /* Take now instead of binning if exact fit */ |
|
3514 |
|
3515 if (size == nb) { |
|
3516 set_inuse_bit_at_offset(victim, size); |
|
3517 check_malloced_chunk(victim, nb); |
|
3518 return chunk2mem(victim); |
|
3519 } |
|
3520 |
|
3521 /* place chunk in bin */ |
|
3522 |
|
3523 if (in_smallbin_range(size)) { |
|
3524 victim_index = smallbin_index(size); |
|
3525 bck = bin_at(av, victim_index); |
|
3526 fwd = bck->fd; |
|
3527 } |
|
3528 else { |
|
3529 victim_index = largebin_index(size); |
|
3530 bck = bin_at(av, victim_index); |
|
3531 fwd = bck->fd; |
|
3532 |
|
3533 if (fwd != bck) { |
|
3534 /* if smaller than smallest, place first */ |
|
3535 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) { |
|
3536 fwd = bck; |
|
3537 bck = bck->bk; |
|
3538 } |
|
3539 else if ((CHUNK_SIZE_T)(size) >= |
|
3540 (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) { |
|
3541 |
|
3542 /* maintain large bins in sorted order */ |
|
3543 size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */ |
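  /*
    This works because every chunk resting in a bin has PREV_INUSE set
    in its size word (after consolidation its physical predecessor is
    always in use), so OR-ing the bit into the probe value keeps the
    comparisons consistent without masking fwd->size on each step.
  */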
|
3544 while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size)) |
|
3545 fwd = fwd->fd; |
|
3546 bck = fwd->bk; |
|
3547 } |
|
3548 } |
|
3549 } |
|
3550 |
|
3551 mark_bin(av, victim_index); |
|
3552 victim->bk = bck; |
|
3553 victim->fd = fwd; |
|
3554 fwd->bk = victim; |
|
3555 bck->fd = victim; |
|
3556 } |
|
3557 |
|
3558 /* |
|
3559 If a large request, scan through the chunks of current bin to |
|
3560 find one that fits. (This will be the smallest that fits unless |
|
3561 FIRST_SORTED_BIN_SIZE has been changed from default.) This is |
|
3562 the only step where an unbounded number of chunks might be |
|
3563 scanned without doing anything useful with them. However the |
|
3564 lists tend to be short. |
|
3565 */ |
|
3566 |
|
3567 if (!in_smallbin_range(nb)) { |
|
3568 bin = bin_at(av, idx); |
|
3569 |
|
3570 for (victim = last(bin); victim != bin; victim = victim->bk) { |
|
3571 size = chunksize(victim); |
|
3572 |
|
3573 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) { |
|
3574 remainder_size = size - nb; |
|
3575 unlink(victim, bck, fwd); |
|
3576 |
|
3577 /* Exhaust */ |
|
3578 if (remainder_size < MINSIZE) { |
|
3579 set_inuse_bit_at_offset(victim, size); |
|
3580 check_malloced_chunk(victim, nb); |
|
3581 return chunk2mem(victim); |
|
3582 } |
|
3583 /* Split */ |
|
3584 else { |
|
3585 remainder = chunk_at_offset(victim, nb); |
|
3586 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder; |
|
3587 remainder->bk = remainder->fd = unsorted_chunks(av); |
|
3588 set_head(victim, nb | PREV_INUSE); |
|
3589 set_head(remainder, remainder_size | PREV_INUSE); |
|
3590 set_foot(remainder, remainder_size); |
|
3591 check_malloced_chunk(victim, nb); |
|
3592 return chunk2mem(victim); |
|
3593 } |
|
3594 } |
|
3595 } |
|
3596 } |
|
3597 |
|
3598 /* |
|
3599 Search for a chunk by scanning bins, starting with next largest |
|
3600 bin. This search is strictly by best-fit; i.e., the smallest |
|
3601 (with ties going to approximately the least recently used) chunk |
|
3602 that fits is selected. |
|
3603 |
|
3604 The bitmap avoids needing to check that most blocks are nonempty. |
|
3605 */ |
|
3606 |
|
3607 ++idx; |
|
3608 bin = bin_at(av,idx); |
|
3609 block = idx2block(idx); |
|
3610 map = av->binmap[block]; |
|
3611 bit = idx2bit(idx); |
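  /*
    Binmap note (default configuration, BINMAPSHIFT == 5): each binmap
    word covers 32 bins; idx2block() picks the word and idx2bit() the
    bit within it, so the loop below can skip an entire empty 32-bin
    block with a single word comparison.
  */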
|
3612 |
|
3613 for (;;) { |
|
3614 |
|
3615 /* Skip rest of block if there are no more set bits in this block. */ |
|
3616 if (bit > map || bit == 0) { |
|
3617 do { |
|
3618 if (++block >= BINMAPSIZE) /* out of bins */ |
|
3619 goto use_top; |
|
3620 } while ( (map = av->binmap[block]) == 0); |
|
3621 |
|
3622 bin = bin_at(av, (block << BINMAPSHIFT)); |
|
3623 bit = 1; |
|
3624 } |
|
3625 |
|
3626 /* Advance to bin with set bit. There must be one. */ |
|
3627 while ((bit & map) == 0) { |
|
3628 bin = next_bin(bin); |
|
3629 bit <<= 1; |
|
3630 assert(bit != 0); |
|
3631 } |
|
3632 |
|
3633 /* Inspect the bin. It is likely to be non-empty */ |
|
3634 victim = last(bin); |
|
3635 |
|
3636 /* If a false alarm (empty bin), clear the bit. */ |
|
3637 if (victim == bin) { |
|
3638 av->binmap[block] = map &= ~bit; /* Write through */ |
|
3639 bin = next_bin(bin); |
|
3640 bit <<= 1; |
|
3641 } |
|
3642 |
|
3643 else { |
|
3644 size = chunksize(victim); |
|
3645 |
|
3646 /* We know the first chunk in this bin is big enough to use. */ |
|
3647 assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)); |
|
3648 |
|
3649 remainder_size = size - nb; |
|
3650 |
|
3651 /* unlink */ |
|
3652 bck = victim->bk; |
|
3653 bin->bk = bck; |
|
3654 bck->fd = bin; |
|
3655 |
|
3656 /* Exhaust */ |
|
3657 if (remainder_size < MINSIZE) { |
|
3658 set_inuse_bit_at_offset(victim, size); |
|
3659 check_malloced_chunk(victim, nb); |
|
3660 return chunk2mem(victim); |
|
3661 } |
|
3662 |
|
3663 /* Split */ |
|
3664 else { |
|
3665 remainder = chunk_at_offset(victim, nb); |
|
3666 |
|
3667 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder; |
|
3668 remainder->bk = remainder->fd = unsorted_chunks(av); |
|
3669 /* advertise as last remainder */ |
|
3670 if (in_smallbin_range(nb)) |
|
3671 av->last_remainder = remainder; |
|
3672 |
|
3673 set_head(victim, nb | PREV_INUSE); |
|
3674 set_head(remainder, remainder_size | PREV_INUSE); |
|
3675 set_foot(remainder, remainder_size); |
|
3676 check_malloced_chunk(victim, nb); |
|
3677 return chunk2mem(victim); |
|
3678 } |
|
3679 } |
|
3680 } |
|
3681 |
|
3682 use_top: |
|
3683 /* |
|
3684 If large enough, split off the chunk bordering the end of memory |
|
3685 (held in av->top). Note that this is in accord with the best-fit |
|
3686 search rule. In effect, av->top is treated as larger (and thus |
|
3687 less well fitting) than any other available chunk since it can |
|
3688 be extended to be as large as necessary (up to system |
|
3689 limitations). |
|
3690 |
|
3691 We require that av->top always exists (i.e., has size >= |
|
3692 MINSIZE) after initialization, so if it would otherwise be |
|
3693 exhausted by the current request, it is replenished. (The main |
|
3694 reason for ensuring it exists is that we may need MINSIZE space |
|
3695 to put in fenceposts in sysmalloc.) |
|
3696 */ |
|
3697 |
|
3698 victim = av->top; |
|
3699 size = chunksize(victim); |
|
3700 |
|
3701 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) { |
|
3702 remainder_size = size - nb; |
|
3703 remainder = chunk_at_offset(victim, nb); |
|
3704 av->top = remainder; |
|
3705 set_head(victim, nb | PREV_INUSE); |
|
3706 set_head(remainder, remainder_size | PREV_INUSE); |
|
3707 |
|
3708 check_malloced_chunk(victim, nb); |
|
3709 return chunk2mem(victim); |
|
3710 } |
|
3711 |
|
3712 /* |
|
3713 If no space in top, relay to handle system-dependent cases |
|
3714 */ |
|
3715 return sYSMALLOc(nb, av); |
|
3716 } |
|
3717 |
|
3718 /* |
|
3719 ------------------------------ free ------------------------------ |
|
3720 */ |
|
3721 |
|
3722 #if __STD_C |
|
3723 void fREe(Void_t* mem) |
|
3724 #else |
|
3725 void fREe(mem) Void_t* mem; |
|
3726 #endif |
|
3727 { |
|
3728 mstate av = get_malloc_state(); |
|
3729 |
|
3730 mchunkptr p; /* chunk corresponding to mem */ |
|
3731 INTERNAL_SIZE_T size; /* its size */ |
|
3732 mfastbinptr* fb; /* associated fastbin */ |
|
3733 mchunkptr nextchunk; /* next contiguous chunk */ |
|
3734 INTERNAL_SIZE_T nextsize; /* its size */ |
|
3735 int nextinuse; /* true if nextchunk is used */ |
|
3736 INTERNAL_SIZE_T prevsize; /* size of previous contiguous chunk */ |
|
3737 mchunkptr bck; /* misc temp for linking */ |
|
3738 mchunkptr fwd; /* misc temp for linking */ |
|
3739 |
|
3740 /* free(0) has no effect */ |
|
3741 if (mem != 0) { |
|
3742 p = mem2chunk(mem); |
|
3743 size = chunksize(p); |
|
3744 |
|
3745 check_inuse_chunk(p); |
|
3746 |
|
3747 /* |
|
3748 If eligible, place chunk on a fastbin so it can be found |
|
3749 and used quickly in malloc. |
|
3750 */ |
|
3751 |
|
3752 if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast) |
|
3753 |
|
3754 #if TRIM_FASTBINS |
|
3755 /* |
|
3756 If TRIM_FASTBINS set, don't place chunks |
|
3757 bordering top into fastbins |
|
3758 */ |
|
3759 && (chunk_at_offset(p, size) != av->top) |
|
3760 #endif |
|
3761 ) { |
|
3762 |
|
3763 set_fastchunks(av); |
|
3764 fb = &(av->fastbins[fastbin_index(size)]); |
|
3765 p->fd = *fb; |
|
3766 *fb = p; |
|
3767 } |
|
3768 |
|
3769 /* |
|
3770 Consolidate other non-mmapped chunks as they arrive. |
|
3771 */ |
|
3772 |
|
3773 else if (!chunk_is_mmapped(p)) { |
|
3774 set_anychunks(av); |
|
3775 |
|
3776 nextchunk = chunk_at_offset(p, size); |
|
3777 nextsize = chunksize(nextchunk); |
|
3778 |
|
3779 /* consolidate backward */ |
|
3780 if (!prev_inuse(p)) { |
|
3781 prevsize = p->prev_size; |
|
3782 size += prevsize; |
|
3783 p = chunk_at_offset(p, -((long) prevsize)); |
|
3784 unlink(p, bck, fwd); |
|
3785 } |
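  /*
    The backward merge relies on boundary tags: because the previous
    chunk is free, its size was recorded in p->prev_size by set_foot(),
    so we can step back to it and unlink it from whatever bin holds it
    before absorbing it into the chunk being freed.
  */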
|
3786 |
|
3787 if (nextchunk != av->top) { |
|
3788 /* get and clear inuse bit */ |
|
3789 nextinuse = inuse_bit_at_offset(nextchunk, nextsize); |
|
3790 set_head(nextchunk, nextsize); |
|
3791 |
|
3792 /* consolidate forward */ |
|
3793 if (!nextinuse) { |
|
3794 unlink(nextchunk, bck, fwd); |
|
3795 size += nextsize; |
|
3796 } |
|
3797 |
|
3798 /* |
|
3799 Place the chunk in unsorted chunk list. Chunks are |
|
3800 not placed into regular bins until after they have |
|
3801 been given one chance to be used in malloc. |
|
3802 */ |
|
3803 |
|
3804 bck = unsorted_chunks(av); |
|
3805 fwd = bck->fd; |
|
3806 p->bk = bck; |
|
3807 p->fd = fwd; |
|
3808 bck->fd = p; |
|
3809 fwd->bk = p; |
|
3810 |
|
3811 set_head(p, size | PREV_INUSE); |
|
3812 set_foot(p, size); |
|
3813 |
|
3814 check_free_chunk(p); |
|
3815 } |
|
3816 |
|
3817 /* |
|
3818 If the chunk borders the current high end of memory, |
|
3819 consolidate into top |
|
3820 */ |
|
3821 |
|
3822 else { |
|
3823 size += nextsize; |
|
3824 set_head(p, size | PREV_INUSE); |
|
3825 av->top = p; |
|
3826 check_chunk(p); |
|
3827 } |
|
3828 |
|
3829 /* |
|
3830 If freeing a large space, consolidate possibly-surrounding |
|
3831 chunks. Then, if the total unused topmost memory exceeds trim |
|
3832 threshold, ask malloc_trim to reduce top. |
|
3833 |
|
3834 Unless max_fast is 0, we don't know if there are fastbins |
|
3835 bordering top, so we cannot tell for sure whether threshold |
|
3836 has been reached unless fastbins are consolidated. But we |
|
3837 don't want to consolidate on each free. As a compromise, |
|
3838 consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD |
|
3839 is reached. |
|
3840 */ |
|
3841 |
|
3842 if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) { |
|
3843 if (have_fastchunks(av)) |
|
3844 malloc_consolidate(av); |
|
3845 |
|
3846 #ifndef MORECORE_CANNOT_TRIM |
|
3847 if ((CHUNK_SIZE_T)(chunksize(av->top)) >= |
|
3848 (CHUNK_SIZE_T)(av->trim_threshold)) |
|
3849 sYSTRIm(av->top_pad, av); |
|
3850 #endif |
|
3851 } |
|
3852 |
|
3853 } |
|
3854 /* |
|
3855 If the chunk was allocated via mmap, release via munmap() |
|
3856 Note that if HAVE_MMAP is false but chunk_is_mmapped is |
|
3857 true, then user must have overwritten memory. There's nothing |
|
3858 we can do to catch this error unless DEBUG is set, in which case |
|
3859 check_inuse_chunk (above) will have triggered an error. |
|
3860 */ |
|
3861 |
|
3862 else { |
|
3863 #if HAVE_MMAP |
|
3864 int ret; |
|
3865 INTERNAL_SIZE_T offset = p->prev_size; |
|
3866 av->n_mmaps--; |
|
3867 av->mmapped_mem -= (size + offset); |
|
3868 ret = munmap((char*)p - offset, size + offset); |
|
3869 /* munmap returns non-zero on failure */ |
|
3870 assert(ret == 0); |
|
3871 #endif |
|
3872 } |
|
3873 } |
|
3874 } |
|
3875 |
|
3876 /* |
|
3877 ------------------------- malloc_consolidate ------------------------- |
|
3878 |
|
3879 malloc_consolidate is a specialized version of free() that tears |
|
3880 down chunks held in fastbins. Free itself cannot be used for this |
|
3881 purpose since, among other things, it might place chunks back onto |
|
3882 fastbins. So, instead, we need to use a minor variant of the same |
|
3883 code. |
|
3884 |
|
3885 Also, because this routine needs to be called the first time through |
|
3886 malloc anyway, it turns out to be the perfect place to trigger |
|
3887 initialization code. |
|
3888 */ |
|
3889 |
|
3890 #if __STD_C |
|
3891 static void malloc_consolidate(mstate av) |
|
3892 #else |
|
3893 static void malloc_consolidate(av) mstate av; |
|
3894 #endif |
|
3895 { |
|
3896 mfastbinptr* fb; /* current fastbin being consolidated */ |
|
3897 mfastbinptr* maxfb; /* last fastbin (for loop control) */ |
|
3898 mchunkptr p; /* current chunk being consolidated */ |
|
3899 mchunkptr nextp; /* next chunk to consolidate */ |
|
3900 mchunkptr unsorted_bin; /* bin header */ |
|
3901 mchunkptr first_unsorted; /* chunk to link to */ |
|
3902 |
|
3903 /* These have same use as in free() */ |
|
3904 mchunkptr nextchunk; |
|
3905 INTERNAL_SIZE_T size; |
|
3906 INTERNAL_SIZE_T nextsize; |
|
3907 INTERNAL_SIZE_T prevsize; |
|
3908 int nextinuse; |
|
3909 mchunkptr bck; |
|
3910 mchunkptr fwd; |
|
3911 |
|
3912 /* |
|
3913 If max_fast is 0, we know that av hasn't |
|
3914 yet been initialized, in which case do so below |
|
3915 */ |
|
3916 |
|
3917 if (av->max_fast != 0) { |
|
3918 clear_fastchunks(av); |
|
3919 |
|
3920 unsorted_bin = unsorted_chunks(av); |
|
3921 |
|
3922 /* |
|
3923 Remove each chunk from fast bin and consolidate it, placing it |
|
3924 then in unsorted bin. Among other reasons for doing this, |
|
3925 placing in unsorted bin avoids needing to calculate actual bins |
|
3926 until malloc is sure that chunks aren't immediately going to be |
|
3927 reused anyway. |
|
3928 */ |
|
3929 |
|
3930 maxfb = &(av->fastbins[fastbin_index(av->max_fast)]); |
|
3931 fb = &(av->fastbins[0]); |
|
3932 do { |
|
3933 if ( (p = *fb) != 0) { |
|
3934 *fb = 0; |
|
3935 |
|
3936 do { |
|
3937 check_inuse_chunk(p); |
|
3938 nextp = p->fd; |
|
3939 |
|
3940 /* Slightly streamlined version of consolidation code in free() */ |
|
3941 size = p->size & ~PREV_INUSE; |
|
3942 nextchunk = chunk_at_offset(p, size); |
|
3943 nextsize = chunksize(nextchunk); |
|
3944 |
|
3945 if (!prev_inuse(p)) { |
|
3946 prevsize = p->prev_size; |
|
3947 size += prevsize; |
|
3948 p = chunk_at_offset(p, -((long) prevsize)); |
|
3949 unlink(p, bck, fwd); |
|
3950 } |
|
3951 |
|
3952 if (nextchunk != av->top) { |
|
3953 nextinuse = inuse_bit_at_offset(nextchunk, nextsize); |
|
3954 set_head(nextchunk, nextsize); |
|
3955 |
|
3956 if (!nextinuse) { |
|
3957 size += nextsize; |
|
3958 unlink(nextchunk, bck, fwd); |
|
3959 } |
|
3960 |
|
3961 first_unsorted = unsorted_bin->fd; |
|
3962 unsorted_bin->fd = p; |
|
3963 first_unsorted->bk = p; |
|
3964 |
|
3965 set_head(p, size | PREV_INUSE); |
|
3966 p->bk = unsorted_bin; |
|
3967 p->fd = first_unsorted; |
|
3968 set_foot(p, size); |
|
3969 } |
|
3970 |
|
3971 else { |
|
3972 size += nextsize; |
|
3973 set_head(p, size | PREV_INUSE); |
|
3974 av->top = p; |
|
3975 } |
|
3976 |
|
3977 } while ( (p = nextp) != 0); |
|
3978 |
|
3979 } |
|
3980 } while (fb++ != maxfb); |
|
3981 } |
|
3982 else { |
|
3983 malloc_init_state(av); |
|
3984 check_malloc_state(); |
|
3985 } |
|
3986 } |
|
3987 |
|
3988 /* |
|
3989 ------------------------------ realloc ------------------------------ |
|
3990 */ |
|
3991 |
|
3992 |
|
3993 #if __STD_C |
|
3994 Void_t* rEALLOc(Void_t* oldmem, size_t bytes) |
|
3995 #else |
|
3996 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes; |
|
3997 #endif |
|
3998 { |
|
3999 mstate av = get_malloc_state(); |
|
4000 |
|
4001 INTERNAL_SIZE_T nb; /* padded request size */ |
|
4002 |
|
4003 mchunkptr oldp; /* chunk corresponding to oldmem */ |
|
4004 INTERNAL_SIZE_T oldsize; /* its size */ |
|
4005 |
|
4006 mchunkptr newp; /* chunk to return */ |
|
4007 INTERNAL_SIZE_T newsize; /* its size */ |
|
4008 Void_t* newmem; /* corresponding user mem */ |
|
4009 |
|
4010 mchunkptr next; /* next contiguous chunk after oldp */ |
|
4011 |
|
4012 mchunkptr remainder; /* extra space at end of newp */ |
|
4013 CHUNK_SIZE_T remainder_size; /* its size */ |
|
4014 |
|
4015 mchunkptr bck; /* misc temp for linking */ |
|
4016 mchunkptr fwd; /* misc temp for linking */ |
|
4017 |
|
4018 CHUNK_SIZE_T copysize; /* bytes to copy */ |
|
4019 unsigned int ncopies; /* INTERNAL_SIZE_T words to copy */ |
|
4020 INTERNAL_SIZE_T* s; /* copy source */ |
|
4021 INTERNAL_SIZE_T* d; /* copy destination */ |
|
4022 |
|
4023 |
|
4024 #ifdef REALLOC_ZERO_BYTES_FREES |
|
4025 if (bytes == 0) { |
|
4026 fREe(oldmem); |
|
4027 return 0; |
|
4028 } |
|
4029 #endif |
|
4030 |
|
4031 /* realloc of null is supposed to be same as malloc */ |
|
4032 if (oldmem == 0) return mALLOc(bytes); |
|
4033 |
|
4034 checked_request2size(bytes, nb); |
|
4035 |
|
4036 oldp = mem2chunk(oldmem); |
|
4037 oldsize = chunksize(oldp); |
|
4038 |
|
4039 check_inuse_chunk(oldp); |
|
4040 |
|
4041 if (!chunk_is_mmapped(oldp)) { |
|
4042 |
|
4043 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) { |
|
4044 /* already big enough; split below */ |
|
4045 newp = oldp; |
|
4046 newsize = oldsize; |
|
4047 } |
|
4048 |
|
4049 else { |
|
4050 next = chunk_at_offset(oldp, oldsize); |
|
4051 |
|
4052 /* Try to expand forward into top */ |
|
4053 if (next == av->top && |
|
4054 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >= |
|
4055 (CHUNK_SIZE_T)(nb + MINSIZE)) { |
|
4056 set_head_size(oldp, nb); |
|
4057 av->top = chunk_at_offset(oldp, nb); |
|
4058 set_head(av->top, (newsize - nb) | PREV_INUSE); |
|
4059 return chunk2mem(oldp); |
|
4060 } |
|
4061 |
|
4062 /* Try to expand forward into next chunk; split off remainder below */ |
|
4063 else if (next != av->top && |
|
4064 !inuse(next) && |
|
4065 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >= |
|
4066 (CHUNK_SIZE_T)(nb)) { |
|
4067 newp = oldp; |
|
4068 unlink(next, bck, fwd); |
|
4069 } |
|
4070 |
|
4071 /* allocate, copy, free */ |
|
4072 else { |
|
4073 newmem = mALLOc(nb - MALLOC_ALIGN_MASK); |
|
4074 if (newmem == 0) |
|
4075 return 0; /* propagate failure */ |
|
4076 |
|
4077 newp = mem2chunk(newmem); |
|
4078 newsize = chunksize(newp); |
|
4079 |
|
4080 /* |
|
4081 Avoid copy if newp is next chunk after oldp. |
|
4082 */ |
|
4083 if (newp == next) { |
|
4084 newsize += oldsize; |
|
4085 newp = oldp; |
|
4086 } |
|
4087 else { |
|
4088 /* |
|
4089 Unroll copy of <= 36 bytes (72 if 8-byte sizes) |
|
4090 We know that contents have an odd number of |
|
4091 INTERNAL_SIZE_T-sized words; minimally 3. |
|
4092 */ |
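  /*
    Why this holds (default configuration): ncopies equals
    oldsize/SIZE_SZ - 1; oldsize is a multiple of MALLOC_ALIGNMENT
    (2*SIZE_SZ) and at least MINSIZE, so oldsize/SIZE_SZ is even and at
    least 4, leaving an odd ncopies of at least 3.
  */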
|
4093 |
|
4094 copysize = oldsize - SIZE_SZ; |
|
4095 s = (INTERNAL_SIZE_T*)(oldmem); |
|
4096 d = (INTERNAL_SIZE_T*)(newmem); |
|
4097 ncopies = copysize / sizeof(INTERNAL_SIZE_T); |
|
4098 assert(ncopies >= 3); |
|
4099 |
|
4100 if (ncopies > 9) |
|
4101 MALLOC_COPY(d, s, copysize); |
|
4102 |
|
4103 else { |
|
4104 *(d+0) = *(s+0); |
|
4105 *(d+1) = *(s+1); |
|
4106 *(d+2) = *(s+2); |
|
4107 if (ncopies > 4) { |
|
4108 *(d+3) = *(s+3); |
|
4109 *(d+4) = *(s+4); |
|
4110 if (ncopies > 6) { |
|
4111 *(d+5) = *(s+5); |
|
4112 *(d+6) = *(s+6); |
|
4113 if (ncopies > 8) { |
|
4114 *(d+7) = *(s+7); |
|
4115 *(d+8) = *(s+8); |
|
4116 } |
|
4117 } |
|
4118 } |
|
4119 } |
|
4120 |
|
4121 fREe(oldmem); |
|
4122 check_inuse_chunk(newp); |
|
4123 return chunk2mem(newp); |
|
4124 } |
|
4125 } |
|
4126 } |
|
4127 |
|
4128 /* If possible, free extra space in old or extended chunk */ |
|
4129 |
|
4130 assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb)); |
|
4131 |
|
4132 remainder_size = newsize - nb; |
|
4133 |
|
4134 if (remainder_size < MINSIZE) { /* not enough extra to split off */ |
|
4135 set_head_size(newp, newsize); |
|
4136 set_inuse_bit_at_offset(newp, newsize); |
|
4137 } |
|
4138 else { /* split remainder */ |
|
4139 remainder = chunk_at_offset(newp, nb); |
|
4140 set_head_size(newp, nb); |
|
4141 set_head(remainder, remainder_size | PREV_INUSE); |
|
4142 /* Mark remainder as inuse so free() won't complain */ |
|
4143 set_inuse_bit_at_offset(remainder, remainder_size); |
|
4144 fREe(chunk2mem(remainder)); |
|
4145 } |
|
4146 |
|
4147 check_inuse_chunk(newp); |
|
4148 return chunk2mem(newp); |
|
4149 } |
|
4150 |
|
4151 /* |
|
4152 Handle mmap cases |
|
4153 */ |
|
4154 |
|
4155 else { |
|
4156 #if HAVE_MMAP |
|
4157 |
|
4158 #if HAVE_MREMAP |
|
4159 INTERNAL_SIZE_T offset = oldp->prev_size; |
|
4160 size_t pagemask = av->pagesize - 1; |
|
4161 char *cp; |
|
4162 CHUNK_SIZE_T sum; |
|
4163 |
|
4164 /* Note the extra SIZE_SZ overhead */ |
|
4165 newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask; |
|
4166 |
|
4167 /* don't need to remap if still within same page */ |
|
4168 if (oldsize == newsize - offset) |
|
4169 return oldmem; |
|
4170 |
|
4171 cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1); |
|
4172 |
|
4173 if (cp != (char*)MORECORE_FAILURE) { |
|
4174 |
|
4175 newp = (mchunkptr)(cp + offset); |
|
4176 set_head(newp, (newsize - offset)|IS_MMAPPED); |
|
4177 |
|
4178 assert(aligned_OK(chunk2mem(newp))); |
|
4179 assert(newp->prev_size == offset); |
|
4180 |
|
4181 /* update statistics */ |
|
4182 sum = av->mmapped_mem += newsize - oldsize; |
|
4183 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem)) |
|
4184 av->max_mmapped_mem = sum; |
|
4185 sum += av->sbrked_mem; |
|
4186 if (sum > (CHUNK_SIZE_T)(av->max_total_mem)) |
|
4187 av->max_total_mem = sum; |
|
4188 |
|
4189 return chunk2mem(newp); |
|
4190 } |
|
4191 #endif |
|
4192 |
|
4193 /* Note the extra SIZE_SZ overhead. */ |
|
4194 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ)) |
|
4195 newmem = oldmem; /* do nothing */ |
|
4196 else { |
|
4197 /* Must alloc, copy, free. */ |
|
4198 newmem = mALLOc(nb - MALLOC_ALIGN_MASK); |
|
4199 if (newmem != 0) { |
|
4200 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ); |
|
4201 fREe(oldmem); |
|
4202 } |
|
4203 } |
|
4204 return newmem; |
|
4205 |
|
4206 #else |
|
4207 /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */ |
|
4208 check_malloc_state(); |
|
4209 MALLOC_FAILURE_ACTION; |
|
4210 return 0; |
|
4211 #endif |
|
4212 } |
|
4213 } |
|
4214 |
|
4215 /* |
|
4216 ------------------------------ memalign ------------------------------ |
|
4217 */ |
|
4218 |
|
4219 #if __STD_C |
|
4220 Void_t* mEMALIGn(size_t alignment, size_t bytes) |
|
4221 #else |
|
4222 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes; |
|
4223 #endif |
|
4224 { |
|
4225 INTERNAL_SIZE_T nb; /* padded request size */ |
|
4226 char* m; /* memory returned by malloc call */ |
|
4227 mchunkptr p; /* corresponding chunk */ |
|
4228 char* brk; /* alignment point within p */ |
|
4229 mchunkptr newp; /* chunk to return */ |
|
4230 INTERNAL_SIZE_T newsize; /* its size */ |
|
4231 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */ |
|
4232 mchunkptr remainder; /* spare room at end to split off */ |
|
4233 CHUNK_SIZE_T remainder_size; /* its size */ |
|
4234 INTERNAL_SIZE_T size; |
|
4235 |
|
4236 /* If need less alignment than we give anyway, just relay to malloc */ |
|
4237 |
|
4238 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes); |
|
4239 |
|
4240 /* Otherwise, ensure that it is at least a minimum chunk size */ |
|
4241 |
|
4242 if (alignment < MINSIZE) alignment = MINSIZE; |
|
4243 |
|
4244 /* Make sure alignment is power of 2 (in case MINSIZE is not). */ |
|
4245 if ((alignment & (alignment - 1)) != 0) { |
|
4246 size_t a = MALLOC_ALIGNMENT * 2; |
|
4247 while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1; |
|
4248 alignment = a; |
|
4249 } |
|
4250 |
|
4251 checked_request2size(bytes, nb); |
|
4252 |
|
4253 /* |
|
4254 Strategy: find a spot within that chunk that meets the alignment |
|
4255 request, and then possibly free the leading and trailing space. |
|
4256 */ |
|
4257 |
|
4258 |
|
4259 /* Call malloc with worst case padding to hit alignment. */ |
|
4260 |
|
4261 m = (char*)(mALLOc(nb + alignment + MINSIZE)); |
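  /*
    Worst case, the block returned above is aligned only to
    MALLOC_ALIGNMENT, so an address meeting the stronger alignment lies
    less than `alignment' bytes ahead of it; the extra alignment +
    MINSIZE requested guarantees such a spot exists inside the chunk
    with room to split a valid (>= MINSIZE) leading chunk in front of it.
  */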
|
4262 |
|
4263 if (m == 0) return 0; /* propagate failure */ |
|
4264 |
|
4265 p = mem2chunk(m); |
|
4266 |
|
4267 if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */ |
|
4268 |
|
4269 /* |
|
4270 Find an aligned spot inside chunk. Since we need to give back |
|
4271 leading space in a chunk of at least MINSIZE, if the first |
|
4272 calculation places us at a spot with less than MINSIZE leader, |
|
4273 we can move to the next aligned spot -- we've allocated enough |
|
4274 total room so that this is always possible. |
|
4275 */ |
|
4276 |
|
4277 brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) & |
|
4278 -((signed long) alignment))); |
|
4279 if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE) |
|
4280 brk += alignment; |
|
4281 |
|
4282 newp = (mchunkptr)brk; |
|
4283 leadsize = brk - (char*)(p); |
|
4284 newsize = chunksize(p) - leadsize; |
|
4285 |
|
4286 /* For mmapped chunks, just adjust offset */ |
|
4287 if (chunk_is_mmapped(p)) { |
|
4288 newp->prev_size = p->prev_size + leadsize; |
|
4289 set_head(newp, newsize|IS_MMAPPED); |
|
4290 return chunk2mem(newp); |
|
4291 } |
|
4292 |
|
4293 /* Otherwise, give back leader, use the rest */ |
|
4294 set_head(newp, newsize | PREV_INUSE); |
|
4295 set_inuse_bit_at_offset(newp, newsize); |
|
4296 set_head_size(p, leadsize); |
|
4297 fREe(chunk2mem(p)); |
|
4298 p = newp; |
|
4299 |
|
4300 assert (newsize >= nb && |
|
4301 (((PTR_UINT)(chunk2mem(p))) % alignment) == 0); |
|
4302 } |
|
4303 |
|
4304 /* Also give back spare room at the end */ |
|
4305 if (!chunk_is_mmapped(p)) { |
|
4306 size = chunksize(p); |
|
4307 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) { |
|
4308 remainder_size = size - nb; |
|
4309 remainder = chunk_at_offset(p, nb); |
|
4310 set_head(remainder, remainder_size | PREV_INUSE); |
|
4311 set_head_size(p, nb); |
|
4312 fREe(chunk2mem(remainder)); |
|
4313 } |
|
4314 } |
|
4315 |
|
4316 check_inuse_chunk(p); |
|
4317 return chunk2mem(p); |
|
4318 } |
|
4319 |
|
4320 /* |
|
4321 ------------------------------ calloc ------------------------------ |
|
4322 */ |
|
4323 |
|
4324 #if __STD_C |
|
4325 Void_t* cALLOc(size_t n_elements, size_t elem_size) |
|
4326 #else |
|
4327 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size; |
|
4328 #endif |
|
4329 { |
|
4330 mchunkptr p; |
|
4331 CHUNK_SIZE_T clearsize; |
|
4332 CHUNK_SIZE_T nclears; |
|
4333 INTERNAL_SIZE_T* d; |
|
4334 |
|
4335 Void_t* mem = mALLOc(n_elements * elem_size); |
|
4336 |
|
4337 if (mem != 0) { |
|
4338 p = mem2chunk(mem); |
|
4339 |
|
4340 if (!chunk_is_mmapped(p)) |
|
4341 { |
|
4342 /* |
|
4343 Unroll clear of <= 36 bytes (72 if 8-byte sizes) |
|
4344 We know that contents have an odd number of |
|
4345 INTERNAL_SIZE_T-sized words; minimally 3. |
|
4346 */ |
|
4347 |
|
4348 d = (INTERNAL_SIZE_T*)mem; |
|
4349 clearsize = chunksize(p) - SIZE_SZ; |
|
4350 nclears = clearsize / sizeof(INTERNAL_SIZE_T); |
|
4351 assert(nclears >= 3); |
|
4352 |
|
4353 if (nclears > 9) |
|
4354 MALLOC_ZERO(d, clearsize); |
|
4355 |
|
4356 else { |
|
4357 *(d+0) = 0; |
|
4358 *(d+1) = 0; |
|
4359 *(d+2) = 0; |
|
4360 if (nclears > 4) { |
|
4361 *(d+3) = 0; |
|
4362 *(d+4) = 0; |
|
4363 if (nclears > 6) { |
|
4364 *(d+5) = 0; |
|
4365 *(d+6) = 0; |
|
4366 if (nclears > 8) { |
|
4367 *(d+7) = 0; |
|
4368 *(d+8) = 0; |
|
4369 } |
|
4370 } |
|
4371 } |
|
4372 } |
|
4373 } |
|
4374 #if ! MMAP_CLEARS |
|
4375 else |
|
4376 { |
|
4377 d = (INTERNAL_SIZE_T*)mem; |
|
4378 /* |
|
4379 Note the additional SIZE_SZ |
|
4380 */ |
|
4381 clearsize = chunksize(p) - 2*SIZE_SZ; |
|
4382 MALLOC_ZERO(d, clearsize); |
|
4383 } |
|
4384 #endif |
|
4385 } |
|
4386 return mem; |
|
4387 } |
|
4388 |
|
4389 /* |
|
4390 ------------------------------ cfree ------------------------------ |
|
4391 */ |
|
4392 |
|
4393 #if __STD_C |
|
4394 void cFREe(Void_t *mem) |
|
4395 #else |
|
4396 void cFREe(mem) Void_t *mem; |
|
4397 #endif |
|
4398 { |
|
4399 fREe(mem); |
|
4400 } |
|
4401 |
|
4402 /* |
|
4403 ------------------------- independent_calloc ------------------------- |
|
4404 */ |
|
4405 |
|
4406 #if __STD_C |
|
4407 Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[]) |
|
4408 #else |
|
4409 Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[]; |
|
4410 #endif |
|
4411 { |
|
4412 size_t sz = elem_size; /* serves as 1-element array */ |
|
4413 /* opts arg of 3 means all elements are same size, and should be cleared */ |
|
4414 return iALLOc(n_elements, &sz, 3, chunks); |
|
4415 } |
|
4416 |
|
4417 /* |
|
4418 ------------------------- independent_comalloc ------------------------- |
|
4419 */ |
|
4420 |
|
4421 #if __STD_C |
|
4422 Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[]) |
|
4423 #else |
|
4424 Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[]; |
|
4425 #endif |
|
4426 { |
|
4427 return iALLOc(n_elements, sizes, 0, chunks); |
|
4428 } |
|
4429 |
|
4430 |
|
4431 /* |
|
4432 ------------------------------ ialloc ------------------------------ |
|
4433 ialloc provides common support for independent_X routines, handling all of |
|
4434 the combinations that can result. |
|
4435 |
|
4436 The opts arg has: |
|
4437 bit 0 set if all elements are same size (using sizes[0]) |
|
4438 bit 1 set if elements should be zeroed |
|
4439 */ |
|
4440 |
|
4441 |
|
4442 #if __STD_C |
|
4443 static Void_t** iALLOc(size_t n_elements, |
|
4444 size_t* sizes, |
|
4445 int opts, |
|
4446 Void_t* chunks[]) |
|
4447 #else |
|
4448 static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[]; |
|
4449 #endif |
|
4450 { |
|
4451 mstate av = get_malloc_state(); |
|
4452 INTERNAL_SIZE_T element_size; /* chunksize of each element, if all same */ |
|
4453 INTERNAL_SIZE_T contents_size; /* total size of elements */ |
|
4454 INTERNAL_SIZE_T array_size; /* request size of pointer array */ |
|
4455 Void_t* mem; /* malloced aggregate space */ |
|
4456 mchunkptr p; /* corresponding chunk */ |
|
4457 INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */ |
|
4458 Void_t** marray; /* either "chunks" or malloced ptr array */ |
|
4459 mchunkptr array_chunk; /* chunk for malloced ptr array */ |
|
4460 int mmx; /* to disable mmap */ |
|
4461 INTERNAL_SIZE_T size; |
|
4462 size_t i; |
|
4463 |
|
4464 /* Ensure initialization */ |
|
4465 if (av->max_fast == 0) malloc_consolidate(av); |
|
4466 |
|
4467 /* compute array length, if needed */ |
|
4468 if (chunks != 0) { |
|
4469 if (n_elements == 0) |
|
4470 return chunks; /* nothing to do */ |
|
4471 marray = chunks; |
|
4472 array_size = 0; |
|
4473 } |
|
4474 else { |
|
4475 /* if empty req, must still return chunk representing empty array */ |
|
4476 if (n_elements == 0) |
|
4477 return (Void_t**) mALLOc(0); |
|
4478 marray = 0; |
|
4479 array_size = request2size(n_elements * (sizeof(Void_t*))); |
|
4480 } |
|
4481 |
|
4482 /* compute total element size */ |
|
4483 if (opts & 0x1) { /* all-same-size */ |
|
4484 element_size = request2size(*sizes); |
|
4485 contents_size = n_elements * element_size; |
|
4486 } |
|
4487 else { /* add up all the sizes */ |
|
4488 element_size = 0; |
|
4489 contents_size = 0; |
|
4490 for (i = 0; i != n_elements; ++i) |
|
4491 contents_size += request2size(sizes[i]); |
|
4492 } |
|
4493 |
|
4494 /* subtract out alignment bytes from total to minimize overallocation */ |
|
4495 size = contents_size + array_size - MALLOC_ALIGN_MASK; |
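  /*
    With the default MALLOC_ALIGNMENT of 2*SIZE_SZ, request2size() of
    this value works out to exactly contents_size + array_size, so the
    single aggregate chunk carries no padding beyond what the
    per-element request2size() calls above already included.
  */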
|
4496 |
|
4497 /* |
|
4498 Allocate the aggregate chunk. |
|
4499 But first disable mmap so malloc won't use it, since |
|
4500 we would not be able to later free/realloc space internal |
|
4501 to a segregated mmap region. |
|
4502 */ |
|
4503 mmx = av->n_mmaps_max; /* disable mmap */ |
|
4504 av->n_mmaps_max = 0; |
|
4505 mem = mALLOc(size); |
|
4506 av->n_mmaps_max = mmx; /* reset mmap */ |
|
4507 if (mem == 0) |
|
4508 return 0; |
|
4509 |
|
4510 p = mem2chunk(mem); |
|
4511 assert(!chunk_is_mmapped(p)); |
|
4512 remainder_size = chunksize(p); |
|
4513 |
|
4514 if (opts & 0x2) { /* optionally clear the elements */ |
|
4515 MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size); |
|
4516 } |
|
4517 |
|
4518 /* If not provided, allocate the pointer array as final part of chunk */ |
|
4519 if (marray == 0) { |
|
4520 array_chunk = chunk_at_offset(p, contents_size); |
|
4521 marray = (Void_t**) (chunk2mem(array_chunk)); |
|
4522 set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE); |
|
4523 remainder_size = contents_size; |
|
4524 } |
|
4525 |
|
4526 /* split out elements */ |
|
4527 for (i = 0; ; ++i) { |
|
4528 marray[i] = chunk2mem(p); |
|
4529 if (i != n_elements-1) { |
|
4530 if (element_size != 0) |
|
4531 size = element_size; |
|
4532 else |
|
4533 size = request2size(sizes[i]); |
|
4534 remainder_size -= size; |
|
4535 set_head(p, size | PREV_INUSE); |
|
4536 p = chunk_at_offset(p, size); |
|
4537 } |
|
4538 else { /* the final element absorbs any overallocation slop */ |
|
4539 set_head(p, remainder_size | PREV_INUSE); |
|
4540 break; |
|
4541 } |
|
4542 } |
|
4543 |
|
4544 #if DEBUG |
|
4545 if (marray != chunks) { |
|
4546 /* final element must have exactly exhausted chunk */ |
|
4547 if (element_size != 0) |
|
4548 assert(remainder_size == element_size); |
|
4549 else |
|
4550 assert(remainder_size == request2size(sizes[i])); |
|
4551 check_inuse_chunk(mem2chunk(marray)); |
|
4552 } |
|
4553 |
|
4554 for (i = 0; i != n_elements; ++i) |
|
4555 check_inuse_chunk(mem2chunk(marray[i])); |
|
4556 #endif |
|
4557 |
|
4558 return marray; |
|
4559 } |
|
4560 |
|
4561 |
|
4562 /* |
|
4563 ------------------------------ valloc ------------------------------ |
|
4564 */ |
|
4565 |
|
4566 #if __STD_C |
|
4567 Void_t* vALLOc(size_t bytes) |
|
4568 #else |
|
4569 Void_t* vALLOc(bytes) size_t bytes; |
|
4570 #endif |
|
4571 { |
|
4572 /* Ensure initialization */ |
|
4573 mstate av = get_malloc_state(); |
|
4574 if (av->max_fast == 0) malloc_consolidate(av); |
|
4575 return mEMALIGn(av->pagesize, bytes); |
|
4576 } |
|
4577 |
|
4578 /* |
|
4579 ------------------------------ pvalloc ------------------------------ |
|
4580 */ |
|
4581 |
|
4582 |
|
4583 #if __STD_C |
|
4584 Void_t* pVALLOc(size_t bytes) |
|
4585 #else |
|
4586 Void_t* pVALLOc(bytes) size_t bytes; |
|
4587 #endif |
|
4588 { |
|
4589 mstate av = get_malloc_state(); |
|
4590 size_t pagesz; |
|
4591 |
|
4592 /* Ensure initialization */ |
|
4593 if (av->max_fast == 0) malloc_consolidate(av); |
|
4594 pagesz = av->pagesize; |
|
4595 return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1)); |
|
4596 } |
|
4597 |
|
4598 |
|
4599 /* |
|
4600 ------------------------------ malloc_trim ------------------------------ |
|
4601 */ |
|
4602 |
|
4603 #if __STD_C |
|
4604 int mTRIm(size_t pad) |
|
4605 #else |
|
4606 int mTRIm(pad) size_t pad; |
|
4607 #endif |
|
4608 { |
|
4609 mstate av = get_malloc_state(); |
|
4610 /* Ensure initialization/consolidation */ |
|
4611 malloc_consolidate(av); |
|
4612 |
|
4613 #ifndef MORECORE_CANNOT_TRIM |
|
4614 return sYSTRIm(pad, av); |
|
4615 #else |
|
4616 return 0; |
|
4617 #endif |
|
4618 } |
|
4619 |
|
4620 |
|
4621 /* |
|
4622 ------------------------- malloc_usable_size ------------------------- |
|
4623 */ |
|
4624 |
|
4625 #if __STD_C |
|
4626 size_t mUSABLe(Void_t* mem) |
|
4627 #else |
|
4628 size_t mUSABLe(mem) Void_t* mem; |
|
4629 #endif |
|
4630 { |
|
4631 mchunkptr p; |
|
4632 if (mem != 0) { |
|
4633 p = mem2chunk(mem); |
|
4634 if (chunk_is_mmapped(p)) |
|
4635 return chunksize(p) - 2*SIZE_SZ; |
|
4636 else if (inuse(p)) |
|
4637 return chunksize(p) - SIZE_SZ; |
|
4638 } |
|
4639 return 0; |
|
4640 } |
|
4641 |
|
4642 /* |
|
4643 ------------------------------ mallinfo ------------------------------ |
|
4644 */ |
|
4645 |
|
4646 struct mallinfo mALLINFo() |
|
4647 { |
|
4648 mstate av = get_malloc_state(); |
|
4649 struct mallinfo mi; |
|
4650 int i; |
|
4651 mbinptr b; |
|
4652 mchunkptr p; |
|
4653 INTERNAL_SIZE_T avail; |
|
4654 INTERNAL_SIZE_T fastavail; |
|
4655 int nblocks; |
|
4656 int nfastblocks; |
|
4657 |
|
4658 /* Ensure initialization */ |
|
4659 if (av->top == 0) malloc_consolidate(av); |
|
4660 |
|
4661 check_malloc_state(); |
|
4662 |
|
4663 /* Account for top */ |
|
4664 avail = chunksize(av->top); |
|
4665 nblocks = 1; /* top always exists */ |
|
4666 |
|
4667 /* traverse fastbins */ |
|
4668 nfastblocks = 0; |
|
4669 fastavail = 0; |
|
4670 |
|
4671 for (i = 0; i < (int)NFASTBINS; ++i) { |
|
4672 for (p = av->fastbins[i]; p != 0; p = p->fd) { |
|
4673 ++nfastblocks; |
|
4674 fastavail += chunksize(p); |
|
4675 } |
|
4676 } |
|
4677 |
|
4678 avail += fastavail; |
|
4679 |
|
4680 /* traverse regular bins */ |
|
4681 for (i = 1; i < NBINS; ++i) { |
|
4682 b = bin_at(av, i); |
|
4683 for (p = last(b); p != b; p = p->bk) { |
|
4684 ++nblocks; |
|
4685 avail += chunksize(p); |
|
4686 } |
|
4687 } |
|
4688 |
|
4689 mi.smblks = nfastblocks; |
|
4690 mi.ordblks = nblocks; |
|
4691 mi.fordblks = avail; |
|
4692 mi.uordblks = av->sbrked_mem - avail; |
|
4693 mi.arena = av->sbrked_mem; |
|
4694 mi.hblks = av->n_mmaps; |
|
4695 mi.hblkhd = av->mmapped_mem; |
|
4696 mi.fsmblks = fastavail; |
|
4697 mi.keepcost = chunksize(av->top); |
|
4698 mi.usmblks = av->max_total_mem; |
|
4699 return mi; |
|
4700 } |
|
4701 |
|
4702 /* |
|
4703 ------------------------------ malloc_stats ------------------------------ |
|
4704 */ |
|
4705 |
|
4706 void mSTATs() |
|
4707 { |
|
4708 struct mallinfo mi = mALLINFo(); |
|
4709 |
|
4710 #ifdef WIN32 |
|
4711 { |
|
4712 CHUNK_SIZE_T free, reserved, committed; |
|
4713 vminfo (&free, &reserved, &committed); |
|
4714 fprintf(stderr, "free bytes = %10lu\n", |
|
4715 free); |
|
4716 fprintf(stderr, "reserved bytes = %10lu\n", |
|
4717 reserved); |
|
4718 fprintf(stderr, "committed bytes = %10lu\n", |
|
4719 committed); |
|
4720 } |
|
4721 #endif |
|
4722 |
|
4723 |
|
4724 fprintf(stderr, "max system bytes = %10lu\n", |
|
4725 (CHUNK_SIZE_T)(mi.usmblks)); |
|
4726 fprintf(stderr, "system bytes = %10lu\n", |
|
4727 (CHUNK_SIZE_T)(mi.arena + mi.hblkhd)); |
|
4728 fprintf(stderr, "in use bytes = %10lu\n", |
|
4729 (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd)); |
|
4730 |
|
4731 #ifdef WIN32 |
|
4732 { |
|
4733 CHUNK_SIZE_T kernel, user; |
|
4734 if (cpuinfo (TRUE, &kernel, &user)) { |
|
4735 fprintf(stderr, "kernel ms = %10lu\n", |
|
4736 kernel); |
|
4737 fprintf(stderr, "user ms = %10lu\n", |
|
4738 user); |
|
4739 } |
|
4740 } |
|
4741 #endif |
|
4742 } |
|
4743 |
|
4744 |
|
4745 /* |
|
4746 ------------------------------ mallopt ------------------------------ |
|
4747 */ |
|
4748 |
|
4749 #if __STD_C |
|
4750 int mALLOPt(int param_number, int value) |
|
4751 #else |
|
4752 int mALLOPt(param_number, value) int param_number; int value; |
|
4753 #endif |
|
4754 { |
|
4755 mstate av = get_malloc_state(); |
|
4756 /* Ensure initialization/consolidation */ |
|
4757 malloc_consolidate(av); |
|
4758 |
|
4759 switch(param_number) { |
|
4760 case M_MXFAST: |
|
4761 if (value >= 0 && value <= MAX_FAST_SIZE) { |
|
4762 set_max_fast(av, value); |
|
4763 return 1; |
|
4764 } |
|
4765 else |
|
4766 return 0; |
|
4767 |
|
4768 case M_TRIM_THRESHOLD: |
|
4769 av->trim_threshold = value; |
|
4770 return 1; |
|
4771 |
|
4772 case M_TOP_PAD: |
|
4773 av->top_pad = value; |
|
4774 return 1; |
|
4775 |
|
4776 case M_MMAP_THRESHOLD: |
|
4777 av->mmap_threshold = value; |
|
4778 return 1; |
|
4779 |
|
4780 case M_MMAP_MAX: |
|
4781 #if !HAVE_MMAP |
|
4782 if (value != 0) |
|
4783 return 0; |
|
4784 #endif |
|
4785 av->n_mmaps_max = value; |
|
4786 return 1; |
|
4787 |
|
4788 default: |
|
4789 return 0; |
|
4790 } |
|
4791 } |
|
4792 |
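/*
  Usage sketch, assuming the default public naming and the symbolic parameter
  names defined earlier in this file (M_MXFAST etc.).  Each call returns 1 on
  success and 0 if the parameter number or value is rejected:

    mallopt(M_MXFAST, 64);                 -- serve requests up to 64 bytes from fastbins
    mallopt(M_TRIM_THRESHOLD, 256 * 1024); -- trim top when at least 256K is unused
    mallopt(M_MMAP_THRESHOLD, 256 * 1024); -- mmap requests of 256K or more directly
*/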
|
4793 |
|
4794 /* |
|
4795 -------------------- Alternative MORECORE functions -------------------- |
|
4796 */ |
|
4797 |
|
4798 |
|
4799 /* |
|
4800 General Requirements for MORECORE. |
|
4801 |
|
4802 The MORECORE function must have the following properties: |
|
4803 |
|
4804 If MORECORE_CONTIGUOUS is false: |
|
4805 |
|
4806 * MORECORE must allocate in multiples of pagesize. It will |
|
4807 only be called with arguments that are multiples of pagesize. |
|
4808 |
|
4809 * MORECORE(0) must return an address that is at least |
|
4810 MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.) |
|
4811 |
|
4812   else (i.e. if MORECORE_CONTIGUOUS is true): |
|
4813 |
|
4814 * Consecutive calls to MORECORE with positive arguments |
|
4815 return increasing addresses, indicating that space has been |
|
4816 contiguously extended. |
|
4817 |
|
4818 * MORECORE need not allocate in multiples of pagesize. |
|
4819     Calls to MORECORE need not use arguments that are multiples of pagesize. |
|
4820 |
|
4821 * MORECORE need not page-align. |
|
4822 |
|
4823 In either case: |
|
4824 |
|
4825 * MORECORE may allocate more memory than requested. (Or even less, |
|
4826 but this will generally result in a malloc failure.) |
|
4827 |
|
4828 * MORECORE must not allocate memory when given argument zero, but |
|
4829 instead return one past the end address of memory from previous |
|
4830 nonzero call. This malloc does NOT call MORECORE(0) |
|
4831 until at least one call with positive arguments is made, so |
|
4832 the initial value returned is not important. |
|
4833 |
|
4834 * Even though consecutive calls to MORECORE need not return contiguous |
|
4835 addresses, it must be OK for malloc'ed chunks to span multiple |
|
4836 regions in those cases where they do happen to be contiguous. |
|
4837 |
|
4838 * MORECORE need not handle negative arguments -- it may instead |
|
4839 just return MORECORE_FAILURE when given negative arguments. |
|
4840 Negative arguments are always multiples of pagesize. MORECORE |
|
4841 must not misinterpret negative args as large positive unsigned |
|
4842     args. You can prevent all such calls from occurring by defining |
|
4843     MORECORE_CANNOT_TRIM. |
|
4844 |
|
4845 There is some variation across systems about the type of the |
|
4846 argument to sbrk/MORECORE. If size_t is unsigned, then it cannot |
|
4847 actually be size_t, because sbrk supports negative args, so it is |
|
4848 normally the signed type of the same width as size_t (sometimes |
|
4849 declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much |
|
4850 matter though. Internally, we use "long" as arguments, which should |
|
4851 work across all reasonable possibilities. |
|
4852 |
|
4853 Additionally, if MORECORE ever returns failure for a positive |
|
4854 request, and HAVE_MMAP is true, then mmap is used as a noncontiguous |
|
4855 system allocator. This is a useful backup strategy for systems with |
|
4856 holes in address spaces -- in this case sbrk cannot contiguously |
|
4857 expand the heap, but mmap may be able to map noncontiguous space. |
|
4858 |
|
4859 If you'd like mmap to ALWAYS be used, you can define MORECORE to be |
|
4860 a function that always returns MORECORE_FAILURE. |
|
4861 |
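  For instance, a minimal sketch (it assumes the default MORECORE_FAILURE
  value; the name failing_morecore is only illustrative, and the define
  belongs with the other compile-time options near the top of this file):

    static void* failing_morecore (long increment)
    {
      return (void*) MORECORE_FAILURE;
    }

    #define MORECORE failing_morecore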
|
4862 Malloc only has limited ability to detect failures of MORECORE |
|
4863 to supply contiguous space when it says it can. In particular, |
|
4864 multithreaded programs that do not use locks may result in |
|
4865     race conditions across calls to MORECORE that result in gaps |
|
4866 that cannot be detected as such, and subsequent corruption. |
|
4867 |
|
4868 If you are using this malloc with something other than sbrk (or its |
|
4869 emulation) to supply memory regions, you probably want to set |
|
4870 MORECORE_CONTIGUOUS as false. As an example, here is a custom |
|
4871 allocator kindly contributed for pre-OSX macOS. It uses virtually |
|
4872 but not necessarily physically contiguous non-paged memory (locked |
|
4873 in, present and won't get swapped out). You can use it by |
|
4874 uncommenting this section, adding some #includes, and setting up the |
|
4875 appropriate defines above: |
|
4876 |
|
4877 #define MORECORE osMoreCore |
|
4878 #define MORECORE_CONTIGUOUS 0 |
|
4879 |
|
4880 There is also a shutdown routine that should somehow be called for |
|
4881 cleanup upon program exit. |
|
4882 |
|
4883 #define MAX_POOL_ENTRIES 100 |
|
4884 #define MINIMUM_MORECORE_SIZE (64 * 1024) |
|
4885 static int next_os_pool; |
|
4886 void *our_os_pools[MAX_POOL_ENTRIES]; |
|
4887 |
|
4888 void *osMoreCore(int size) |
|
4889 { |
|
4890 void *ptr = 0; |
|
4891 static void *sbrk_top = 0; |
|
4892 |
|
4893 if (size > 0) |
|
4894 { |
|
4895 if (size < MINIMUM_MORECORE_SIZE) |
|
4896 size = MINIMUM_MORECORE_SIZE; |
|
4897 if (CurrentExecutionLevel() == kTaskLevel) |
|
4898 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0); |
|
4899 if (ptr == 0) |
|
4900 { |
|
4901 return (void *) MORECORE_FAILURE; |
|
4902 } |
|
4903 // save ptrs so they can be freed during cleanup |
|
4904 our_os_pools[next_os_pool] = ptr; |
|
4905 next_os_pool++; |
|
4906 ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK); |
|
4907 sbrk_top = (char *) ptr + size; |
|
4908 return ptr; |
|
4909 } |
|
4910 else if (size < 0) |
|
4911 { |
|
4912 // we don't currently support shrink behavior |
|
4913 return (void *) MORECORE_FAILURE; |
|
4914 } |
|
4915 else |
|
4916 { |
|
4917 return sbrk_top; |
|
4918 } |
|
4919 } |
|
4920 |
|
4921 // cleanup any allocated memory pools |
|
4922 // called as last thing before shutting down driver |
|
4923 |
|
4924 void osCleanupMem(void) |
|
4925 { |
|
4926 void **ptr; |
|
4927 |
|
4928 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++) |
|
4929 if (*ptr) |
|
4930 { |
|
4931 PoolDeallocate(*ptr); |
|
4932 *ptr = 0; |
|
4933 } |
|
4934 } |
|
4935 |
|
4936 */ |
|
4937 |
|
4938 |
|
4939 /* |
|
4940 -------------------------------------------------------------- |
|
4941 |
|
4942 Emulation of sbrk for win32. |
|
4943 Donated by J. Walter <Walter@GeNeSys-e.de>. |
|
4944 For additional information about this code, and malloc on Win32, see |
|
4945 http://www.genesys-e.de/jwalter/ |
|
4946 */ |
|
4947 |
|
4948 |
|
4949 #ifdef WIN32 |
|
4950 |
|
4951 #ifdef _DEBUG |
|
4952 /* #define TRACE */ |
|
4953 #endif |
|
4954 |
|
4955 /* Support for USE_MALLOC_LOCK */ |
|
4956 #ifdef USE_MALLOC_LOCK |
|
4957 |
|
4958 /* Wait for spin lock */ |
|
4959 static int slwait (int *sl) { |
|
4960 while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0) |
|
4961 Sleep (0); |
|
4962 return 0; |
|
4963 } |
|
4964 |
|
4965 /* Release spin lock */ |
|
4966 static int slrelease (int *sl) { |
|
4967 InterlockedExchange (sl, 0); |
|
4968 return 0; |
|
4969 } |
|
4970 |
|
4971 #ifdef NEEDED |
|
4972 /* Spin lock for emulation code */ |
|
4973 static int g_sl; |
|
4974 #endif |
|
4975 |
|
4976 #endif /* USE_MALLOC_LOCK */ |
|
4977 |
|
4978 /* getpagesize for windows */ |
|
4979 static long getpagesize (void) { |
|
4980 static long g_pagesize = 0; |
|
4981 if (! g_pagesize) { |
|
4982 SYSTEM_INFO system_info; |
|
4983 GetSystemInfo (&system_info); |
|
4984 g_pagesize = system_info.dwPageSize; |
|
4985 } |
|
4986 return g_pagesize; |
|
4987 } |
|
4988 static long getregionsize (void) { |
|
4989 static long g_regionsize = 0; |
|
4990 if (! g_regionsize) { |
|
4991 SYSTEM_INFO system_info; |
|
4992 GetSystemInfo (&system_info); |
|
4993 g_regionsize = system_info.dwAllocationGranularity; |
|
4994 } |
|
4995 return g_regionsize; |
|
4996 } |
|
4997 |
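/*
  Note: dwPageSize is the hardware page size (typically 4K) and governs
  commit granularity below, while dwAllocationGranularity (typically 64K)
  is the granularity at which address space can be reserved with
  VirtualAlloc; the sbrk emulation tracks the two separately as
  g_pagesize and g_regionsize.
*/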
|
4998 /* A region list entry */ |
|
4999 typedef struct _region_list_entry { |
|
5000     void *top_allocated;                  /* sbrk-style allocation frontier within this region */ |
|
5001     void *top_committed;                  /* end of the pages committed so far */ |
|
5002     void *top_reserved;                   /* end of the reserved address range */ |
|
5003     long reserve_size;                    /* size of the reserved range, in bytes */ |
|
5004     struct _region_list_entry *previous;  /* previously created region */ |
|
5005 } region_list_entry; |
|
5006 |
|
5007 /* Allocate and link a region entry in the region list */ |
|
5008 static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) { |
|
5009 region_list_entry *next = |
|
5010 (region_list_entry *)HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry)); |
|
5011 if (! next) |
|
5012 return FALSE; |
|
5013 next->top_allocated = (char *) base_reserved; |
|
5014 next->top_committed = (char *) base_reserved; |
|
5015 next->top_reserved = (char *) base_reserved + reserve_size; |
|
5016 next->reserve_size = reserve_size; |
|
5017 next->previous = *last; |
|
5018 *last = next; |
|
5019 return TRUE; |
|
5020 } |
|
5021 /* Free and unlink the last region entry from the region list */ |
|
5022 static int region_list_remove (region_list_entry **last) { |
|
5023 region_list_entry *previous = (*last)->previous; |
|
5024     if (! HeapFree (GetProcessHeap (), 0, *last))  /* second arg is dwFlags, not a size */ |
|
5025 return FALSE; |
|
5026 *last = previous; |
|
5027 return TRUE; |
|
5028 } |
|
5029 |
|
5030 #define CEIL(size,to) (((size)+(to)-1)&~((to)-1)) |
|
5031 #define FLOOR(size,to) ((size)&~((to)-1)) |
|
5032 |
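/*
  Both macros assume that "to" is a power of two.  For example, with a 4K
  page size, CEIL (5000, 4096) yields 8192 and FLOOR (5000, 4096) yields 4096.
*/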
|
5033 #define SBRK_SCALE  0  /* commit and reserve in units of (system granularity << SBRK_SCALE) */ |
|
5034 /* #define SBRK_SCALE 1 */ |
|
5035 /* #define SBRK_SCALE 2 */ |
|
5036 /* #define SBRK_SCALE 4 */ |
|
5037 |
|
5038 /* sbrk for windows */ |
|
5039 static void *sbrk (long size) { |
|
5040 static long g_pagesize, g_my_pagesize; |
|
5041 static long g_regionsize, g_my_regionsize; |
|
5042 static region_list_entry *g_last; |
|
5043 void *result = (void *) MORECORE_FAILURE; |
|
5044 #ifdef TRACE |
|
5045 printf ("sbrk %d\n", size); |
|
5046 #endif |
|
5047 #if defined (USE_MALLOC_LOCK) && defined (NEEDED) |
|
5048 /* Wait for spin lock */ |
|
5049 slwait (&g_sl); |
|
5050 #endif |
|
5051 /* First time initialization */ |
|
5052 if (! g_pagesize) { |
|
5053 g_pagesize = getpagesize (); |
|
5054 g_my_pagesize = g_pagesize << SBRK_SCALE; |
|
5055 } |
|
5056 if (! g_regionsize) { |
|
5057 g_regionsize = getregionsize (); |
|
5058 g_my_regionsize = g_regionsize << SBRK_SCALE; |
|
5059 } |
|
5060 if (! g_last) { |
|
5061 if (! region_list_append (&g_last, 0, 0)) |
|
5062 goto sbrk_exit; |
|
5063 } |
|
5064 /* Assert invariants */ |
|
5065 assert (g_last); |
|
5066 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated && |
|
5067 g_last->top_allocated <= g_last->top_committed); |
|
5068 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed && |
|
5069 g_last->top_committed <= g_last->top_reserved && |
|
5070 (unsigned) g_last->top_committed % g_pagesize == 0); |
|
5071 assert ((unsigned) g_last->top_reserved % g_regionsize == 0); |
|
5072 assert ((unsigned) g_last->reserve_size % g_regionsize == 0); |
|
5073 /* Allocation requested? */ |
|
5074 if (size >= 0) { |
|
5075 /* Allocation size is the requested size */ |
|
5076 long allocate_size = size; |
|
5077 /* Compute the size to commit */ |
|
5078 long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed; |
|
5079 /* Do we reach the commit limit? */ |
|
5080 if (to_commit > 0) { |
|
5081 /* Round size to commit */ |
|
5082 long commit_size = CEIL (to_commit, g_my_pagesize); |
|
5083 /* Compute the size to reserve */ |
|
5084 long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved; |
|
5085 /* Do we reach the reserve limit? */ |
|
5086 if (to_reserve > 0) { |
|
5087 /* Compute the remaining size to commit in the current region */ |
|
5088 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed; |
|
5089 if (remaining_commit_size > 0) { |
|
5090 /* Assert preconditions */ |
|
5091 assert ((unsigned) g_last->top_committed % g_pagesize == 0); |
|
5092 assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); { |
|
5093 /* Commit this */ |
|
5094 void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size, |
|
5095 MEM_COMMIT, PAGE_READWRITE); |
|
5096 /* Check returned pointer for consistency */ |
|
5097 if (base_committed != g_last->top_committed) |
|
5098 goto sbrk_exit; |
|
5099 /* Assert postconditions */ |
|
5100 assert ((unsigned) base_committed % g_pagesize == 0); |
|
5101 #ifdef TRACE |
|
5102 printf ("Commit %p %d\n", base_committed, remaining_commit_size); |
|
5103 #endif |
|
5104                 /* Adjust the region's commit top */ |
|
5105 g_last->top_committed = (char *) base_committed + remaining_commit_size; |
|
5106 } |
|
5107 } { |
|
5108 /* Now we are going to search and reserve. */ |
|
5109 int contiguous = -1; |
|
5110 int found = FALSE; |
|
5111 MEMORY_BASIC_INFORMATION memory_info; |
|
5112 void *base_reserved; |
|
5113 long reserve_size; |
|
5114 do { |
|
5115 /* Assume contiguous memory */ |
|
5116 contiguous = TRUE; |
|
5117 /* Round size to reserve */ |
|
5118 reserve_size = CEIL (to_reserve, g_my_regionsize); |
|
5119 /* Start with the current region's top */ |
|
5120 memory_info.BaseAddress = g_last->top_reserved; |
|
5121 /* Assert preconditions */ |
|
5122 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0); |
|
5123 assert (0 < reserve_size && reserve_size % g_regionsize == 0); |
|
5124 while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) { |
|
5125 /* Assert postconditions */ |
|
5126 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0); |
|
5127 #ifdef TRACE |
|
5128 printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize, |
|
5129 memory_info.State == MEM_FREE ? "FREE": |
|
5130 (memory_info.State == MEM_RESERVE ? "RESERVED": |
|
5131 (memory_info.State == MEM_COMMIT ? "COMMITTED": "?"))); |
|
5132 #endif |
|
5133 /* Region is free, well aligned and big enough: we are done */ |
|
5134 if (memory_info.State == MEM_FREE && |
|
5135 (unsigned) memory_info.BaseAddress % g_regionsize == 0 && |
|
5136 memory_info.RegionSize >= (unsigned) reserve_size) { |
|
5137 found = TRUE; |
|
5138 break; |
|
5139 } |
|
5140 /* From now on we can't get contiguous memory! */ |
|
5141 contiguous = FALSE; |
|
5142 /* Recompute size to reserve */ |
|
5143 reserve_size = CEIL (allocate_size, g_my_regionsize); |
|
5144 memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize; |
|
5145 /* Assert preconditions */ |
|
5146 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0); |
|
5147 assert (0 < reserve_size && reserve_size % g_regionsize == 0); |
|
5148 } |
|
5149 /* Search failed? */ |
|
5150 if (! found) |
|
5151 goto sbrk_exit; |
|
5152 /* Assert preconditions */ |
|
5153 assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0); |
|
5154 assert (0 < reserve_size && reserve_size % g_regionsize == 0); |
|
5155 /* Try to reserve this */ |
|
5156 base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size, |
|
5157 MEM_RESERVE, PAGE_NOACCESS); |
|
5158 if (! base_reserved) { |
|
5159 int rc = GetLastError (); |
|
5160 if (rc != ERROR_INVALID_ADDRESS) |
|
5161 goto sbrk_exit; |
|
5162 } |
|
5163 /* A null pointer signals (hopefully) a race condition with another thread. */ |
|
5164 /* In this case, we try again. */ |
|
5165 } while (! base_reserved); |
|
5166 /* Check returned pointer for consistency */ |
|
5167 if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress) |
|
5168 goto sbrk_exit; |
|
5169 /* Assert postconditions */ |
|
5170 assert ((unsigned) base_reserved % g_regionsize == 0); |
|
5171 #ifdef TRACE |
|
5172 printf ("Reserve %p %d\n", base_reserved, reserve_size); |
|
5173 #endif |
|
5174 /* Did we get contiguous memory? */ |
|
5175 if (contiguous) { |
|
5176 long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated; |
|
5177 /* Adjust allocation size */ |
|
5178 allocate_size -= start_size; |
|
5179                 /* Adjust the region's allocation top */ |
|
5180 g_last->top_allocated = g_last->top_committed; |
|
5181 /* Recompute the size to commit */ |
|
5182 to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed; |
|
5183 /* Round size to commit */ |
|
5184 commit_size = CEIL (to_commit, g_my_pagesize); |
|
5185 } |
|
5186 /* Append the new region to the list */ |
|
5187 if (! region_list_append (&g_last, base_reserved, reserve_size)) |
|
5188 goto sbrk_exit; |
|
5189 /* Didn't we get contiguous memory? */ |
|
5190 if (! contiguous) { |
|
5191 /* Recompute the size to commit */ |
|
5192 to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed; |
|
5193 /* Round size to commit */ |
|
5194 commit_size = CEIL (to_commit, g_my_pagesize); |
|
5195 } |
|
5196 } |
|
5197 } |
|
5198 /* Assert preconditions */ |
|
5199 assert ((unsigned) g_last->top_committed % g_pagesize == 0); |
|
5200 assert (0 < commit_size && commit_size % g_pagesize == 0); { |
|
5201 /* Commit this */ |
|
5202 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size, |
|
5203 MEM_COMMIT, PAGE_READWRITE); |
|
5204 /* Check returned pointer for consistency */ |
|
5205 if (base_committed != g_last->top_committed) |
|
5206 goto sbrk_exit; |
|
5207 /* Assert postconditions */ |
|
5208 assert ((unsigned) base_committed % g_pagesize == 0); |
|
5209 #ifdef TRACE |
|
5210 printf ("Commit %p %d\n", base_committed, commit_size); |
|
5211 #endif |
|
5212             /* Adjust the region's commit top */ |
|
5213 g_last->top_committed = (char *) base_committed + commit_size; |
|
5214 } |
|
5215 } |
|
5216         /* Adjust the region's allocation top */ |
|
5217 g_last->top_allocated = (char *) g_last->top_allocated + allocate_size; |
|
5218 result = (char *) g_last->top_allocated - size; |
|
5219 /* Deallocation requested? */ |
|
5220 } else if (size < 0) { |
|
5221 long deallocate_size = - size; |
|
5222 /* As long as we have a region to release */ |
|
5223 while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) { |
|
5224 /* Get the size to release */ |
|
5225 long release_size = g_last->reserve_size; |
|
5226 /* Get the base address */ |
|
5227 void *base_reserved = (char *) g_last->top_reserved - release_size; |
|
5228 /* Assert preconditions */ |
|
5229 assert ((unsigned) base_reserved % g_regionsize == 0); |
|
5230 assert (0 < release_size && release_size % g_regionsize == 0); { |
|
5231 /* Release this */ |
|
5232 int rc = VirtualFree (base_reserved, 0, |
|
5233 MEM_RELEASE); |
|
5234 /* Check returned code for consistency */ |
|
5235 if (! rc) |
|
5236 goto sbrk_exit; |
|
5237 #ifdef TRACE |
|
5238 printf ("Release %p %d\n", base_reserved, release_size); |
|
5239 #endif |
|
5240 } |
|
5241 /* Adjust deallocation size */ |
|
5242 deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved; |
|
5243 /* Remove the old region from the list */ |
|
5244 if (! region_list_remove (&g_last)) |
|
5245 goto sbrk_exit; |
|
5246 } { |
|
5247 /* Compute the size to decommit */ |
|
5248 long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size); |
|
5249 if (to_decommit >= g_my_pagesize) { |
|
5250 /* Compute the size to decommit */ |
|
5251 long decommit_size = FLOOR (to_decommit, g_my_pagesize); |
|
5252 /* Compute the base address */ |
|
5253 void *base_committed = (char *) g_last->top_committed - decommit_size; |
|
5254 /* Assert preconditions */ |
|
5255 assert ((unsigned) base_committed % g_pagesize == 0); |
|
5256 assert (0 < decommit_size && decommit_size % g_pagesize == 0); { |
|
5257 /* Decommit this */ |
|
5258 int rc = VirtualFree ((char *) base_committed, decommit_size, |
|
5259 MEM_DECOMMIT); |
|
5260 /* Check returned code for consistency */ |
|
5261 if (! rc) |
|
5262 goto sbrk_exit; |
|
5263 #ifdef TRACE |
|
5264 printf ("Decommit %p %d\n", base_committed, decommit_size); |
|
5265 #endif |
|
5266 } |
|
5267                 /* Adjust deallocation size and the region's commit and allocation top */ |
|
5268 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed; |
|
5269 g_last->top_committed = base_committed; |
|
5270 g_last->top_allocated = base_committed; |
|
5271 } |
|
5272 } |
|
5273         /* Adjust the region's allocation top */ |
|
5274 g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size; |
|
5275 /* Check for underflow */ |
|
5276 if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated || |
|
5277 g_last->top_allocated > g_last->top_committed) { |
|
5278             /* Adjust the region's allocation top */ |
|
5279 g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size; |
|
5280 goto sbrk_exit; |
|
5281 } |
|
5282 result = g_last->top_allocated; |
|
5283 } |
|
5284 /* Assert invariants */ |
|
5285 assert (g_last); |
|
5286 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated && |
|
5287 g_last->top_allocated <= g_last->top_committed); |
|
5288 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed && |
|
5289 g_last->top_committed <= g_last->top_reserved && |
|
5290 (unsigned) g_last->top_committed % g_pagesize == 0); |
|
5291 assert ((unsigned) g_last->top_reserved % g_regionsize == 0); |
|
5292 assert ((unsigned) g_last->reserve_size % g_regionsize == 0); |
|
5293 |
|
5294 sbrk_exit: |
|
5295 #if defined (USE_MALLOC_LOCK) && defined (NEEDED) |
|
5296 /* Release spin lock */ |
|
5297 slrelease (&g_sl); |
|
5298 #endif |
|
5299 return result; |
|
5300 } |
|
5301 |
|
5302 /* mmap for windows: the prot, type, handle and arg parameters are accepted only for call compatibility and are ignored */ |
|
5303 static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) { |
|
5304 static long g_pagesize; |
|
5305 static long g_regionsize; |
|
5306 #ifdef TRACE |
|
5307 printf ("mmap %d\n", size); |
|
5308 #endif |
|
5309 #if defined (USE_MALLOC_LOCK) && defined (NEEDED) |
|
5310 /* Wait for spin lock */ |
|
5311 slwait (&g_sl); |
|
5312 #endif |
|
5313 /* First time initialization */ |
|
5314 if (! g_pagesize) |
|
5315 g_pagesize = getpagesize (); |
|
5316 if (! g_regionsize) |
|
5317 g_regionsize = getregionsize (); |
|
5318 /* Assert preconditions */ |
|
5319 assert ((unsigned) ptr % g_regionsize == 0); |
|
5320 assert (size % g_pagesize == 0); |
|
5321 /* Allocate this */ |
|
5322 ptr = VirtualAlloc (ptr, size, |
|
5323 MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE); |
|
5324 if (! ptr) { |
|
5325 ptr = (void *) MORECORE_FAILURE; |
|
5326 goto mmap_exit; |
|
5327 } |
|
5328 /* Assert postconditions */ |
|
5329 assert ((unsigned) ptr % g_regionsize == 0); |
|
5330 #ifdef TRACE |
|
5331 printf ("Commit %p %d\n", ptr, size); |
|
5332 #endif |
|
5333 mmap_exit: |
|
5334 #if defined (USE_MALLOC_LOCK) && defined (NEEDED) |
|
5335 /* Release spin lock */ |
|
5336 slrelease (&g_sl); |
|
5337 #endif |
|
5338 return ptr; |
|
5339 } |
|
5340 |
|
5341 /* munmap for windows */ |
|
5342 static long munmap (void *ptr, long size) { |
|
5343 static long g_pagesize; |
|
5344 static long g_regionsize; |
|
5345 int rc = MUNMAP_FAILURE; |
|
5346 #ifdef TRACE |
|
5347 printf ("munmap %p %d\n", ptr, size); |
|
5348 #endif |
|
5349 #if defined (USE_MALLOC_LOCK) && defined (NEEDED) |
|
5350 /* Wait for spin lock */ |
|
5351 slwait (&g_sl); |
|
5352 #endif |
|
5353 /* First time initialization */ |
|
5354 if (! g_pagesize) |
|
5355 g_pagesize = getpagesize (); |
|
5356 if (! g_regionsize) |
|
5357 g_regionsize = getregionsize (); |
|
5358 /* Assert preconditions */ |
|
5359 assert ((unsigned) ptr % g_regionsize == 0); |
|
5360 assert (size % g_pagesize == 0); |
|
5361 /* Free this */ |
|
5362 if (! VirtualFree (ptr, 0, |
|
5363 MEM_RELEASE)) |
|
5364 goto munmap_exit; |
|
5365 rc = 0; |
|
5366 #ifdef TRACE |
|
5367 printf ("Release %p %d\n", ptr, size); |
|
5368 #endif |
|
5369 munmap_exit: |
|
5370 #if defined (USE_MALLOC_LOCK) && defined (NEEDED) |
|
5371 /* Release spin lock */ |
|
5372 slrelease (&g_sl); |
|
5373 #endif |
|
5374 return rc; |
|
5375 } |
|
5376 |
|
5377 static void vminfo (CHUNK_SIZE_T *free, CHUNK_SIZE_T *reserved, CHUNK_SIZE_T *committed) { |
|
5378 MEMORY_BASIC_INFORMATION memory_info; |
|
5379 memory_info.BaseAddress = 0; |
|
5380 *free = *reserved = *committed = 0; |
|
5381 while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) { |
|
5382 switch (memory_info.State) { |
|
5383 case MEM_FREE: |
|
5384 *free += memory_info.RegionSize; |
|
5385 break; |
|
5386 case MEM_RESERVE: |
|
5387 *reserved += memory_info.RegionSize; |
|
5388 break; |
|
5389 case MEM_COMMIT: |
|
5390 *committed += memory_info.RegionSize; |
|
5391 break; |
|
5392 } |
|
5393 memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize; |
|
5394 } |
|
5395 } |
|
5396 |
|
5397 #ifdef Q_OS_WINCE |
|
5398 |
|
5399 #include <tlhelp32.h> |
|
5400 |
|
5401 static int cpuinfo (int whole, CHUNK_SIZE_T *kernel, CHUNK_SIZE_T *user) { |
|
5402 if (whole) { |
|
5403 __int64 totalKernel64 = 0; |
|
5404 __int64 totalUser64 = 0; |
|
5405 |
|
5406 HANDLE threadSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, GetCurrentProcessId()); |
|
5407 if (threadSnapshot == INVALID_HANDLE_VALUE) { |
|
5408 *kernel = 0; |
|
5409 *user = 0; |
|
5410 return FALSE; |
|
5411 } |
|
5412 |
|
5413 THREADENTRY32 threadEntry; |
|
5414 threadEntry.dwSize = sizeof(THREADENTRY32); |
|
5415 |
|
5416 if (!Thread32First(threadSnapshot, &threadEntry)) { |
|
5417 *kernel = 0; |
|
5418 *user = 0; |
|
5419 return FALSE; |
|
5420 } |
|
5421 |
|
5422 do { |
|
5423 __int64 creation64, exit64, kernel64, user64; |
|
5424 HANDLE threadHandle = OpenProcess(0, false, threadEntry.th32ThreadID); |
|
5425 |
|
5426 int rc = GetThreadTimes(threadHandle, |
|
5427 (FILETIME *) &creation64, |
|
5428 (FILETIME *) &exit64, |
|
5429 (FILETIME *) &kernel64, |
|
5430 (FILETIME *) &user64); |
|
5431 |
|
5432 CloseHandle(threadHandle); |
|
5433 |
|
5434 if (!rc) |
|
5435 continue; |
|
5436 |
|
5437 totalKernel64 += kernel64; |
|
5438 totalUser64 += user64; |
|
5439 } while (Thread32Next(threadSnapshot, &threadEntry)); |
|
5440 |
|
5441 CloseHandle(threadSnapshot); |
|
5442 |
|
5443 *kernel = (CHUNK_SIZE_T) (totalKernel64 / 10000); |
|
5444 *user = (CHUNK_SIZE_T) (totalUser64 / 10000); |
|
5445 return TRUE; |
|
5446 } else { |
|
5447 __int64 creation64, exit64, kernel64, user64; |
|
5448 int rc = GetThreadTimes (GetCurrentThread (), |
|
5449 (FILETIME *) &creation64, |
|
5450 (FILETIME *) &exit64, |
|
5451 (FILETIME *) &kernel64, |
|
5452 (FILETIME *) &user64); |
|
5453 if (! rc) { |
|
5454 *kernel = 0; |
|
5455 *user = 0; |
|
5456 return FALSE; |
|
5457 } |
|
5458 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000); |
|
5459 *user = (CHUNK_SIZE_T) (user64 / 10000); |
|
5460 return TRUE; |
|
5461 } |
|
5462 } |
|
5463 #else |
|
5464 static int cpuinfo (int whole, CHUNK_SIZE_T *kernel, CHUNK_SIZE_T *user) { |
|
5465 if (whole) { |
|
5466 __int64 creation64, exit64, kernel64, user64; |
|
5467 int rc = GetProcessTimes (GetCurrentProcess (), |
|
5468 (FILETIME *) &creation64, |
|
5469 (FILETIME *) &exit64, |
|
5470 (FILETIME *) &kernel64, |
|
5471 (FILETIME *) &user64); |
|
5472 if (! rc) { |
|
5473 *kernel = 0; |
|
5474 *user = 0; |
|
5475 return FALSE; |
|
5476 } |
|
5477 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000); |
|
5478 *user = (CHUNK_SIZE_T) (user64 / 10000); |
|
5479 return TRUE; |
|
5480 } else { |
|
5481 __int64 creation64, exit64, kernel64, user64; |
|
5482 int rc = GetThreadTimes (GetCurrentThread (), |
|
5483 (FILETIME *) &creation64, |
|
5484 (FILETIME *) &exit64, |
|
5485 (FILETIME *) &kernel64, |
|
5486 (FILETIME *) &user64); |
|
5487 if (! rc) { |
|
5488 *kernel = 0; |
|
5489 *user = 0; |
|
5490 return FALSE; |
|
5491 } |
|
5492 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000); |
|
5493 *user = (CHUNK_SIZE_T) (user64 / 10000); |
|
5494 return TRUE; |
|
5495 } |
|
5496 } |
|
5497 #endif |
|
5498 |
|
5499 #endif /* WIN32 */ |
|
5500 |
|
5501 /* ------------------------------------------------------------ |
|
5502 History: |
|
5503 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee) |
|
5504 * Fix malloc_state bitmap array misdeclaration |
|
5505 |
|
5506 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee) |
|
5507 * Allow tuning of FIRST_SORTED_BIN_SIZE |
|
5508 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte. |
|
5509 * Better detection and support for non-contiguousness of MORECORE. |
|
5510 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger |
|
5511     * Bypass most of malloc if no frees. Thanks to Emery Berger. |
|
5512     * Fix freeing of old top non-contiguous chunk in sysmalloc. |
|
5513 * Raised default trim and map thresholds to 256K. |
|
5514 * Fix mmap-related #defines. Thanks to Lubos Lunak. |
|
5515 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield. |
|
5516 * Branch-free bin calculation |
|
5517 * Default trim and mmap thresholds now 256K. |
|
5518 |
|
5519 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee) |
|
5520 * Introduce independent_comalloc and independent_calloc. |
|
5521 Thanks to Michael Pachos for motivation and help. |
|
5522 * Make optional .h file available |
|
5523 * Allow > 2GB requests on 32bit systems. |
|
5524 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>. |
|
5525 Thanks also to Andreas Mueller <a.mueller at paradatec.de>, |
|
5526 and Anonymous. |
|
5527 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for |
|
5528 helping test this.) |
|
5529 * memalign: check alignment arg |
|
5530 * realloc: don't try to shift chunks backwards, since this |
|
5531 leads to more fragmentation in some programs and doesn't |
|
5532 seem to help in any others. |
|
5533 * Collect all cases in malloc requiring system memory into sYSMALLOc |
|
5534 * Use mmap as backup to sbrk |
|
5535 * Place all internal state in malloc_state |
|
5536 * Introduce fastbins (although similar to 2.5.1) |
|
5537 * Many minor tunings and cosmetic improvements |
|
5538 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK |
|
5539 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS |
|
5540 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others. |
|
5541 * Include errno.h to support default failure action. |
|
5542 |
|
5543 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee) |
|
5544 * return null for negative arguments |
|
5545 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com> |
|
5546 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h' |
|
5547 (e.g. WIN32 platforms) |
|
5548 * Cleanup header file inclusion for WIN32 platforms |
|
5549 * Cleanup code to avoid Microsoft Visual C++ compiler complaints |
|
5550 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing |
|
5551 memory allocation routines |
|
5552 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work) |
|
5553 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to |
|
5554 usage of 'assert' in non-WIN32 code |
|
5555 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to |
|
5556 avoid infinite loop |
|
5557 * Always call 'fREe()' rather than 'free()' |
|
5558 |
|
5559 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee) |
|
5560 * Fixed ordering problem with boundary-stamping |
|
5561 |
|
5562 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee) |
|
5563 * Added pvalloc, as recommended by H.J. Liu |
|
5564 * Added 64bit pointer support mainly from Wolfram Gloger |
|
5565 * Added anonymously donated WIN32 sbrk emulation |
|
5566 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen |
|
5567 * malloc_extend_top: fix mask error that caused wastage after |
|
5568 foreign sbrks |
|
5569 * Add linux mremap support code from HJ Liu |
|
5570 |
|
5571 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee) |
|
5572 * Integrated most documentation with the code. |
|
5573 * Add support for mmap, with help from |
|
5574 Wolfram Gloger (Gloger@lrz.uni-muenchen.de). |
|
5575 * Use last_remainder in more cases. |
|
5576 * Pack bins using idea from colin@nyx10.cs.du.edu |
|
5577     * Use ordered bins instead of best-fit threshold |
|
5578 * Eliminate block-local decls to simplify tracing and debugging. |
|
5579 * Support another case of realloc via move into top |
|
5580     * Fix error occurring when initial sbrk_base not word-aligned. |
|
5581 * Rely on page size for units instead of SBRK_UNIT to |
|
5582 avoid surprises about sbrk alignment conventions. |
|
5583 * Add mallinfo, mallopt. Thanks to Raymond Nijssen |
|
5584 (raymond@es.ele.tue.nl) for the suggestion. |
|
5585 * Add `pad' argument to malloc_trim and top_pad mallopt parameter. |
|
5586 * More precautions for cases where other routines call sbrk, |
|
5587 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de). |
|
5588 * Added macros etc., allowing use in linux libc from |
|
5589 H.J. Lu (hjl@gnu.ai.mit.edu) |
|
5590 * Inverted this history list |
|
5591 |
|
5592 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee) |
|
5593 * Re-tuned and fixed to behave more nicely with V2.6.0 changes. |
|
5594 * Removed all preallocation code since under current scheme |
|
5595 the work required to undo bad preallocations exceeds |
|
5596 the work saved in good cases for most test programs. |
|
5597 * No longer use return list or unconsolidated bins since |
|
5598 no scheme using them consistently outperforms those that don't |
|
5599 given above changes. |
|
5600 * Use best fit for very large chunks to prevent some worst-cases. |
|
5601 * Added some support for debugging |
|
5602 |
|
5603 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee) |
|
5604 * Removed footers when chunks are in use. Thanks to |
|
5605 Paul Wilson (wilson@cs.texas.edu) for the suggestion. |
|
5606 |
|
5607 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee) |
|
5608 * Added malloc_trim, with help from Wolfram Gloger |
|
5609 (wmglo@Dent.MED.Uni-Muenchen.DE). |
|
5610 |
|
5611 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g) |
|
5612 |
|
5613 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g) |
|
5614 * realloc: try to expand in both directions |
|
5615 * malloc: swap order of clean-bin strategy; |
|
5616 * realloc: only conditionally expand backwards |
|
5617 * Try not to scavenge used bins |
|
5618 * Use bin counts as a guide to preallocation |
|
5619 * Occasionally bin return list chunks in first scan |
|
5620 * Add a few optimizations from colin@nyx10.cs.du.edu |
|
5621 |
|
5622 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g) |
|
5623 * faster bin computation & slightly different binning |
|
5624 * merged all consolidations to one part of malloc proper |
|
5625 (eliminating old malloc_find_space & malloc_clean_bin) |
|
5626 * Scan 2 returns chunks (not just 1) |
|
5627 * Propagate failure in realloc if malloc returns 0 |
|
5628 * Add stuff to allow compilation on non-ANSI compilers |
|
5629 from kpv@research.att.com |
|
5630 |
|
5631 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu) |
|
5632 * removed potential for odd address access in prev_chunk |
|
5633 * removed dependency on getpagesize.h |
|
5634 * misc cosmetics and a bit more internal documentation |
|
5635 * anticosmetics: mangled names in macros to evade debugger strangeness |
|
5636 * tested on sparc, hp-700, dec-mips, rs6000 |
|
5637 with gcc & native cc (hp, dec only) allowing |
|
5638 Detlefs & Zorn comparison study (in SIGPLAN Notices.) |
|
5639 |
|
5640 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu) |
|
5641 * Based loosely on libg++-1.2X malloc. (It retains some of the overall |
|
5642 structure of old version, but most details differ.) |
|
5643 |
|
5644 */ |