webengine/osswebengine/MemoryManager/Src/fast_malloc.cpp
       
     1 /*
       
     2   This is a version (aka dlmalloc) of malloc/free/realloc written by
       
     3   Doug Lea and released to the public domain, as explained at
       
     4   http://creativecommons.org/licenses/publicdomain.  Send questions,
       
     5   comments, complaints, performance data, etc to dl@cs.oswego.edu
       
     6 
       
     7 * Version 2.8.2 Sun Jun 12 16:05:14 2005  Doug Lea  (dl at gee)
       
     8 
       
     9    Note: There may be an updated version of this malloc obtainable at
       
    10            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
       
    11          Check before installing!
       
    12 
       
    13 * Quickstart
       
    14 
       
    15   This library is all in one file to simplify the most common usage:
       
    16   ftp it, compile it (-O3), and link it into another program. All of
       
    17   the compile-time options default to reasonable values for use on
       
    18   most platforms.  You might later want to step through various
       
    19   compile-time and dynamic tuning options.
       
    20 
       
    21   For convenience, an include file for code using this malloc is at:
       
    22      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.0.h
       
    23   You don't really need this .h file unless you call functions not
       
    24   defined in your system include files.  The .h file contains only the
       
    25   excerpts from this file needed for using this malloc on ANSI C/C++
       
    26   systems, so long as you haven't changed compile-time options about
       
    27   naming and tuning parameters.  If you do, then you can create your
       
    28   own malloc.h that does include all settings by cutting at the point
       
    29   indicated below. Note that you may already by default be using a C
       
    30   library containing a malloc that is based on some version of this
       
    31   malloc (for example in linux). You might still want to use the one
       
    32   in this file to customize settings or to avoid overheads associated
       
    33   with library versions.
       
    34 
       
    35 * Vital statistics:
       
    36 
       
    37   Supported pointer/size_t representation:       4 or 8 bytes
       
    38        size_t MUST be an unsigned type of the same width as
       
    39        pointers. (If you are using an ancient system that declares
       
    40        size_t as a signed type, or need it to be a different width
       
    41        than pointers, you can use a previous release of this malloc
       
    42        (e.g. 2.7.2) supporting these.)
       
    43 
       
    44   Alignment:                                     8 bytes (default)
       
    45        This suffices for nearly all current machines and C compilers.
       
    46        However, you can define MALLOC_ALIGNMENT to be wider than this
       
     47        if necessary (up to 128 bytes), at the expense of using more space.
       
    48 
       
    49   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
       
    50                                           8 or 16 bytes (if 8byte sizes)
       
    51        Each malloced chunk has a hidden word of overhead holding size
       
    52        and status information, and additional cross-check word
       
    53        if FOOTERS is defined.
       
    54 
       
    55   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
       
    56                           8-byte ptrs:  32 bytes    (including overhead)
       
    57 
       
    58        Even a request for zero bytes (i.e., malloc(0)) returns a
       
    59        pointer to something of the minimum allocatable size.
       
    60        The maximum overhead wastage (i.e., number of extra bytes
       
     61        allocated beyond what was requested in malloc) is less than or equal
       
    62        to the minimum size, except for requests >= mmap_threshold that
       
    63        are serviced via mmap(), where the worst case wastage is about
       
    64        32 bytes plus the remainder from a system page (the minimal
       
    65        mmap unit); typically 4096 or 8192 bytes.
       
    66 
       
    67   Security: static-safe; optionally more or less
       
    68        The "security" of malloc refers to the ability of malicious
       
    69        code to accentuate the effects of errors (for example, freeing
       
    70        space that is not currently malloc'ed or overwriting past the
       
    71        ends of chunks) in code that calls malloc.  This malloc
       
    72        guarantees not to modify any memory locations below the base of
       
    73        heap, i.e., static variables, even in the presence of usage
       
    74        errors.  The routines additionally detect most improper frees
       
    75        and reallocs.  All this holds as long as the static bookkeeping
       
    76        for malloc itself is not corrupted by some other means.  This
       
    77        is only one aspect of security -- these checks do not, and
       
    78        cannot, detect all possible programming errors.
       
    79 
       
    80        If FOOTERS is defined nonzero, then each allocated chunk
       
    81        carries an additional check word to verify that it was malloced
       
    82        from its space.  These check words are the same within each
       
    83        execution of a program using malloc, but differ across
       
    84        executions, so externally crafted fake chunks cannot be
       
    85        freed. This improves security by rejecting frees/reallocs that
       
    86        could corrupt heap memory, in addition to the checks preventing
       
    87        writes to statics that are always on.  This may further improve
       
    88        security at the expense of time and space overhead.  (Note that
       
    89        FOOTERS may also be worth using with MSPACES.)
       
    90 
       
    91        By default detected errors cause the program to abort (calling
       
    92        "abort()"). You can override this to instead proceed past
       
    93        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       
    94        has no effect, and a malloc that encounters a bad address
       
    95        caused by user overwrites will ignore the bad address by
       
    96        dropping pointers and indices to all known memory. This may
       
    97        be appropriate for programs that should continue if at all
       
    98        possible in the face of programming errors, although they may
       
    99        run out of memory because dropped memory is never reclaimed.
       
   100 
       
   101        If you don't like either of these options, you can define
       
   102        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       
    103   else. And if you are sure that your program using malloc has
       
   104        no errors or vulnerabilities, you can define INSECURE to 1,
       
   105        which might (or might not) provide a small performance improvement.
       
   106 
       
   107   Thread-safety: NOT thread-safe unless USE_LOCKS defined
       
   108        When USE_LOCKS is defined, each public call to malloc, free,
       
   109        etc is surrounded with either a pthread mutex or a win32
       
   110        spinlock (depending on WIN32). This is not especially fast, and
       
   111        can be a major bottleneck.  It is designed only to provide
       
   112        minimal protection in concurrent environments, and to provide a
       
   113        basis for extensions.  If you are using malloc in a concurrent
       
   114        program, consider instead using ptmalloc, which is derived from
       
   115        a version of this malloc. (See http://www.malloc.de).
       
   116 
       
   117   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       
   118        This malloc can use unix sbrk or any emulation (invoked using
       
   119        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       
   120        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       
   121        memory.  On most unix systems, it tends to work best if both
       
   122        MORECORE and MMAP are enabled.  On Win32, it uses emulations
       
   123        based on VirtualAlloc. It also uses common C library functions
       
   124        like memset.
       
   125 
       
   126   Compliance: I believe it is compliant with the Single Unix Specification
       
   127        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       
   128        others as well.
       
   129 
       
   130 * Overview of algorithms
       
   131 
       
   132   This is not the fastest, most space-conserving, most portable, or
       
   133   most tunable malloc ever written. However it is among the fastest
       
   134   while also being among the most space-conserving, portable and
       
   135   tunable.  Consistent balance across these factors results in a good
       
   136   general-purpose allocator for malloc-intensive programs.
       
   137 
       
   138   In most ways, this malloc is a best-fit allocator. Generally, it
       
   139   chooses the best-fitting existing chunk for a request, with ties
       
   140   broken in approximately least-recently-used order. (This strategy
       
   141   normally maintains low fragmentation.) However, for requests less
       
    142   than 256 bytes, it deviates from best-fit when there is not an
       
   143   exactly fitting available chunk by preferring to use space adjacent
       
   144   to that used for the previous small request, as well as by breaking
       
   145   ties in approximately most-recently-used order. (These enhance
       
   146   locality of series of small allocations.)  And for very large requests
       
   147   (>= 256Kb by default), it relies on system memory mapping
       
   148   facilities, if supported.  (This helps avoid carrying around and
       
   149   possibly fragmenting memory used only for large chunks.)
       
   150 
       
   151   All operations (except malloc_stats and mallinfo) have execution
       
   152   times that are bounded by a constant factor of the number of bits in
       
   153   a size_t, not counting any clearing in calloc or copying in realloc,
       
   154   or actions surrounding MORECORE and MMAP that have times
       
   155   proportional to the number of non-contiguous regions returned by
       
   156   system allocation routines, which is often just 1.
       
   157 
       
   158   The implementation is not very modular and seriously overuses
       
   159   macros. Perhaps someday all C compilers will do as good a job
       
   160   inlining modular code as can now be done by brute-force expansion,
       
    161   but for now, enough of them seem not to.
       
   162 
       
   163   Some compilers issue a lot of warnings about code that is
       
   164   dead/unreachable only on some platforms, and also about intentional
       
   165   uses of negation on unsigned types. All known cases of each can be
       
   166   ignored.
       
   167 
       
   168   For a longer but out of date high-level description, see
       
   169      http://gee.cs.oswego.edu/dl/html/malloc.html
       
   170 
       
   171 * MSPACES
       
   172   If MSPACES is defined, then in addition to malloc, free, etc.,
       
   173   this file also defines mspace_malloc, mspace_free, etc. These
       
   174   are versions of malloc routines that take an "mspace" argument
       
   175   obtained using create_mspace, to control all internal bookkeeping.
       
   176   If ONLY_MSPACES is defined, only these versions are compiled.
       
   177   So if you would like to use this allocator for only some allocations,
       
   178   and your system malloc for others, you can compile with
       
   179   ONLY_MSPACES and then do something like...
       
   180     static mspace mymspace = create_mspace(0,0); // for example
       
   181     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
       
   182 
       
   183   (Note: If you only need one instance of an mspace, you can instead
       
   184   use "USE_DL_PREFIX" to relabel the global malloc.)
       
   185 
       
   186   You can similarly create thread-local allocators by storing
       
   187   mspaces as thread-locals. For example:
       
   188     static __thread mspace tlms = 0;
       
   189     void*  tlmalloc(size_t bytes) {
       
   190       if (tlms == 0) tlms = create_mspace(0, 0);
       
   191       return mspace_malloc(tlms, bytes);
       
   192     }
       
   193     void  tlfree(void* mem) { mspace_free(tlms, mem); }
       
   194 
       
   195   Unless FOOTERS is defined, each mspace is completely independent.
       
   196   You cannot allocate from one and free to another (although
       
   197   conformance is only weakly checked, so usage errors are not always
       
   198   caught). If FOOTERS is defined, then each chunk carries around a tag
       
   199   indicating its originating mspace, and frees are directed to their
       
   200   originating spaces.
       
   201 
       
   202  -------------------------  Compile-time options ---------------------------
       
   203 
       
   204 WIN32                    default: defined if _WIN32 defined
       
   205   Defining WIN32 sets up defaults for MS environment and compilers.
       
   206   Otherwise defaults are for unix.
       
   207 
       
   208 MALLOC_ALIGNMENT         default: 8
       
   209   Controls the minimum alignment for malloc'ed chunks.  It must be a
       
   210   power of two and at least 8, even on machines for which smaller
       
   211   alignments would suffice. It may be defined as larger than this
       
   212   though. Note however that code and data structures are optimized for
       
   213   the case of 8-byte alignment.
       
   214 
       
   215 MSPACES                  default: 0 (false)
       
   216   If true, compile in support for independent allocation spaces.
       
   217   This is only supported if HAVE_MMAP is true.
       
   218 
       
   219 ONLY_MSPACES             default: 0 (false)
       
   220   If true, only compile in mspace versions, not regular versions.
       
   221 
       
   222 USE_LOCKS                default: 0 (false)
       
   223   Causes each call to each public routine to be surrounded with
       
   224   pthread or WIN32 mutex lock/unlock. (If set true, this can be
       
   225   overridden on a per-mspace basis for mspace versions.)
       
   226 
       
   227 FOOTERS                  default: 0
       
   228   If true, provide extra checking and dispatching by placing
       
   229   information in the footers of allocated chunks. This adds
       
   230   space and time overhead.
       
   231 
       
   232 INSECURE                 default: 0
       
   233   If true, omit checks for usage errors and heap space overwrites.
       
   234 
       
   235 USE_DL_PREFIX            default: NOT defined
       
   236   Causes compiler to prefix all public routines with the string 'dl'.
       
   237   This can be useful when you only want to use this malloc in one part
       
   238   of a program, using your regular system malloc elsewhere.
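          For illustration only, a minimal sketch (this assumes the allocator is
          compiled with USE_DL_PREFIX defined, which is not the default in this
          file, so the public entry points keep the 'dl' prefix):

            // With -DUSE_DL_PREFIX the entry points are dlmalloc, dlfree, etc.,
            // so they cannot collide with the C library's malloc/free.
            void* p = dlmalloc(100);   // served by this allocator
            void* q = malloc(100);     // served by the system C library
            dlfree(p);
            free(q);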
       
   239 
       
   240 ABORT                    default: defined as abort()
       
   241   Defines how to abort on failed checks.  On most systems, a failed
       
   242   check cannot die with an "assert" or even print an informative
       
   243   message, because the underlying print routines in turn call malloc,
       
   244   which will fail again.  Generally, the best policy is to simply call
       
   245   abort(). It's not very useful to do more than this because many
       
   246   errors due to overwriting will show up as address faults (null, odd
       
   247   addresses etc) rather than malloc-triggered checks, so will also
       
   248   abort.  Also, most compilers know that abort() does not return, so
       
   249   can better optimize code conditionally calling it.
       
   250 
       
   251 PROCEED_ON_ERROR           default: defined as 0 (false)
       
    252   Controls whether detected bad addresses cause them to be bypassed
       
   253   rather than aborting. If set, detected bad arguments to free and
       
   254   realloc are ignored. And all bookkeeping information is zeroed out
       
   255   upon a detected overwrite of freed heap space, thus losing the
       
   256   ability to ever return it from malloc again, but enabling the
       
   257   application to proceed. If PROCEED_ON_ERROR is defined, the
       
   258   static variable malloc_corruption_error_count is compiled in
       
   259   and can be examined to see if errors have occurred. This option
       
   260   generates slower code than the default abort policy.
       
   261 
       
   262 DEBUG                    default: NOT defined
       
   263   The DEBUG setting is mainly intended for people trying to modify
       
   264   this code or diagnose problems when porting to new platforms.
       
   265   However, it may also be able to better isolate user errors than just
       
   266   using runtime checks.  The assertions in the check routines spell
       
   267   out in more detail the assumptions and invariants underlying the
       
   268   algorithms.  The checking is fairly extensive, and will slow down
       
   269   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
       
   270   set will attempt to check every non-mmapped allocated and free chunk
       
   271   in the course of computing the summaries.
       
   272 
       
   273 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
       
   274   Debugging assertion failures can be nearly impossible if your
       
   275   version of the assert macro causes malloc to be called, which will
       
   276   lead to a cascade of further failures, blowing the runtime stack.
       
    277   ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
       
   278   which will usually make debugging easier.
       
   279 
       
   280 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
       
    281   The action to take before "return 0" when malloc fails because

    282   no memory is available.
       
   283 
       
   284 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
       
   285   True if this system supports sbrk or an emulation of it.
       
   286 
       
   287 MORECORE                  default: sbrk
       
   288   The name of the sbrk-style system routine to call to obtain more
       
   289   memory.  See below for guidance on writing custom MORECORE
       
   290   functions. The type of the argument to sbrk/MORECORE varies across
       
   291   systems.  It cannot be size_t, because it supports negative
       
   292   arguments, so it is normally the signed type of the same width as
       
   293   size_t (sometimes declared as "intptr_t").  It doesn't much matter
       
    294   though. Internally, we only call it with arguments less than half
       
   295   the max value of a size_t, which should work across all reasonable
       
   296   possibilities, although sometimes generating compiler warnings.  See
       
   297   near the end of this file for guidelines for creating a custom
       
   298   version of MORECORE.
       
   299 
       
   300 MORECORE_CONTIGUOUS       default: 1 (true)
       
   301   If true, take advantage of fact that consecutive calls to MORECORE
       
   302   with positive arguments always return contiguous increasing
       
   303   addresses.  This is true of unix sbrk. It does not hurt too much to
       
   304   set it true anyway, since malloc copes with non-contiguities.
       
    305   Setting it false when MORECORE is definitely non-contiguous avoids the time

    306   and possibly wasted space it would otherwise take to discover this.
       
   307 
       
   308 MORECORE_CANNOT_TRIM      default: NOT defined
       
   309   True if MORECORE cannot release space back to the system when given
       
   310   negative arguments. This is generally necessary only if you are
       
   311   using a hand-crafted MORECORE function that cannot handle negative
       
   312   arguments.
       
   313 
       
   314 HAVE_MMAP                 default: 1 (true)
       
   315   True if this system supports mmap or an emulation of it.  If so, and
       
   316   HAVE_MORECORE is not true, MMAP is used for all system
       
   317   allocation. If set and HAVE_MORECORE is true as well, MMAP is
       
   318   primarily used to directly allocate very large blocks. It is also
       
   319   used as a backup strategy in cases where MORECORE fails to provide
       
   320   space from system. Note: A single call to MUNMAP is assumed to be
       
    321   able to unmap memory that may have been allocated using multiple calls
       
   322   to MMAP, so long as they are adjacent.
       
   323 
       
   324 HAVE_MREMAP               default: 1 on linux, else 0
       
   325   If true realloc() uses mremap() to re-allocate large blocks and
       
   326   extend or shrink allocation spaces.
       
   327 
       
   328 MMAP_CLEARS               default: 1 on unix
       
   329   True if mmap clears memory so calloc doesn't need to. This is true
       
   330   for standard unix mmap using /dev/zero.
       
   331 
       
   332 USE_BUILTIN_FFS            default: 0 (i.e., not used)
       
   333   Causes malloc to use the builtin ffs() function to compute indices.
       
   334   Some compilers may recognize and intrinsify ffs to be faster than the
       
   335   supplied C version. Also, the case of x86 using gcc is special-cased
       
   336   to an asm instruction, so is already as fast as it can be, and so
       
   337   this setting has no effect. (On most x86s, the asm version is only
       
   338   slightly faster than the C version.)
       
   339 
       
   340 malloc_getpagesize         default: derive from system includes, or 4096.
       
   341   The system page size. To the extent possible, this malloc manages
       
   342   memory from the system in page-size units.  This may be (and
       
   343   usually is) a function rather than a constant. This is ignored
       
    344   if WIN32, where page size is determined using GetSystemInfo during
       
   345   initialization.
       
   346 
       
   347 USE_DEV_RANDOM             default: 0 (i.e., not used)
       
   348   Causes malloc to use /dev/random to initialize secure magic seed for
       
   349   stamping footers. Otherwise, the current time is used.
       
   350 
       
   351 NO_MALLINFO                default: 0
       
   352   If defined, don't compile "mallinfo". This can be a simple way
       
   353   of dealing with mismatches between system declarations and
       
   354   those in this file.
       
   355 
       
   356 MALLINFO_FIELD_TYPE        default: size_t
       
   357   The type of the fields in the mallinfo struct. This was originally
       
   358   defined as "int" in SVID etc, but is more usefully defined as
       
    359   size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
       
   360 
       
   361 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
       
   362 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
       
   363 LACKS_STDLIB_H                default: NOT defined unless on WIN32
       
   364   Define these if your system does not have these header files.
       
   365   You might need to manually insert some of the declarations they provide.
       
   366 
       
   367 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
       
   368                                 system_info.dwAllocationGranularity in WIN32,
       
   369                                 otherwise 64K.
       
   370       Also settable using mallopt(M_GRANULARITY, x)
       
   371   The unit for allocating and deallocating memory from the system.  On
       
   372   most systems with contiguous MORECORE, there is no reason to
       
   373   make this more than a page. However, systems with MMAP tend to
       
   374   either require or encourage larger granularities.  You can increase
       
    375   this value to prevent system allocation functions from being called so
       
   376   often, especially if they are slow.  The value must be at least one
       
   377   page and must be a power of two.  Setting to 0 causes initialization
       
   378   to either page size or win32 region size.  (Note: In previous
       
   379   versions of malloc, the equivalent of this option was called
       
   380   "TOP_PAD")
       
   381 
       
   382 DEFAULT_TRIM_THRESHOLD    default: 2MB
       
   383       Also settable using mallopt(M_TRIM_THRESHOLD, x)
       
   384   The maximum amount of unused top-most memory to keep before
       
   385   releasing via malloc_trim in free().  Automatic trimming is mainly
       
   386   useful in long-lived programs using contiguous MORECORE.  Because
       
   387   trimming via sbrk can be slow on some systems, and can sometimes be
       
   388   wasteful (in cases where programs immediately afterward allocate
       
   389   more large chunks) the value should be high enough so that your
       
   390   overall system performance would improve by releasing this much
       
   391   memory.  As a rough guide, you might set to a value close to the
       
   392   average size of a process (program) running on your system.
       
   393   Releasing this much memory would allow such a process to run in
       
   394   memory.  Generally, it is worth tuning trim thresholds when a
       
   395   program undergoes phases where several large chunks are allocated
       
   396   and released in ways that can reuse each other's storage, perhaps
       
   397   mixed with phases where there are no such chunks at all. The trim
       
   398   value must be greater than page size to have any useful effect.  To
       
   399   disable trimming completely, you can set to -1U. Note that the trick
       
   400   some people use of mallocing a huge space and then freeing it at
       
   401   program startup, in an attempt to reserve system memory, doesn't
       
   402   have the intended effect under automatic trimming, since that memory
       
   403   will immediately be returned to the system.
       
   404 
       
   405 DEFAULT_MMAP_THRESHOLD       default: 256K
       
   406       Also settable using mallopt(M_MMAP_THRESHOLD, x)
       
   407   The request size threshold for using MMAP to directly service a
       
   408   request. Requests of at least this size that cannot be allocated
       
   409   using already-existing space will be serviced via mmap.  (If enough
       
   410   normal freed space already exists it is used instead.)  Using mmap
       
   411   segregates relatively large chunks of memory so that they can be
       
   412   individually obtained and released from the host system. A request
       
   413   serviced through mmap is never reused by any other request (at least
       
   414   not directly; the system may just so happen to remap successive
       
   415   requests to the same locations).  Segregating space in this way has
       
   416   the benefits that: Mmapped space can always be individually released
       
   417   back to the system, which helps keep the system level memory demands
       
   418   of a long-lived program low.  Also, mapped memory doesn't become
       
   419   `locked' between other chunks, as can happen with normally allocated
       
   420   chunks, which means that even trimming via malloc_trim would not
       
   421   release them.  However, it has the disadvantage that the space
       
   422   cannot be reclaimed, consolidated, and then used to service later
       
   423   requests, as happens with normal chunks.  The advantages of mmap
       
   424   nearly always outweigh disadvantages for "large" chunks, but the
       
   425   value of "large" may vary across systems.  The default is an
       
   426   empirically derived value that works well in most systems. You can
       
   427   disable mmap by setting to -1U.
       
   428 
       
   429 */
       
   430 #include <e32base.h>
       
   431 #include <e32hal.h>
       
   432 #include <hal.h>
       
   433 
       
   434 #include "MemoryManager.h"
       
   435 
       
   436 //#define OOM_LOGGING
       
   437 #include "MemoryLogger.h"
       
   438 
       
   439 /*
       
   440 #ifndef WIN32
       
   441 #ifdef _WIN32
       
   442 #define WIN32 1
       
   443 #endif
       
   444 #endif
       
   445 */
       
   446 #ifdef WIN32
       
   447 #define WIN32_LEAN_AND_MEAN
       
   448 #include <windows.h>
       
   449 #define HAVE_MMAP 1
       
   450 #define HAVE_MORECORE 0
       
   451 #define LACKS_UNISTD_H
       
   452 #define LACKS_SYS_PARAM_H
       
   453 #define LACKS_SYS_MMAN_H
       
   454 #define LACKS_STRING_H
       
   455 #define LACKS_STRINGS_H
       
   456 #define LACKS_SYS_TYPES_H
       
   457 #define LACKS_ERRNO_H
       
   458 #define MALLOC_FAILURE_ACTION
       
   459 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
       
   460 #endif
       
   461 
       
   462 #if defined(DARWIN) || defined(_DARWIN)
       
   463 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
       
   464 #ifndef HAVE_MORECORE
       
   465 #define HAVE_MORECORE 0
       
   466 #define HAVE_MMAP 1
       
   467 #endif
       
   468 #endif
       
   469 
       
   470 #define LACKS_SYS_MMAN_H 1
       
   471 #ifndef LACKS_SYS_TYPES_H
       
   472 #include <sys/types.h>  /* For size_t */
       
   473 #endif
       
   474 #ifndef ONLY_MSPACES
       
   475 #define ONLY_MSPACES 0
       
   476 #endif
       
   477 #ifndef MSPACES
       
   478 #if ONLY_MSPACES
       
   479 #define MSPACES 1
       
   480 #else
       
   481 #define MSPACES 0
       
   482 #endif
       
   483 #endif
       
   484 #ifndef MALLOC_ALIGNMENT
       
   485 #define MALLOC_ALIGNMENT (8U)
       
   486 #endif
       
   487 #ifndef FOOTERS
       
   488 #define FOOTERS 0
       
   489 #endif
       
   490 #ifndef ABORT
       
   491 #define ABORT  log_abort(__FILE__, __LINE__)
       
   492 #endif
       
   493 #ifndef ABORT_ON_ASSERT_FAILURE
       
   494 #define ABORT_ON_ASSERT_FAILURE 1
       
   495 #endif
       
   496 #ifndef PROCEED_ON_ERROR
       
   497 #define PROCEED_ON_ERROR 0
       
   498 #endif
       
   499 #ifndef USE_LOCKS
       
   500 #define USE_LOCKS 1
       
   501 #endif
       
   502 #ifndef INSECURE
       
   503 #define INSECURE 1
       
   504 #endif
       
   505 #ifndef HAVE_MMAP
       
   506 #define HAVE_MMAP 1
       
   507 #endif
       
   508 #ifndef MMAP_CLEARS
       
   509 #define MMAP_CLEARS 1
       
   510 #endif
       
   511 #ifndef HAVE_MREMAP
       
   512 #ifdef linux
       
   513 #define HAVE_MREMAP 1
       
   514 #else
       
   515 #define HAVE_MREMAP 0
       
   516 #endif
       
   517 #endif
       
   518 #ifndef MALLOC_FAILURE_ACTION
       
   519 #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
       
   520 #endif
       
   521 #ifndef HAVE_MORECORE
       
   522 #if ONLY_MSPACES
       
   523 #define HAVE_MORECORE 0
       
   524 #else
       
   525 #define HAVE_MORECORE 1
       
   526 #endif
       
   527 #endif
       
   528 #if !HAVE_MORECORE
       
   529 #define MORECORE_CONTIGUOUS 0
       
   530 #else
       
   531 #ifndef MORECORE
       
   532 #define MORECORE chunkMoreCore
       
   533 #endif
       
   534 #ifndef MORECORE_CONTIGUOUS
       
   535 #define MORECORE_CONTIGUOUS 1
       
   536 #endif
       
   537 #endif
       
   538 #ifndef DEFAULT_GRANULARITY
       
   539 #if MORECORE_CONTIGUOUS
       
   540 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
       
   541 #else
       
   542 #define DEFAULT_GRANULARITY (64 * 1024U)
       
   543 #endif
       
   544 #endif
       
   545 #ifndef DEFAULT_TRIM_THRESHOLD
       
   546 #ifndef MORECORE_CANNOT_TRIM
       
   547 #define DEFAULT_TRIM_THRESHOLD (64 * 1024U)
       
   548 #else
       
   549 #define DEFAULT_TRIM_THRESHOLD (-1U)
       
   550 #endif
       
   551 #endif
       
   552 #ifndef DEFAULT_MMAP_THRESHOLD
       
   553 #if HAVE_MMAP
       
   554 #define DEFAULT_MMAP_THRESHOLD (4U * 1024U)
       
   555 #else
       
   556 #define DEFAULT_MMAP_THRESHOLD (-1U)
       
   557 #endif
       
   558 #endif
       
   559 #ifndef USE_BUILTIN_FFS
       
   560 #define USE_BUILTIN_FFS 0
       
   561 #endif
       
   562 #ifndef USE_DEV_RANDOM
       
   563 #define USE_DEV_RANDOM 0
       
   564 #endif
       
   565 #ifndef NO_MALLINFO
       
   566 #define NO_MALLINFO 0
       
   567 #endif
       
   568 #ifndef MALLINFO_FIELD_TYPE
       
   569 #define MALLINFO_FIELD_TYPE size_t
       
   570 #endif
       
   571 
       
   572 /*
       
   573   mallopt tuning options.  SVID/XPG defines four standard parameter
       
   574   numbers for mallopt, normally defined in malloc.h.  None of these
       
   575   are used in this malloc, so setting them has no effect. But this
       
   576   malloc does support the following options.
       
   577 */
       
   578 
       
   579 #define M_TRIM_THRESHOLD     (-1)
       
   580 #define M_GRANULARITY        (-2)
       
   581 #define M_MMAP_THRESHOLD     (-3)
       
   582 
       
   583 /* ------------------------ Mallinfo declarations ------------------------ */
       
   584 
       
   585 #if !NO_MALLINFO
       
   586 /*
       
   587   This version of malloc supports the standard SVID/XPG mallinfo
       
   588   routine that returns a struct containing usage properties and
       
   589   statistics. It should work on any system that has a
       
   590   /usr/include/malloc.h defining struct mallinfo.  The main
       
   591   declaration needed is the mallinfo struct that is returned (by-copy)
       
    592   by mallinfo().  The mallinfo struct contains a bunch of fields that

    593   are not even meaningful in this version of malloc.  These fields

    594   are instead filled by mallinfo() with other numbers that might be of
       
   595   interest.
       
   596 
       
   597   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
       
   598   /usr/include/malloc.h file that includes a declaration of struct
       
   599   mallinfo.  If so, it is included; else a compliant version is
       
   600   declared below.  These must be precisely the same for mallinfo() to
       
   601   work.  The original SVID version of this struct, defined on most
       
   602   systems with mallinfo, declares all fields as ints. But some others
       
   603   define as unsigned long. If your system defines the fields using a
       
   604   type of different width than listed here, you MUST #include your
       
   605   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
       
   606 */
       
   607 
       
   608 /* #define HAVE_USR_INCLUDE_MALLOC_H */
       
   609 
       
   610 #ifdef HAVE_USR_INCLUDE_MALLOC_H
       
   611 #include "/usr/include/malloc.h"
       
   612 #else
       
   613 
       
   614 struct mallinfo {
       
   615   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
       
   616   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
       
   617   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
       
   618   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
       
   619   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
       
   620   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
       
   621   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
       
   622   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
       
   623   MALLINFO_FIELD_TYPE fordblks; /* total free space */
       
   624   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
       
   625 };
       
   626 
       
   627 #endif
       
   628 #endif
       
   629 
       
   630 #if !ONLY_MSPACES
       
   631 
       
   632 /* ------------------- Declarations of public routines ------------------- */
       
   633 
       
   634 #ifndef USE_DL_PREFIX
       
   635 #define dlcalloc               fast_calloc
       
   636 #define dlfree                 fast_free
       
   637 #define dlmalloc               fast_malloc
       
   638 #define dlmemalign             fast_memalign
       
   639 #define dlrealloc              fast_realloc
       
   640 #define dlvalloc               fast_valloc
       
   641 #define dlpvalloc              fast_pvalloc
       
   642 #define dlmallinfo             fast_mallinfo
       
   643 #define dlmallopt              fast_mallopt
       
   644 #define dlmalloc_trim          fast_malloc_trim
       
   645 #define dlmalloc_stats         fast_malloc_stats
       
   646 #define dlmalloc_usable_size   fast_malloc_usable_size
       
   647 #define dlmalloc_footprint     fast_malloc_footprint
       
   648 #define dlindependent_calloc   fast_independent_calloc
       
   649 #define dlindependent_comalloc fast_independent_comalloc
       
   650 #endif
       
   651 
       
   652 
       
   653 /*
       
   654   malloc(size_t n)
       
   655   Returns a pointer to a newly allocated chunk of at least n bytes, or
       
   656   null if no space is available, in which case errno is set to ENOMEM
       
   657   on ANSI C systems.
       
   658 
       
   659   If n is zero, malloc returns a minimum-sized chunk. (The minimum
       
   660   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
       
   661   systems.)  Note that size_t is an unsigned type, so calls with
       
   662   arguments that would be negative if signed are interpreted as
       
   663   requests for huge amounts of space, which will often fail. The
       
   664   maximum supported value of n differs across systems, but is in all
       
   665   cases less than the maximum representable value of a size_t.
       
   666 */
       
   667 void* dlmalloc(size_t);
       
   668 
       
   669 /*
       
   670   free(void* p)
       
   671   Releases the chunk of memory pointed to by p, that had been previously
       
   672   allocated using malloc or a related routine such as realloc.
       
   673   It has no effect if p is null. If p was not malloced or already
       
    674   freed, free(p) will by default cause the current program to abort.
       
   675 */
       
   676 void  dlfree(void*);
       
   677 
       
   678 /*
       
   679   calloc(size_t n_elements, size_t element_size);
       
   680   Returns a pointer to n_elements * element_size bytes, with all locations
       
   681   set to zero.
       
   682 */
       
   683 void* dlcalloc(size_t, size_t);
       
   684 
       
   685 /*
       
   686   realloc(void* p, size_t n)
       
   687   Returns a pointer to a chunk of size n that contains the same data
       
   688   as does chunk p up to the minimum of (n, p's size) bytes, or null
       
   689   if no space is available.
       
   690 
       
   691   The returned pointer may or may not be the same as p. The algorithm
       
   692   prefers extending p in most cases when possible, otherwise it
       
   693   employs the equivalent of a malloc-copy-free sequence.
       
   694 
       
   695   If p is null, realloc is equivalent to malloc.
       
   696 
       
   697   If space is not available, realloc returns null, errno is set (if on
       
   698   ANSI) and p is NOT freed.
       
   699 
       
   700   if n is for fewer bytes than already held by p, the newly unused
       
   701   space is lopped off and freed if possible.  realloc with a size
       
   702   argument of zero (re)allocates a minimum-sized chunk.
       
   703 
       
   704   The old unix realloc convention of allowing the last-free'd chunk
       
   705   to be used as an argument to realloc is not supported.
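          For example, a minimal sketch of growing a buffer without leaking it
          when realloc fails (buf and n are assumed to be an existing allocation
          and its current size; the names are illustrative only):

            // Keep the old pointer until the new one is known to be valid.
            void* bigger = dlrealloc(buf, 2 * n);
            if (bigger != 0) {
              buf = bigger;            // adopt the (possibly moved) chunk
              n = 2 * n;
            } else {
              // allocation failed: buf is still valid and still owned here
            }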
       
   706 */
       
   707 
       
   708 void* dlrealloc(void*, size_t);
       
   709 
       
   710 /*
       
   711   memalign(size_t alignment, size_t n);
       
   712   Returns a pointer to a newly allocated chunk of n bytes, aligned
       
   713   in accord with the alignment argument.
       
   714 
       
   715   The alignment argument should be a power of two. If the argument is
       
   716   not a power of two, the nearest greater power is used.
       
   717   8-byte alignment is guaranteed by normal malloc calls, so don't
       
   718   bother calling memalign with an argument of 8 or less.
       
   719 
       
   720   Overreliance on memalign is a sure way to fragment space.
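          A sketch of a single aligned request (the 64-byte alignment is chosen
          only for illustration):

            void* p = dlmemalign(64, 1000);   // 1000 bytes on a 64-byte boundary
            if (p != 0)
              dlfree(p);                      // aligned chunks are freed normally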
       
   721 */
       
   722 void* dlmemalign(size_t, size_t);
       
   723 
       
   724 /*
       
   725   valloc(size_t n);
       
   726   Equivalent to memalign(pagesize, n), where pagesize is the page
       
   727   size of the system. If the pagesize is unknown, 4096 is used.
       
   728 */
       
   729 void* dlvalloc(size_t);
       
   730 
       
   731 /*
       
   732   mallopt(int parameter_number, int parameter_value)
       
    733   Sets tunable parameters. The format is to provide a
       
   734   (parameter-number, parameter-value) pair.  mallopt then sets the
       
   735   corresponding parameter to the argument value if it can (i.e., so
       
   736   long as the value is meaningful), and returns 1 if successful else
       
   737   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
       
    738   normally defined in malloc.h.  None of these are used in this malloc,
       
   739   so setting them has no effect. But this malloc also supports other
       
   740   options in mallopt. See below for details.  Briefly, supported
       
   741   parameters are as follows (listed defaults are for "typical"
       
   742   configurations).
       
   743 
       
   744   Symbol            param #  default    allowed param values
       
   745   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1U disables trimming)
       
   746   M_GRANULARITY        -2     page size   any power of 2 >= page size
       
   747   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
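          A sketch of typical use (the values passed here are illustrative only):

            // Raise the mmap threshold to 1MB; dlmallopt returns 1 on success.
            if (dlmallopt(M_MMAP_THRESHOLD, 1024 * 1024) == 0) {
              // value was rejected; the previous setting remains in effect
            }
            dlmallopt(M_TRIM_THRESHOLD, 4 * 1024 * 1024);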
       
   748 */
       
   749 int dlmallopt(int, int);
       
   750 
       
   751 /*
       
   752   malloc_footprint();
       
   753   Returns the number of bytes obtained from the system.  The total
       
   754   number of bytes allocated by malloc, realloc etc., is less than this
       
   755   value. Unlike mallinfo, this function returns only a precomputed
       
   756   result, so can be called frequently to monitor memory consumption.
       
   757   Even if locks are otherwise defined, this function does not use them,
       
   758   so results might not be up to date.
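          A sketch of using it as a cheap memory monitor (illustrative only):

            size_t before = dlmalloc_footprint();
            // ... allocate and free ...
            size_t after = dlmalloc_footprint();
            // (after - before) is the change in bytes obtained from the system,
            // not the change in bytes currently handed out to the program.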
       
   759 */
       
   760 size_t dlmalloc_footprint();
       
   761 
       
   762 #if !NO_MALLINFO
       
   763 /*
       
   764   mallinfo()
       
   765   Returns (by copy) a struct containing various summary statistics:
       
   766 
       
   767   arena:     current total non-mmapped bytes allocated from system
       
   768   ordblks:   the number of free chunks
       
   769   smblks:    always zero.
       
   770   hblks:     current number of mmapped regions
       
   771   hblkhd:    total bytes held in mmapped regions
       
   772   usmblks:   the maximum total allocated space. This will be greater
       
   773                 than current total if trimming has occurred.
       
   774   fsmblks:   always zero
       
   775   uordblks:  current total allocated space (normal or mmapped)
       
   776   fordblks:  total free space
       
   777   keepcost:  the maximum number of bytes that could ideally be released
       
   778                back to system via malloc_trim. ("ideally" means that
       
   779                it ignores page restrictions etc.)
       
   780 
       
   781   Because these fields are ints, but internal bookkeeping may
       
   782   be kept as longs, the reported values may wrap around zero and
       
   783   thus be inaccurate.
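          A sketch of reading a few of these fields (field meanings as listed
          above; MALLINFO_FIELD_TYPE defaults to size_t in this file):

            struct mallinfo mi = dlmallinfo();
            size_t in_use     = mi.uordblks;   // current total allocated space
            size_t free_space = mi.fordblks;   // total free space
            size_t trimmable  = mi.keepcost;   // ideally releasable via malloc_trim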
       
   784 */
       
   785 struct mallinfo dlmallinfo(void);
       
   786 #endif
       
   787 
       
   788 /*
       
   789   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
       
   790 
       
   791   independent_calloc is similar to calloc, but instead of returning a
       
   792   single cleared space, it returns an array of pointers to n_elements
       
   793   independent elements that can hold contents of size elem_size, each
       
   794   of which starts out cleared, and can be independently freed,
       
   795   realloc'ed etc. The elements are guaranteed to be adjacently
       
   796   allocated (this is not guaranteed to occur with multiple callocs or
       
   797   mallocs), which may also improve cache locality in some
       
   798   applications.
       
   799 
       
   800   The "chunks" argument is optional (i.e., may be null, which is
       
   801   probably the most typical usage). If it is null, the returned array
       
   802   is itself dynamically allocated and should also be freed when it is
       
   803   no longer needed. Otherwise, the chunks array must be of at least
       
   804   n_elements in length. It is filled in with the pointers to the
       
   805   chunks.
       
   806 
       
   807   In either case, independent_calloc returns this pointer array, or
       
   808   null if the allocation failed.  If n_elements is zero and "chunks"
       
   809   is null, it returns a chunk representing an array with zero elements
       
   810   (which should be freed if not wanted).
       
   811 
       
   812   Each element must be individually freed when it is no longer
       
   813   needed. If you'd like to instead be able to free all at once, you
       
   814   should instead use regular calloc and assign pointers into this
       
   815   space to represent elements.  (In this case though, you cannot
       
   816   independently free elements.)
       
   817 
       
   818   independent_calloc simplifies and speeds up implementations of many
       
   819   kinds of pools.  It may also be useful when constructing large data
       
   820   structures that initially have a fixed number of fixed-sized nodes,
       
   821   but the number is not known at compile time, and some of the nodes
       
   822   may later need to be freed. For example:
       
   823 
       
   824   struct Node { int item; struct Node* next; };
       
   825 
       
   826   struct Node* build_list() {
       
   827     struct Node** pool;
       
   828     int n = read_number_of_nodes_needed();
       
   829     if (n <= 0) return 0;
       
    830     pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
       
   831     if (pool == 0) die();
       
   832     // organize into a linked list...
       
   833     struct Node* first = pool[0];
       
    834     for (int i = 0; i < n-1; ++i)
       
   835       pool[i]->next = pool[i+1];
       
   836     free(pool);     // Can now free the array (or not, if it is needed later)
       
   837     return first;
       
   838   }
       
   839 */
       
   840 void** dlindependent_calloc(size_t, size_t, void**);
       
   841 
       
   842 /*
       
   843   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
       
   844 
       
   845   independent_comalloc allocates, all at once, a set of n_elements
       
   846   chunks with sizes indicated in the "sizes" array.    It returns
       
   847   an array of pointers to these elements, each of which can be
       
   848   independently freed, realloc'ed etc. The elements are guaranteed to
       
   849   be adjacently allocated (this is not guaranteed to occur with
       
   850   multiple callocs or mallocs), which may also improve cache locality
       
   851   in some applications.
       
   852 
       
   853   The "chunks" argument is optional (i.e., may be null). If it is null
       
   854   the returned array is itself dynamically allocated and should also
       
   855   be freed when it is no longer needed. Otherwise, the chunks array
       
   856   must be of at least n_elements in length. It is filled in with the
       
   857   pointers to the chunks.
       
   858 
       
   859   In either case, independent_comalloc returns this pointer array, or
       
   860   null if the allocation failed.  If n_elements is zero and chunks is
       
   861   null, it returns a chunk representing an array with zero elements
       
   862   (which should be freed if not wanted).
       
   863 
       
   864   Each element must be individually freed when it is no longer
       
   865   needed. If you'd like to instead be able to free all at once, you
       
   866   should instead use a single regular malloc, and assign pointers at
       
   867   particular offsets in the aggregate space. (In this case though, you
       
   868   cannot independently free elements.)
       
   869 
       
    870   independent_comalloc differs from independent_calloc in that each
       
   871   element may have a different size, and also that it does not
       
   872   automatically clear elements.
       
   873 
       
   874   independent_comalloc can be used to speed up allocation in cases
       
   875   where several structs or objects must always be allocated at the
       
   876   same time.  For example:
       
   877 
       
    878   struct Head { ... };

    879   struct Foot { ... };
       
   880 
       
   881   void send_message(char* msg) {
       
   882     int msglen = strlen(msg);
       
   883     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
       
   884     void* chunks[3];
       
   885     if (independent_comalloc(3, sizes, chunks) == 0)
       
   886       die();
       
   887     struct Head* head = (struct Head*)(chunks[0]);
       
   888     char*        body = (char*)(chunks[1]);
       
   889     struct Foot* foot = (struct Foot*)(chunks[2]);
       
   890     // ...
       
   891   }
       
   892 
       
   893   In general though, independent_comalloc is worth using only for
       
   894   larger values of n_elements. For small values, you probably won't
       
   895   detect enough difference from series of malloc calls to bother.
       
   896 
       
   897   Overuse of independent_comalloc can increase overall memory usage,
       
   898   since it cannot reuse existing noncontiguous small chunks that
       
   899   might be available for some of the elements.
       
   900 */
       
   901 void** dlindependent_comalloc(size_t, size_t*, void**);
       
   902 
       
   903 
       
   904 /*
       
   905   pvalloc(size_t n);
       
   906   Equivalent to valloc(minimum-page-that-holds(n)), that is,
       
   907   round up n to nearest pagesize.
       
   908  */
       
   909 void*  dlpvalloc(size_t);
       
   910 
       
   911 /*
       
   912   malloc_trim(size_t pad);
       
   913 
       
   914   If possible, gives memory back to the system (via negative arguments
       
   915   to sbrk) if there is unused memory at the `high' end of the malloc
       
   916   pool or in unused MMAP segments. You can call this after freeing
       
   917   large blocks of memory to potentially reduce the system-level memory
       
   918   requirements of a program. However, it cannot guarantee to reduce
       
   919   memory. Under some allocation patterns, some large free blocks of
       
   920   memory will be locked between two used chunks, so they cannot be
       
   921   given back to the system.
       
   922 
       
   923   The `pad' argument to malloc_trim represents the amount of free
       
   924   trailing space to leave untrimmed. If this argument is zero, only
       
   925   the minimum amount of memory to maintain internal data structures
       
   926   will be left. Non-zero arguments can be supplied to maintain enough
       
   927   trailing space to service future expected allocations without having
       
   928   to re-obtain memory from the system.
       
   929 
       
   930   Malloc_trim returns 1 if it actually released any memory, else 0.
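          A sketch (the pad value is illustrative only):

            // After dropping a large working set, ask the allocator to hand
            // unused top-of-heap memory back, keeping ~64K of slack around.
            int released = dlmalloc_trim(64 * 1024);
            // released is 1 if any memory was returned to the system, else 0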
       
   931 */
       
   932 int  dlmalloc_trim(size_t);
       
   933 
       
   934 /*
       
   935   malloc_usable_size(void* p);
       
   936 
       
   937   Returns the number of bytes you can actually use in
       
   938   an allocated chunk, which may be more than you requested (although
       
   939   often not) due to alignment and minimum size constraints.
       
   940   You can use this many bytes without worrying about
       
   941   overwriting other allocated objects. This is not a particularly great
       
   942   programming practice. malloc_usable_size can be more useful in
       
   943   debugging and assertions, for example:
       
   944 
       
   945   p = malloc(n);
       
   946   assert(malloc_usable_size(p) >= 256);
       
   947 */
       
   948 size_t dlmalloc_usable_size(void*);
       
   949 
       
   950 /*
       
   951   malloc_stats();
       
   952   Prints on stderr the amount of space obtained from the system (both
       
   953   via sbrk and mmap), the maximum amount (which may be more than
       
   954   current if malloc_trim and/or munmap got called), and the current
       
   955   number of bytes allocated via malloc (or realloc, etc) but not yet
       
   956   freed. Note that this is the number of bytes allocated, not the
       
   957   number requested. It will be larger than the number requested
       
   958   because of alignment and bookkeeping overhead. Because it includes
       
   959   alignment wastage as being in use, this figure may be greater than
       
   960   zero even when no user-level chunks are allocated.
       
   961 
       
   962   The reported current and maximum system memory can be inaccurate if
       
   963   a program makes other calls to system memory allocation functions
       
   964   (normally sbrk) outside of malloc.
       
   965 
       
   966   malloc_stats prints only the most commonly interesting statistics.
       
   967   More information can be obtained by calling mallinfo.
       
   968 */
       
   969 void  dlmalloc_stats();
       
   970 
       
   971 #endif
       
   972 
       
   973 #if MSPACES
       
   974 
       
   975 /*
       
   976   mspace is an opaque type representing an independent
       
   977   region of space that supports mspace_malloc, etc.
       
   978 */
       
   979 typedef void* mspace;
       
   980 
       
   981 /*
       
   982   create_mspace creates and returns a new independent space with the
       
   983   given initial capacity, or, if 0, the default granularity size.  It
       
   984   returns null if there is no system memory available to create the
       
   985   space.  If argument locked is non-zero, the space uses a separate
       
   986   lock to control access. The capacity of the space will grow
       
   987   dynamically as needed to service mspace_malloc requests.  You can
       
   988   control the sizes of incremental increases of this space by
       
   989   compiling with a different DEFAULT_GRANULARITY or dynamically
       
   990   setting with mallopt(M_GRANULARITY, value).
       
   991 */
       
   992 mspace create_mspace(size_t capacity, int locked);
       
   993 
       
   994 /*
       
   995   destroy_mspace destroys the given space, and attempts to return all
       
   996   of its memory back to the system, returning the total number of
       
   997   bytes freed. After destruction, the results of access to all memory
       
   998   used by the space become undefined.
       
   999 */
       
  1000 size_t destroy_mspace(mspace msp);
       
  1001 
       
  1002 /*
       
  1003   create_mspace_with_base uses the memory supplied as the initial base
       
  1004   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
       
  1005   space is used for bookkeeping, so the capacity must be at least this
       
  1006   large. (Otherwise 0 is returned.) When this initial space is
       
  1007   exhausted, additional memory will be obtained from the system.
       
  1008   Destroying this space will deallocate all additionally allocated
       
  1009   space (if possible) but not the initial base.
       
  1010 */
       
  1011 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
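/*
  Illustrative usage sketch (not part of the original sources): carve an
  mspace out of a caller-supplied buffer.  The buffer must be larger than
  the bookkeeping overhead noted above (less than 128*sizeof(size_t) bytes),
  otherwise create_mspace_with_base returns 0.

    static char arena[64 * 1024];
    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    // ... serve requests from the buffer via mspace_malloc(msp, n) ...
*/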
       
  1012 
       
  1013 /*
       
  1014   mspace_malloc behaves as malloc, but operates within
       
  1015   the given space.
       
  1016 */
       
  1017 void* mspace_malloc(mspace msp, size_t bytes);
       
  1018 
       
  1019 /*
       
  1020   mspace_free behaves as free, but operates within
       
  1021   the given space.
       
  1022 
       
  1023   If compiled with FOOTERS==1, mspace_free is not actually needed.
       
  1024   free may be called instead of mspace_free because freed chunks from
       
  1025   any space are handled by their originating spaces.
       
  1026 */
       
  1027 void mspace_free(mspace msp, void* mem);
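/*
  Illustrative usage sketch (not part of the original sources) tying the
  mspace calls declared above together: create a private space, allocate
  from it, and finally release the whole space in one call.

    mspace msp = create_mspace(0, 0);        // default capacity, no locking
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      if (p != 0)
        mspace_free(msp, p);
      size_t freed = destroy_mspace(msp);    // total bytes returned to the system
    }
*/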
       
  1028 
       
  1029 /*
       
  1030   mspace_realloc behaves as realloc, but operates within
       
  1031   the given space.
       
  1032 
       
  1033   If compiled with FOOTERS==1, mspace_realloc is not actually
       
  1034   needed.  realloc may be called instead of mspace_realloc because
       
  1035   realloced chunks from any space are handled by their originating
       
  1036   spaces.
       
  1037 */
       
  1038 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
       
  1039 
       
  1040 /*
       
  1041   mspace_calloc behaves as calloc, but operates within
       
  1042   the given space.
       
  1043 */
       
  1044 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
       
  1045 
       
  1046 /*
       
  1047   mspace_memalign behaves as memalign, but operates within
       
  1048   the given space.
       
  1049 */
       
  1050 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
       
  1051 
       
  1052 /*
       
  1053   mspace_independent_calloc behaves as independent_calloc, but
       
  1054   operates within the given space.
       
  1055 */
       
  1056 void** mspace_independent_calloc(mspace msp, size_t n_elements,
       
  1057                                  size_t elem_size, void* chunks[]);
       
  1058 
       
  1059 /*
       
  1060   mspace_independent_comalloc behaves as independent_comalloc, but
       
  1061   operates within the given space.
       
  1062 */
       
  1063 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
       
  1064                                    size_t sizes[], void* chunks[]);
       
  1065 
       
  1066 /*
       
  1067   mspace_footprint() returns the number of bytes obtained from the
       
  1068   system for this space.
       
  1069 */
       
  1070 size_t mspace_footprint(mspace msp);
       
  1071 
       
  1072 
       
  1073 #if !NO_MALLINFO
       
  1074 /*
       
  1075   mspace_mallinfo behaves as mallinfo, but reports properties of
       
  1076   the given space.
       
  1077 */
       
  1078 struct mallinfo mspace_mallinfo(mspace msp);
       
  1079 #endif
       
  1080 
       
  1081 /*
       
  1082   mspace_malloc_stats behaves as malloc_stats, but reports
       
  1083   properties of the given space.
       
  1084 */
       
  1085 void mspace_malloc_stats(mspace msp);
       
  1086 
       
  1087 /*
       
  1088   mspace_trim behaves as malloc_trim, but
       
  1089   operates within the given space.
       
  1090 */
       
  1091 int mspace_trim(mspace msp, size_t pad);
       
  1092 
       
  1093 /*
       
  1094   An alias for mallopt.
       
  1095 */
       
  1096 int mspace_mallopt(int, int);
       
  1097 
       
  1098 #endif
       
  1099 
       
  1100 /*
       
  1101   ========================================================================
       
  1102   To make a fully customizable malloc.h header file, cut everything
       
  1103   above this line, put into file malloc.h, edit to suit, and #include it
       
  1104   on the next line, as well as in programs that use this malloc.
       
  1105   ========================================================================
       
  1106 */
       
  1107 
       
  1108 /* #include "malloc.h" */
       
  1109 
       
  1110 /*------------------------------ internal #includes ---------------------- */
       
  1111 
       
  1112 #ifdef WIN32
       
  1113 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
       
  1114 #endif
       
  1115 
       
  1116 #include <stdio.h>       /* for printing in malloc_stats */
       
  1117 
       
  1118 #ifndef LACKS_ERRNO_H
       
  1119 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
       
  1120 #endif
       
  1121 #if FOOTERS
       
  1122 #include <time.h>        /* for magic initialization */
       
  1123 #endif
       
  1124 #ifndef LACKS_STDLIB_H
       
  1125 #include <stdlib.h>      /* for abort() */
       
  1126 #endif
       
  1127 #ifdef DEBUG
       
  1128 #if ABORT_ON_ASSERT_FAILURE
       
  1129 #define assert(x) if(!(x)) ABORT
       
  1130 #else
       
  1131 #include <assert.h>
       
  1132 #endif
       
  1133 #else
       
  1134 #define assert(x)
       
  1135 #endif
       
  1136 #ifndef LACKS_STRING_H
       
  1137 #include <string.h>      /* for memset etc */
       
  1138 #endif
       
  1139 #if USE_BUILTIN_FFS
       
  1140 #ifndef LACKS_STRINGS_H
       
  1141 #include <strings.h>     /* for ffs */
       
  1142 #endif
       
  1143 #endif
       
  1144 #if HAVE_MMAP
       
  1145 #ifndef LACKS_SYS_MMAN_H
       
  1146 #include <sys/mman.h>    /* for mmap */
       
  1147 #endif
       
  1148 #ifndef LACKS_FCNTL_H
       
  1149 #include <fcntl.h>
       
  1150 #endif
       
  1151 #endif
       
  1152 #if HAVE_MORECORE
       
  1153 #ifndef LACKS_UNISTD_H
       
  1154 #include <unistd.h>     /* for sbrk */
       
  1155 #else
       
  1156 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
       
  1157 extern void*     sbrk(ptrdiff_t);
       
  1158 #endif
       
  1159 #endif
       
  1160 #endif
       
  1161 
       
  1162 // added by Nokia --
       
  1163 void* chunkMoreCore(int size);
       
  1164 void log_abort(const char* file, unsigned int line);
       
  1165 #define malloc_getpagesize 4096
       
  1166 #define PAGE_ALIGN 4095
       
  1167 // end ---
       
  1168 
       
  1169 #ifndef WIN32
       
  1170 #ifndef malloc_getpagesize
       
  1171 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
       
  1172 #    ifndef _SC_PAGE_SIZE
       
  1173 #      define _SC_PAGE_SIZE _SC_PAGESIZE
       
  1174 #    endif
       
  1175 #  endif
       
  1176 #  ifdef _SC_PAGE_SIZE
       
  1177 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
       
  1178 #  else
       
  1179 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       
  1180        extern size_t getpagesize();
       
  1181 #      define malloc_getpagesize getpagesize()
       
  1182 #    else
       
  1183 #      ifdef WIN32 /* use supplied emulation of getpagesize */
       
  1184 #        define malloc_getpagesize getpagesize()
       
  1185 #      else
       
  1186 #        ifndef LACKS_SYS_PARAM_H
       
  1187 #          include <sys/param.h>
       
  1188 #        endif
       
  1189 #        ifdef EXEC_PAGESIZE
       
  1190 #          define malloc_getpagesize EXEC_PAGESIZE
       
  1191 #        else
       
  1192 #          ifdef NBPG
       
  1193 #            ifndef CLSIZE
       
  1194 #              define malloc_getpagesize NBPG
       
  1195 #            else
       
  1196 #              define malloc_getpagesize (NBPG * CLSIZE)
       
  1197 #            endif
       
  1198 #          else
       
  1199 #            ifdef NBPC
       
  1200 #              define malloc_getpagesize NBPC
       
  1201 #            else
       
  1202 #              ifdef PAGESIZE
       
  1203 #                define malloc_getpagesize PAGESIZE
       
  1204 #              else /* just guess */
       
  1205 #                define malloc_getpagesize (4096U)
       
  1206 #              endif
       
  1207 #            endif
       
  1208 #          endif
       
  1209 #        endif
       
  1210 #      endif
       
  1211 #    endif
       
  1212 #  endif
       
  1213 #endif
       
  1214 #endif
       
  1215 
       
  1216 /* ------------------- size_t and alignment properties -------------------- */
       
  1217 
       
  1218 /* The byte and bit size of a size_t */
       
  1219 #define SIZE_T_SIZE         (sizeof(size_t))
       
  1220 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
       
  1221 
       
  1222 /* The size_t value with all bits set */
       
  1223 #define MAX_SIZE_T           (~(size_t)0)
       
  1224 
       
  1225 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
       
  1226 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - 1)
       
  1227 
       
  1228 /* True if address a has acceptable alignment */
       
  1229 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
       
  1230 
       
  1231 /* the number of bytes to offset an address to align it */
       
  1232 #define align_offset(A)\
       
  1233  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
       
  1234   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
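/*
  Worked example (illustrative, assuming the default MALLOC_ALIGNMENT of 8,
  so CHUNK_ALIGN_MASK is 7): for an address whose low three bits are 5,
  align_offset yields (8 - 5) & 7 == 3, the number of bytes needed to reach
  the next 8-byte boundary; for an already-aligned address it yields 0.
*/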
       
  1235 
       
  1236 /* -------------------------- MMAP preliminaries ------------------------- */
       
  1237 
       
  1238 /*
       
  1239    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
       
   1240    checks to fail so the compiler optimizer can delete code rather than
       
  1241    using so many "#if"s.
       
  1242 */
       
  1243 
       
  1244 
       
  1245 /* MORECORE and MMAP must return MFAIL on failure */
       
  1246 #define MFAIL                ((void*)(MAX_SIZE_T))
       
  1247 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
       
  1248 
       
  1249 #if !HAVE_MMAP
       
  1250 #define IS_MMAPPED_BIT       (0U)
       
  1251 #define USE_MMAP_BIT         (0U)
       
  1252 #define CALL_MMAP(s)         MFAIL
       
  1253 #define CALL_MUNMAP(a, s)    (-1)
       
  1254 #define DIRECT_MMAP(s)       MFAIL
       
  1255 
       
  1256 #else
       
  1257 #define IS_MMAPPED_BIT       (1U)
       
  1258 #define USE_MMAP_BIT         (1U)
       
  1259 
       
  1260 #if NOKIA_CHANGES  // nokia stuff
       
  1261 static void* symbian_mmap(size_t size);
       
  1262 static int symbian_munmap(void* ptr, size_t size);
       
  1263 
       
  1264 #define CALL_MMAP(s)         symbian_mmap(s)
       
  1265 #define CALL_MUNMAP(a, s)    symbian_munmap((a), (s))
       
  1266 #define DIRECT_MMAP(s)       symbian_mmap(s)
       
  1267 
       
  1268 #else  // NOKIA_CHANGES
       
  1269 #ifndef WIN32
       
  1270 #define CALL_MUNMAP(a, s)    munmap((a), (s))
       
  1271 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
       
  1272 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
       
  1273 #define MAP_ANONYMOUS        MAP_ANON
       
  1274 #endif
       
  1275 #ifdef MAP_ANONYMOUS
       
  1276 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
       
  1277 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
       
  1278 #else
       
  1279 /*
       
  1280    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
       
  1281    is unlikely to be needed, but is supplied just in case.
       
  1282 */
       
  1283 #define MMAP_FLAGS           (MAP_PRIVATE)
       
  1284 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
       
  1285 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
       
  1286            (dev_zero_fd = open("/dev/zero", O_RDWR), \
       
  1287             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
       
  1288             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
       
  1289 #endif
       
  1290 
       
  1291 #define DIRECT_MMAP(s)       CALL_MMAP(s)
       
  1292 #else
       
  1293 
       
  1294 /* Win32 MMAP via VirtualAlloc */
       
  1295 static void* win32mmap(size_t size) {
       
  1296   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
       
  1297   return (ptr != 0)? ptr: MFAIL;
       
  1298 }
       
  1299 
       
  1300 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
       
  1301 static void* win32direct_mmap(size_t size) {
       
  1302   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
       
  1303                            PAGE_READWRITE);
       
  1304   return (ptr != 0)? ptr: MFAIL;
       
  1305 }
       
  1306 
       
   1307 /* This function supports releasing coalesced segments */
       
  1308 static int win32munmap(void* ptr, size_t size) {
       
  1309   MEMORY_BASIC_INFORMATION minfo;
       
  1310   char* cptr = ptr;
       
  1311   while (size) {
       
  1312     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
       
  1313       return -1;
       
  1314     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
       
  1315         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
       
  1316       return -1;
       
  1317     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
       
  1318       return -1;
       
  1319     cptr += minfo.RegionSize;
       
  1320     size -= minfo.RegionSize;
       
  1321   }
       
  1322   return 0;
       
  1323 }
       
  1324 
       
  1325 #define CALL_MMAP(s)         win32mmap(s)
       
  1326 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
       
  1327 #define DIRECT_MMAP(s)       win32direct_mmap(s)
       
  1328 #endif
       
  1329 #endif
       
  1330 #endif
       
  1331 
       
  1332 #if HAVE_MMAP && HAVE_MREMAP
       
  1333 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
       
  1334 #else
       
  1335 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
       
  1336 #endif
       
  1337 
       
  1338 #if HAVE_MORECORE
       
  1339 #define CALL_MORECORE(S)     MORECORE(S)
       
  1340 #else
       
  1341 #define CALL_MORECORE(S)     MFAIL
       
  1342 #endif
       
  1343 
       
   1344 /* mstate bit set if contiguous morecore disabled or failed */
       
  1345 #define USE_NONCONTIGUOUS_BIT (4U)
       
  1346 
       
  1347 /* --------------------------- Lock preliminaries ------------------------ */
       
  1348 
       
  1349 #if USE_LOCKS
       
  1350 
       
  1351 /*
       
  1352   When locks are defined, there are up to two global locks:
       
  1353 
       
  1354   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
       
   1355     MORECORE.  In many cases sys_alloc requires two calls that should
       
  1356     not be interleaved with calls by other threads.  This does not
       
  1357     protect against direct calls to MORECORE by other threads not
       
   1358     using this lock, so there is still code to cope as best we can with
       
  1359     interference.
       
  1360 
       
  1361   * If using secure footers, magic_init_mutex ensures that mparams.magic is
       
  1362     initialized exactly once.
       
  1363 */
       
  1364 
       
  1365 #ifdef NOKIA_CHANGES
       
  1366 #define MLOCK_T RMutex
       
  1367 #if HAVE_MORECORE
       
  1368 static MLOCK_T morecore_mutex;
       
  1369 #endif
       
  1370 #if FOOTERS & !INSECURE
       
  1371 static MLOCK_T magic_init_mutex;
       
  1372 #endif
       
  1373 
       
  1374 static int WaitOnMutex(MLOCK_T* m)
       
  1375     {
       
  1376     if (!m->Handle())
       
  1377         {
       
  1378         TInt r = m->CreateLocal();
       
  1379         __ASSERT_ALWAYS(KErrNone==r,
       
  1380                     User::Panic(_L("MEMMAN"),r));
       
  1381         }
       
  1382     m->Wait();
       
  1383     return 0;
       
  1384     }
       
  1385 
       
  1386 static void ReleaseMutex(MLOCK_T* m)
       
  1387     {
       
  1388     if (!m->Handle())
       
  1389        {
       
  1390        TInt r = m->CreateLocal();
       
  1391        __ASSERT_ALWAYS(KErrNone==r,
       
  1392               User::Panic(_L("MEMMAN"),r));
       
  1393        return; 
       
  1394        }
       
  1395     m->Signal();
       
  1396     }
       
  1397 #define ACQUIRE_LOCK(l)      WaitOnMutex(l)
       
  1398 #define RELEASE_LOCK(l)      ReleaseMutex(l)
       
  1399 
       
  1400 #else
       
  1401 #ifndef WIN32
       
  1402 /* By default use posix locks */
       
  1403 #include <pthread.h>
       
  1404 #define MLOCK_T pthread_mutex_t
       
  1405 #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
       
  1406 #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
       
  1407 
       
  1408 #if HAVE_MORECORE
       
  1409 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
       
  1410 #endif
       
  1411 
       
  1412 #if FOOTERS & !INSECURE
       
  1413 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
       
  1414 #endif
       
  1415 
       
  1416 #else
       
  1417 /*
       
  1418    Because lock-protected regions have bounded times, and there
       
  1419    are no recursive lock calls, we can use simple spinlocks.
       
  1420 */
       
  1421 
       
  1422 #define MLOCK_T long
       
  1423 static int win32_acquire_lock (MLOCK_T *sl) {
       
  1424   for (;;) {
       
  1425 #ifdef InterlockedCompareExchangePointer
       
  1426     if (!InterlockedCompareExchange(sl, 1, 0))
       
  1427       return 0;
       
  1428 #else  /* Use older void* version */
       
  1429     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
       
  1430       return 0;
       
  1431 #endif
       
  1432     Sleep (0);
       
  1433   }
       
  1434 }
       
  1435 
       
  1436 static void win32_release_lock (MLOCK_T *sl) {
       
  1437   InterlockedExchange (sl, 0);
       
  1438 }
       
  1439 
       
  1440 #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
       
  1441 #define RELEASE_LOCK(l)      win32_release_lock(l)
       
  1442 #if HAVE_MORECORE
       
  1443 static MLOCK_T morecore_mutex;
       
  1444 #endif
       
  1445 #if FOOTERS & !INSECURE
       
  1446 static MLOCK_T magic_init_mutex;
       
  1447 #endif
       
  1448 #endif
       
  1449 #endif
       
  1450 
       
  1451 #define USE_LOCK_BIT               (2U)
       
  1452 #else
       
  1453 #define USE_LOCK_BIT               (0U)
       
  1454 #endif
       
  1455 
       
  1456 #if USE_LOCKS && HAVE_MORECORE
       
  1457 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
       
  1458 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
       
  1459 #else
       
  1460 #define ACQUIRE_MORECORE_LOCK()
       
  1461 #define RELEASE_MORECORE_LOCK()
       
  1462 #endif
       
  1463 
       
  1464 #if USE_LOCKS && FOOTERS && !INSECURE
       
  1465 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
       
  1466 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
       
  1467 #else
       
  1468 #define ACQUIRE_MAGIC_INIT_LOCK()
       
  1469 #define RELEASE_MAGIC_INIT_LOCK()
       
  1470 #endif
       
  1471 
       
  1472 /* -----------------------  Chunk representations ------------------------ */
       
  1473 
       
  1474 /*
       
  1475   (The following includes lightly edited explanations by Colin Plumb.)
       
  1476 
       
  1477   The malloc_chunk declaration below is misleading (but accurate and
       
  1478   necessary).  It declares a "view" into memory allowing access to
       
  1479   necessary fields at known offsets from a given base.
       
  1480 
       
  1481   Chunks of memory are maintained using a `boundary tag' method as
       
  1482   originally described by Knuth.  (See the paper by Paul Wilson
       
  1483   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
       
  1484   techniques.)  Sizes of free chunks are stored both in the front of
       
  1485   each chunk and at the end.  This makes consolidating fragmented
       
  1486   chunks into bigger chunks fast.  The head fields also hold bits
       
  1487   representing whether chunks are free or in use.
       
  1488 
       
  1489   Here are some pictures to make it clearer.  They are "exploded" to
       
  1490   show that the state of a chunk can be thought of as extending from
       
  1491   the high 31 bits of the head field of its header through the
       
  1492   prev_foot and PINUSE_BIT bit of the following chunk header.
       
  1493 
       
  1494   A chunk that's in use looks like:
       
  1495 
       
  1496    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1497            | Size of previous chunk (if P = 1)                             |
       
  1498            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1499          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
       
  1500          | Size of this chunk                                         1| +-+
       
  1501    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1502          |                                                               |
       
  1503          +-                                                             -+
       
  1504          |                                                               |
       
  1505          +-                                                             -+
       
  1506          |                                                               :
       
  1507          +-      size - sizeof(size_t) available payload bytes          -+
       
  1508          :                                                               |
       
  1509  chunk-> +-                                                             -+
       
  1510          |                                                               |
       
  1511          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1512        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       
  1513        | Size of next chunk (may or may not be in use)               | +-+
       
  1514  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1515 
       
  1516     And if it's free, it looks like this:
       
  1517 
       
  1518    chunk-> +-                                                             -+
       
  1519            | User payload (must be in use, or we would have merged!)       |
       
  1520            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1521          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
       
  1522          | Size of this chunk                                         0| +-+
       
  1523    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1524          | Next pointer                                                  |
       
  1525          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1526          | Prev pointer                                                  |
       
  1527          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1528          |                                                               :
       
  1529          +-      size - sizeof(struct chunk) unused bytes               -+
       
  1530          :                                                               |
       
  1531  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1532          | Size of this chunk                                            |
       
  1533          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1534        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       
  1535        | Size of next chunk (must be in use, or we would have merged)| +-+
       
  1536  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1537        |                                                               :
       
  1538        +- User payload                                                -+
       
  1539        :                                                               |
       
  1540        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1541                                                                      |0|
       
  1542                                                                      +-+
       
  1543   Note that since we always merge adjacent free chunks, the chunks
       
  1544   adjacent to a free chunk must be in use.
       
  1545 
       
  1546   Given a pointer to a chunk (which can be derived trivially from the
       
  1547   payload pointer) we can, in O(1) time, find out whether the adjacent
       
  1548   chunks are free, and if so, unlink them from the lists that they
       
  1549   are on and merge them with the current chunk.
       
  1550 
       
  1551   Chunks always begin on even word boundaries, so the mem portion
       
  1552   (which is returned to the user) is also on an even word boundary, and
       
  1553   thus at least double-word aligned.
       
  1554 
       
  1555   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
       
  1556   chunk size (which is always a multiple of two words), is an in-use
       
  1557   bit for the *previous* chunk.  If that bit is *clear*, then the
       
  1558   word before the current chunk size contains the previous chunk
       
  1559   size, and can be used to find the front of the previous chunk.
       
  1560   The very first chunk allocated always has this bit set, preventing
       
  1561   access to non-existent (or non-owned) memory. If pinuse is set for
       
  1562   any given chunk, then you CANNOT determine the size of the
       
  1563   previous chunk, and might even get a memory addressing fault when
       
  1564   trying to do so.
       
  1565 
       
  1566   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
       
  1567   the chunk size redundantly records whether the current chunk is
       
  1568   inuse. This redundancy enables usage checks within free and realloc,
       
  1569   and reduces indirection when freeing and consolidating chunks.
       
  1570 
       
  1571   Each freshly allocated chunk must have both cinuse and pinuse set.
       
  1572   That is, each allocated chunk borders either a previously allocated
       
  1573   and still in-use chunk, or the base of its memory arena. This is
       
  1574   ensured by making all allocations from the the `lowest' part of any
       
  1575   found chunk.  Further, no free chunk physically borders another one,
       
  1576   so each free chunk is known to be preceded and followed by either
       
  1577   inuse chunks or the ends of memory.
       
  1578 
       
  1579   Note that the `foot' of the current chunk is actually represented
       
  1580   as the prev_foot of the NEXT chunk. This makes it easier to
       
  1581   deal with alignments etc but can be very confusing when trying
       
  1582   to extend or adapt this code.
       
  1583 
       
  1584   The exceptions to all this are
       
  1585 
       
  1586      1. The special chunk `top' is the top-most available chunk (i.e.,
       
  1587         the one bordering the end of available memory). It is treated
       
  1588         specially.  Top is never included in any bin, is used only if
       
  1589         no other chunk is available, and is released back to the
       
  1590         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
       
  1591         the top chunk is treated as larger (and thus less well
       
  1592         fitting) than any other available chunk.  The top chunk
       
  1593         doesn't update its trailing size field since there is no next
       
  1594         contiguous chunk that would have to index off it. However,
       
  1595         space is still allocated for it (TOP_FOOT_SIZE) to enable
       
  1596         separation or merging when space is extended.
       
  1597 
       
   1598      2. Chunks allocated via mmap, which have the lowest-order bit
       
  1599         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
       
  1600         PINUSE_BIT in their head fields.  Because they are allocated
       
  1601         one-by-one, each must carry its own prev_foot field, which is
       
  1602         also used to hold the offset this chunk has within its mmapped
       
  1603         region, which is needed to preserve alignment. Each mmapped
       
  1604         chunk is trailed by the first two fields of a fake next-chunk
       
  1605         for sake of usage checks.
       
  1606 
       
  1607 */
       
  1608 
       
  1609 struct malloc_chunk {
       
  1610   size_t               prev_foot;  /* Size of previous chunk (if free).  */
       
  1611   size_t               head;       /* Size and inuse bits. */
       
  1612   struct malloc_chunk* fd;         /* double links -- used only if free. */
       
  1613   struct malloc_chunk* bk;
       
  1614 };
       
  1615 
       
  1616 typedef struct malloc_chunk  mchunk;
       
  1617 typedef struct malloc_chunk* mchunkptr;
       
  1618 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
       
  1619 typedef unsigned int bindex_t;         /* Described below */
       
  1620 typedef unsigned int binmap_t;         /* Described below */
       
  1621 typedef unsigned int flag_t;           /* The type of various bit flag sets */
       
  1622 
       
  1623 /* ------------------- Chunks sizes and alignments ----------------------- */
       
  1624 
       
  1625 #define MCHUNK_SIZE         (sizeof(mchunk))
       
  1626 
       
  1627 #if FOOTERS
       
  1628 #define CHUNK_OVERHEAD      (SIZE_T_SIZE*2U)
       
  1629 #else
       
  1630 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
       
  1631 #endif
       
  1632 
       
  1633 /* MMapped chunks need a second word of overhead ... */
       
  1634 #define MMAP_CHUNK_OVERHEAD (SIZE_T_SIZE*2U)
       
  1635 /* ... and additional padding for fake next-chunk at foot */
       
  1636 #define MMAP_FOOT_PAD       (SIZE_T_SIZE*4U)
       
  1637 
       
  1638 /* The smallest size we can malloc is an aligned minimal chunk */
       
  1639 #define MIN_CHUNK_SIZE\
       
  1640   (size_t)(((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK))
       
  1641 
       
  1642 /* conversion from malloc headers to user pointers, and back */
       
  1643 #define chunk2mem(p)        ((void*)((char*)(p)       + (SIZE_T_SIZE*2U)))
       
  1644 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - (SIZE_T_SIZE*2U)))
       
  1645 /* chunk associated with aligned address A */
       
  1646 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
       
  1647 
       
  1648 /* Bounds on request (not chunk) sizes. */
       
  1649 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
       
  1650 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - 1U)
       
  1651 
       
  1652 /* pad request bytes into a usable size */
       
  1653 #define pad_request(req) \
       
  1654    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
       
  1655 
       
  1656 /* pad request, checking for minimum (but not maximum) */
       
  1657 #define request2size(req) \
       
  1658   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
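/*
  Worked example (illustrative, assuming 4-byte size_t, 8-byte alignment and
  FOOTERS==0, so CHUNK_OVERHEAD is 4 and MIN_CHUNK_SIZE is 16):

    request2size(10) == 16                      // below MIN_REQUEST, rounded up
    request2size(12) == (12 + 4 + 7) & ~7 == 16
    request2size(24) == (24 + 4 + 7) & ~7 == 32
*/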
       
  1659 
       
  1660 
       
  1661 /* ------------------ Operations on head and foot fields ----------------- */
       
  1662 
       
  1663 /*
       
   1664   The head field of a chunk is or'ed with PINUSE_BIT when the previous
        
   1665   adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
       
  1666   use. If the chunk was obtained with mmap, the prev_foot field has
       
  1667   IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
       
  1668   mmapped region to the base of the chunk.
       
  1669 */
       
  1670 
       
  1671 #define PINUSE_BIT          (1U)
       
  1672 #define CINUSE_BIT          (2U)
       
  1673 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
       
  1674 
       
  1675 /* Head value for fenceposts */
       
  1676 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
       
  1677 
       
  1678 /* extraction of fields from head words */
       
  1679 #define cinuse(p)           ((p)->head & CINUSE_BIT)
       
  1680 #define pinuse(p)           ((p)->head & PINUSE_BIT)
       
  1681 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
       
  1682 
       
  1683 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
       
  1684 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
       
  1685 
       
  1686 /* Treat space at ptr +/- offset as a chunk */
       
  1687 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
       
  1688 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
       
  1689 
       
  1690 /* Ptr to next or previous physical malloc_chunk. */
       
  1691 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
       
  1692 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
       
  1693 
       
  1694 /* extract next chunk's pinuse bit */
       
  1695 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
       
  1696 
       
  1697 /* Get/set size at footer */
       
  1698 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
       
  1699 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
       
  1700 
       
  1701 /* Set size, pinuse bit, and foot */
       
  1702 #define set_size_and_pinuse_of_free_chunk(p, s)\
       
  1703   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
       
  1704 
       
  1705 /* Set size, pinuse bit, foot, and clear next pinuse */
       
  1706 #define set_free_with_pinuse(p, s, n)\
       
  1707   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
       
  1708 
       
  1709 #define is_mmapped(p)\
       
  1710   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
       
  1711 
       
  1712 /* Get the internal overhead associated with chunk p */
       
  1713 #define overhead_for(p)\
       
  1714  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
       
  1715 
       
  1716 /* Return true if malloced space is not necessarily cleared */
       
  1717 #if MMAP_CLEARS
       
  1718 #define calloc_must_clear(p) (!is_mmapped(p))
       
  1719 #else
       
  1720 #define calloc_must_clear(p) (1)
       
  1721 #endif
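/*
  Illustrative sketch (not part of the original sources) of how the macros
  above compose for an in-use, non-mmapped chunk whose user pointer is mem:

    mchunkptr p   = mem2chunk(mem);      // step back over the two header words
    size_t    psz = chunksize(p);        // head with INUSE_BITS masked off
    mchunkptr nxt = next_chunk(p);       // physically adjacent chunk
    assert(chunk2mem(p) == mem);
    assert((char*)nxt == (char*)p + psz);
    assert(cinuse(p));                   // this chunk is marked in use
*/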
       
  1722 
       
  1723 /* ---------------------- Overlaid data structures ----------------------- */
       
  1724 
       
  1725 /*
       
  1726   When chunks are not in use, they are treated as nodes of either
       
  1727   lists or trees.
       
  1728 
       
  1729   "Small"  chunks are stored in circular doubly-linked lists, and look
       
  1730   like this:
       
  1731 
       
  1732     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1733             |             Size of previous chunk                            |
       
  1734             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1735     `head:' |             Size of chunk, in bytes                         |P|
       
  1736       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1737             |             Forward pointer to next chunk in list             |
       
  1738             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1739             |             Back pointer to previous chunk in list            |
       
  1740             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1741             |             Unused space (may be 0 bytes long)                .
       
  1742             .                                                               .
       
  1743             .                                                               |
       
  1744 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1745     `foot:' |             Size of chunk, in bytes                           |
       
  1746             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1747 
       
  1748   Larger chunks are kept in a form of bitwise digital trees (aka
       
  1749   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
       
  1750   free chunks greater than 256 bytes, their size doesn't impose any
       
  1751   constraints on user chunk sizes.  Each node looks like:
       
  1752 
       
  1753     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1754             |             Size of previous chunk                            |
       
  1755             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1756     `head:' |             Size of chunk, in bytes                         |P|
       
  1757       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1758             |             Forward pointer to next chunk of same size        |
       
  1759             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1760             |             Back pointer to previous chunk of same size       |
       
  1761             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1762             |             Pointer to left child (child[0])                  |
       
  1763             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1764             |             Pointer to right child (child[1])                 |
       
  1765             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1766             |             Pointer to parent                                 |
       
  1767             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1768             |             bin index of this chunk                           |
       
  1769             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1770             |             Unused space                                      .
       
  1771             .                                                               |
       
  1772 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1773     `foot:' |             Size of chunk, in bytes                           |
       
  1774             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       
  1775 
       
  1776   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
       
  1777   of the same size are arranged in a circularly-linked list, with only
       
  1778   the oldest chunk (the next to be used, in our FIFO ordering)
       
  1779   actually in the tree.  (Tree members are distinguished by a non-null
       
   1780   parent pointer.)  If a chunk with the same size as an existing node
       
  1781   is inserted, it is linked off the existing node using pointers that
       
  1782   work in the same way as fd/bk pointers of small chunks.
       
  1783 
       
  1784   Each tree contains a power of 2 sized range of chunk sizes (the
       
   1785   smallest is 0x100 <= x < 0x180), which is divided in half at each
       
  1786   tree level, with the chunks in the smaller half of the range (0x100
       
   1787   <= x < 0x140 for the top node) in the left subtree and the larger
       
  1788   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
       
  1789   done by inspecting individual bits.
       
  1790 
       
  1791   Using these rules, each node's left subtree contains all smaller
       
  1792   sizes than its right subtree.  However, the node at the root of each
       
  1793   subtree has no particular ordering relationship to either.  (The
       
  1794   dividing line between the subtree sizes is based on trie relation.)
       
  1795   If we remove the last chunk of a given size from the interior of the
       
  1796   tree, we need to replace it with a leaf node.  The tree ordering
       
  1797   rules permit a node to be replaced by any leaf below it.
       
  1798 
       
  1799   The smallest chunk in a tree (a common operation in a best-fit
       
  1800   allocator) can be found by walking a path to the leftmost leaf in
       
  1801   the tree.  Unlike a usual binary tree, where we follow left child
       
  1802   pointers until we reach a null, here we follow the right child
       
  1803   pointer any time the left one is null, until we reach a leaf with
       
  1804   both child pointers null. The smallest chunk in the tree will be
       
  1805   somewhere along that path.
       
  1806 
       
  1807   The worst case number of steps to add, find, or remove a node is
       
  1808   bounded by the number of bits differentiating chunks within
       
  1809   bins. Under current bin calculations, this ranges from 6 up to 21
       
  1810   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
       
  1811   is of course much better.
       
  1812 */
       
  1813 
       
  1814 struct malloc_tree_chunk {
       
  1815   /* The first four fields must be compatible with malloc_chunk */
       
  1816   size_t                    prev_foot;
       
  1817   size_t                    head;
       
  1818   struct malloc_tree_chunk* fd;
       
  1819   struct malloc_tree_chunk* bk;
       
  1820 
       
  1821   struct malloc_tree_chunk* child[2];
       
  1822   struct malloc_tree_chunk* parent;
       
  1823   bindex_t                  index;
       
  1824 };
       
  1825 
       
  1826 typedef struct malloc_tree_chunk  tchunk;
       
  1827 typedef struct malloc_tree_chunk* tchunkptr;
       
  1828 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
       
  1829 
       
  1830 /* A little helper macro for trees */
       
  1831 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
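/*
  Illustrative sketch (not part of the original sources): following the walk
  described above, leftmost_child can be used to descend toward the smallest
  sizes in a tree bin; the smallest chunk lies somewhere along this path.

    static tchunkptr leftmost_leaf(tchunkptr t) {   // hypothetical helper
      while (leftmost_child(t) != 0)
        t = leftmost_child(t);
      return t;                                     // leaf: both children null
    }
*/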
       
  1832 
       
  1833 /* ----------------------------- Segments -------------------------------- */
       
  1834 
       
  1835 /*
       
  1836   Each malloc space may include non-contiguous segments, held in a
       
  1837   list headed by an embedded malloc_segment record representing the
       
  1838   initial space. Segments also include flags holding properties of
       
  1839   the space. Large chunks that are directly allocated by mmap are not
       
  1840   included in this list. They are instead independently created and
       
  1841   destroyed without otherwise keeping track of them.
       
  1842 
       
  1843   Segment management mainly comes into play for spaces allocated by
       
  1844   MMAP.  Any call to MMAP might or might not return memory that is
       
  1845   adjacent to an existing segment.  MORECORE normally contiguously
       
  1846   extends the current space, so this space is almost always adjacent,
       
  1847   which is simpler and faster to deal with. (This is why MORECORE is
       
  1848   used preferentially to MMAP when both are available -- see
       
  1849   sys_alloc.)  When allocating using MMAP, we don't use any of the
       
  1850   hinting mechanisms (inconsistently) supported in various
       
  1851   implementations of unix mmap, or distinguish reserving from
       
  1852   committing memory. Instead, we just ask for space, and exploit
       
  1853   contiguity when we get it.  It is probably possible to do
       
  1854   better than this on some systems, but no general scheme seems
       
  1855   to be significantly better.
       
  1856 
       
  1857   Management entails a simpler variant of the consolidation scheme
       
  1858   used for chunks to reduce fragmentation -- new adjacent memory is
       
  1859   normally prepended or appended to an existing segment. However,
       
  1860   there are limitations compared to chunk consolidation that mostly
       
  1861   reflect the fact that segment processing is relatively infrequent
       
  1862   (occurring only when getting memory from system) and that we
       
  1863   don't expect to have huge numbers of segments:
       
  1864 
       
  1865   * Segments are not indexed, so traversal requires linear scans.  (It
       
  1866     would be possible to index these, but is not worth the extra
       
  1867     overhead and complexity for most programs on most platforms.)
       
  1868   * New segments are only appended to old ones when holding top-most
       
  1869     memory; if they cannot be prepended to others, they are held in
       
  1870     different segments.
       
  1871 
       
  1872   Except for the initial segment of an mstate (which holds its own
       
  1873   embedded segment record), segment records for one segment are
       
  1874   kept in a different segment (the one in effect when the new
       
  1875   segment was created).  So unmapping segments is delicate.
       
  1876 */
       
  1877 
       
  1878 struct malloc_segment {
       
  1879   char*        base;             /* base address */
       
  1880   size_t       size;             /* allocated size */
       
  1881   struct malloc_segment* next;   /* ptr to next segment */
       
  1882   flag_t       sflags;           /* mmap flag */
       
  1883 };
       
  1884 
       
  1885 typedef struct malloc_segment  msegment;
       
  1886 typedef struct malloc_segment* msegmentptr;
       
  1887 
       
  1888 /* ---------------------------- malloc_state ----------------------------- */
       
  1889 
       
  1890 /*
       
  1891    A malloc_state holds all of the bookkeeping for a space.
       
  1892    The main fields are:
       
  1893 
       
  1894   Top
       
  1895     The topmost chunk of the currently active segment. Its size is
       
  1896     cached in topsize.  The actual size of topmost space is
       
  1897     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
       
  1898     fenceposts and segment records if necessary when getting more
       
  1899     space from the system.  The size at which to autotrim top is
       
  1900     cached from mparams in trim_check, except that it is disabled if
       
  1901     an autotrim fails.
       
  1902 
       
  1903   Designated victim (dv)
       
  1904     This is the preferred chunk for servicing small requests that
       
  1905     don't have exact fits.  It is normally the chunk split off most
       
  1906     recently to service another small request.  Its size is cached in
       
  1907     dvsize. The link fields of this chunk are not maintained since it
       
  1908     is not kept in a bin.
       
  1909 
       
  1910   SmallBins
       
  1911     An array of bin headers for free chunks.  These bins hold chunks
       
  1912     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
       
  1913     chunks of all the same size, spaced 8 bytes apart.  To simplify
       
  1914     use in double-linked lists, each bin header acts as a malloc_chunk
       
  1915     pointing to the real first node, if it exists (else pointing to
       
  1916     itself).  This avoids special-casing for headers.  But to avoid
       
  1917     waste, we allocate only the fd/bk pointers of bins, and then use
       
  1918     repositioning tricks to treat these as the fields of a chunk.
       
  1919 
       
  1920   TreeBins
       
  1921     Treebins are pointers to the roots of trees holding a range of
       
  1922     sizes. There are 2 equally spaced treebins for each power of two
       
   1923     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
       
  1924     larger.
       
  1925 
       
  1926   Bin maps
       
  1927     There is one bit map for small bins ("smallmap") and one for
       
   1928     treebins ("treemap").  Each bin sets its bit when non-empty, and
       
  1929     clears the bit when empty.  Bit operations are then used to avoid
       
  1930     bin-by-bin searching -- nearly all "search" is done without ever
       
  1931     looking at bins that won't be selected.  The bit maps
       
   1932     conservatively use 32 bits per map word, even on a 64-bit system.
       
  1933     For a good description of some of the bit-based techniques used
       
  1934     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
       
  1935     supplement at http://hackersdelight.org/). Many of these are
       
  1936     intended to reduce the branchiness of paths through malloc etc, as
       
  1937     well as to reduce the number of memory locations read or written.
       
  1938 
       
  1939   Segments
       
  1940     A list of segments headed by an embedded malloc_segment record
       
  1941     representing the initial space.
       
  1942 
       
  1943   Address check support
       
  1944     The least_addr field is the least address ever obtained from
       
  1945     MORECORE or MMAP. Attempted frees and reallocs of any address less
       
  1946     than this are trapped (unless INSECURE is defined).
       
  1947 
       
  1948   Magic tag
       
  1949     A cross-check field that should always hold same value as mparams.magic.
       
  1950 
       
  1951   Flags
       
  1952     Bits recording whether to use MMAP, locks, or contiguous MORECORE
       
  1953 
       
  1954   Statistics
       
  1955     Each space keeps track of current and maximum system memory
       
  1956     obtained via MORECORE or MMAP.
       
  1957 
       
  1958   Locking
       
  1959     If USE_LOCKS is defined, the "mutex" lock is acquired and released
       
  1960     around every public call using this mspace.
       
  1961 */
       
  1962 
       
  1963 /* Bin types, widths and sizes */
       
  1964 #define NSMALLBINS        (32U)
       
  1965 #define NTREEBINS         (32U)
       
  1966 #define SMALLBIN_SHIFT    (3U)
       
  1967 #define SMALLBIN_WIDTH    (1U << SMALLBIN_SHIFT)
       
  1968 #define TREEBIN_SHIFT     (8U)
       
  1969 #define MIN_LARGE_SIZE    (1U << TREEBIN_SHIFT)
       
  1970 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - 1)
       
  1971 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
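/*
  Worked example (illustrative): with SMALLBIN_SHIFT == 3, small bins are
  spaced SMALLBIN_WIDTH == 8 bytes apart and hold chunks smaller than
  MIN_LARGE_SIZE == 256 bytes; a free chunk of size 40 belongs to small bin
  index 40 >> 3 == 5, while anything of size 256 or more goes to a treebin.
  (Shown only for orientation; the allocator's own index macros appear later
  in this file.)
*/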
       
  1972 
       
  1973 struct malloc_state {
       
  1974   binmap_t   smallmap;
       
  1975   binmap_t   treemap;
       
  1976   size_t     dvsize;
       
  1977   size_t     topsize;
       
  1978   char*      least_addr;
       
  1979   mchunkptr  dv;
       
  1980   mchunkptr  top;
       
  1981   size_t     trim_check;
       
  1982   size_t     magic;
       
  1983   mchunkptr  smallbins[(NSMALLBINS+1)*2];
       
  1984   tbinptr    treebins[NTREEBINS];
       
  1985   size_t     footprint;
       
  1986   size_t     max_footprint;
       
  1987   flag_t     mflags;
       
  1988 #if USE_LOCKS
       
  1989   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
       
  1990 #endif
       
  1991   msegment   seg;
       
  1992 };
       
  1993 
       
  1994 typedef struct malloc_state*    mstate;
       
  1995 
       
  1996 /* ------------- Global malloc_state and malloc_params ------------------- */
       
  1997 
       
  1998 /*
       
  1999   malloc_params holds global properties, including those that can be
       
  2000   dynamically set using mallopt. There is a single instance, mparams,
       
  2001   initialized in init_mparams.
       
  2002 */
       
  2003 
       
  2004 struct malloc_params {
       
  2005   size_t magic;
       
  2006   size_t page_size;
       
  2007   size_t granularity;
       
  2008   size_t mmap_threshold;
       
  2009   size_t trim_threshold;
       
  2010   flag_t default_mflags;
       
  2011 };
       
  2012 
       
  2013 static struct malloc_params mparams;
       
  2014 
       
  2015 /* The global malloc_state used for all non-"mspace" calls */
       
  2016 static struct malloc_state _gm_;
       
  2017 #define gm                 (&_gm_)
       
  2018 #define is_global(M)       ((M) == &_gm_)
       
  2019 #define is_initialized(M)  ((M)->top != 0)
       
  2020 
       
  2021 /* -------------------------- system alloc setup ------------------------- */
       
  2022 
       
  2023 /* Operations on mflags */
       
  2024 
       
  2025 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
       
  2026 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
       
  2027 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
       
  2028 
       
  2029 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
       
  2030 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
       
  2031 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
       
  2032 
       
  2033 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
       
  2034 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
       
  2035 
       
  2036 #define set_lock(M,L)\
       
  2037  ((M)->mflags = (L)?\
       
  2038   ((M)->mflags | USE_LOCK_BIT) :\
       
  2039   ((M)->mflags & ~USE_LOCK_BIT))
       
  2040 
       
  2041 /* page-align a size */
       
  2042 #define page_align(S)\
       
  2043  (((S) + (mparams.page_size)) & ~(mparams.page_size - 1))
       
  2044 
       
  2045 /* granularity-align a size */
       
  2046 #define granularity_align(S)\
       
  2047   (((S) + (mparams.granularity)) & ~(mparams.granularity - 1))
       
  2048 
       
  2049 #define is_page_aligned(S)\
       
  2050    (((size_t)(S) & (mparams.page_size - 1)) == 0)
       
  2051 #define is_granularity_aligned(S)\
       
  2052    (((size_t)(S) & (mparams.granularity - 1)) == 0)
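/*
  Worked example (illustrative, assuming mparams.page_size == 4096):

    page_align(100)  == (100  + 4096) & ~4095 == 4096
    page_align(5000) == (5000 + 4096) & ~4095 == 8192
    is_page_aligned(4096) is true; is_page_aligned(100) is false.
*/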
       
  2053 
       
  2054 /*  True if segment S holds address A */
       
  2055 #define segment_holds(S, A)\
       
  2056   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
       
  2057 
       
  2058 /* Return segment holding given address */
       
  2059 static msegmentptr segment_holding(mstate m, char* addr) {
       
  2060   msegmentptr sp = &m->seg;
       
  2061   for (;;) {
       
  2062     if (addr >= sp->base && addr < sp->base + sp->size)
       
  2063       return sp;
       
  2064     if ((sp = sp->next) == 0)
       
  2065       return 0;
       
  2066   }
       
  2067 }
       
  2068 
       
  2069 /* Return true if segment contains a segment link */
       
  2070 static int has_segment_link(mstate m, msegmentptr ss) {
       
  2071   msegmentptr sp = &m->seg;
       
  2072   for (;;) {
       
  2073     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
       
  2074       return 1;
       
  2075     if ((sp = sp->next) == 0)
       
  2076       return 0;
       
  2077   }
       
  2078 }
       
  2079 
       
  2080 #ifndef MORECORE_CANNOT_TRIM
       
  2081 #define should_trim(M,s)  ((s) > (M)->trim_check)
       
  2082 #else
       
  2083 #define should_trim(M,s)  (0)
       
  2084 #endif
       
  2085 
       
  2086 /*
       
  2087   TOP_FOOT_SIZE is padding at the end of a segment, including space
       
  2088   that may be needed to place segment records and fenceposts when new
       
  2089   noncontiguous segments are added.
       
  2090 */
       
  2091 #define TOP_FOOT_SIZE\
       
  2092   (pad_request(MIN_CHUNK_SIZE + sizeof(struct malloc_segment)))
       
  2093 
       
  2094 
       
  2095 /* -------------------------------  Hooks -------------------------------- */
       
  2096 
       
  2097 /*
       
  2098   PREACTION should be defined to return 0 on success, and nonzero on
       
  2099   failure. If you are not using locking, you can redefine these to do
       
  2100   anything you like.
       
  2101 */
       
  2102 
       
  2103 #if USE_LOCKS
       
  2104 
       
  2105 #define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
       
  2106 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
       
  2107 #else
       
  2108 
       
  2109 #ifndef PREACTION
       
  2110 #define PREACTION(M) (0)
       
  2111 #endif
       
  2112 
       
  2113 #ifndef POSTACTION
       
  2114 #define POSTACTION(M)
       
  2115 #endif
       
  2116 
       
  2117 #endif
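
/*
  The public entry points later in this file wrap their bodies in the
  PREACTION/POSTACTION pair.  A standalone sketch of that pattern with a
  dummy lock (the names pre/post/guarded_op are illustrative only):
*/
#include <stdio.h>

static int locked;                        /* stand-in for an MLOCK_T mutex  */
static int  pre(void)  { if (locked) return 1; locked = 1; return 0; }
static void post(void) { locked = 0; }

static void guarded_op(void) {
  if (!pre()) {                           /* PREACTION: 0 means "go ahead"  */
    puts("critical section");             /* allocator work happens here    */
    post();                               /* POSTACTION releases the lock   */
  }
}

int main(void) { guarded_op(); return 0; }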
       
  2118 
       
  2119 /*
       
  2120   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
       
  2121   USAGE_ERROR_ACTION is triggered on detected bad frees and
       
  2122   reallocs. The argument p is an address that might have triggered the
       
  2123   fault. It is ignored by the two predefined actions, but might be
       
  2124   useful in custom actions that try to help diagnose errors.
       
  2125 */
       
  2126 
       
  2127 #if PROCEED_ON_ERROR
       
  2128 
       
  2129 /* A count of the number of corruption errors causing resets */
       
  2130 int malloc_corruption_error_count;
       
  2131 
       
  2132 /* default corruption action */
       
  2133 static void reset_on_error(mstate m);
       
  2134 
       
  2135 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
       
  2136 #define USAGE_ERROR_ACTION(m, p)
       
  2137 
       
  2138 #else
       
  2139 
       
  2140 #ifndef CORRUPTION_ERROR_ACTION
       
  2141 #define CORRUPTION_ERROR_ACTION(m) ABORT
       
  2142 #endif
       
  2143 
       
  2144 #ifndef USAGE_ERROR_ACTION
       
  2145 #define USAGE_ERROR_ACTION(m,p) ABORT
       
  2146 #endif
       
  2147 
       
  2148 #endif
       
  2149 
       
  2150 /* -------------------------- Debugging setup ---------------------------- */
       
  2151 
       
  2152 #if ! DEBUG
       
  2153 
       
  2154 #define check_free_chunk(M,P)
       
  2155 #define check_inuse_chunk(M,P)
       
  2156 #define check_malloced_chunk(M,P,N)
       
  2157 #define check_mmapped_chunk(M,P)
       
  2158 #define check_malloc_state(M)
       
  2159 #define check_top_chunk(M,P)
       
  2160 
       
  2161 #else
       
  2162 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
       
  2163 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
       
  2164 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
       
  2165 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
       
  2166 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
       
  2167 #define check_malloc_state(M)       do_check_malloc_state(M)
       
  2168 
       
  2169 static void   do_check_any_chunk(mstate m, mchunkptr p);
       
  2170 static void   do_check_top_chunk(mstate m, mchunkptr p);
       
  2171 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
       
  2172 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
       
  2173 static void   do_check_free_chunk(mstate m, mchunkptr p);
       
  2174 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
       
  2175 static void   do_check_tree(mstate m, tchunkptr t);
       
  2176 static void   do_check_treebin(mstate m, bindex_t i);
       
  2177 static void   do_check_smallbin(mstate m, bindex_t i);
       
  2178 static void   do_check_malloc_state(mstate m);
       
  2179 static int    bin_find(mstate m, mchunkptr x);
       
  2180 static size_t traverse_and_check(mstate m);
       
  2181 #endif
       
  2182 
       
  2183 /* ---------------------------- Indexing Bins ---------------------------- */
       
  2184 
       
  2185 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
       
  2186 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
       
  2187 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
       
  2188 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
       
  2189 
       
  2190 /* addressing by index. See above about smallbin repositioning */
       
  2191 #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
       
  2192 #define treebin_at(M,i)     (&((M)->treebins[i]))
       
  2193 
       
  2194 /* assign tree index for size S to variable I */
       
  2195 #if defined(__GNUC__) && defined(i386)
       
  2196 #define compute_tree_index(S, I)\
       
  2197 {\
       
  2198   size_t X = S >> TREEBIN_SHIFT;\
       
  2199   if (X == 0)\
       
  2200     I = 0;\
       
  2201   else if (X > 0xFFFF)\
       
  2202     I = NTREEBINS-1;\
       
  2203   else {\
       
  2204     unsigned int K;\
       
  2205     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
       
  2206     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
       
  2207   }\
       
  2208 }
       
  2209 #else
       
  2210 #define compute_tree_index(S, I)\
       
  2211 {\
       
  2212   size_t X = S >> TREEBIN_SHIFT;\
       
  2213   if (X == 0)\
       
  2214     I = 0;\
       
  2215   else if (X > 0xFFFF)\
       
  2216     I = NTREEBINS-1;\
       
  2217   else {\
       
  2218     unsigned int Y = (unsigned int)X;\
       
  2219     unsigned int N = ((Y - 0x100) >> 16) & 8;\
       
  2220     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
       
  2221     N += K;\
       
  2222     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
       
  2223     K = 14 - N + ((Y <<= K) >> 15);\
       
  2224     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
       
  2225   }\
       
  2226 }
       
  2227 #endif
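
/*
  A portable, loop-based reference for the two compute_tree_index variants
  above, as a standalone sketch assuming the defaults used in this file
  (TREEBIN_SHIFT == 8, NTREEBINS == 32).  Sizes double per pair of bins;
  the extra low bit selects the upper or lower half of each range.
*/
#include <stdio.h>
#include <stddef.h>

static unsigned tree_index_ref(size_t s) {
  size_t x = s >> 8;                      /* s >> TREEBIN_SHIFT             */
  unsigned k = 0;
  if (x == 0)     return 0;
  if (x > 0xFFFF) return 31;              /* NTREEBINS - 1                  */
  while ((x >> (k + 1)) != 0)
    ++k;                                  /* k = floor(log2(x))             */
  return (k << 1) + (unsigned)((s >> (k + 8 - 1)) & 1);
}

int main(void) {
  printf("%u %u %u\n", tree_index_ref(256),   /* 0 */
                       tree_index_ref(384),   /* 1 */
                       tree_index_ref(512));  /* 2 */
  return 0;
}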
       
  2228 
       
  2229 /* Bit representing maximum resolved size in a treebin at i */
       
  2230 #define bit_for_tree_index(i) \
       
  2231    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
       
  2232 
       
  2233 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
       
  2234 #define leftshift_for_tree_index(i) \
       
  2235    ((i == NTREEBINS-1)? 0 : \
       
  2236     ((SIZE_T_BITSIZE-1) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
       
  2237 
       
  2238 /* The size of the smallest chunk held in bin with index i */
       
  2239 #define minsize_for_tree_index(i) \
       
  2240    (((size_t)(1U) << (((i) >> 1) + TREEBIN_SHIFT)) |  \
       
  2241    (((size_t)((i) & 1U)) << (((i) >> 1U) + TREEBIN_SHIFT - 1)))
       
  2242 
       
  2243 
       
  2244 /* ------------------------ Operations on bin maps ----------------------- */
       
  2245 
       
  2246 /* bit corresponding to given index */
       
  2247 #define idx2bit(i)              ((binmap_t)(1) << (i))
       
  2248 
       
  2249 /* Mark/Clear bits with given index */
       
  2250 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
       
  2251 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
       
  2252 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
       
  2253 
       
  2254 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
       
  2255 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
       
  2256 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
       
  2257 
       
  2258 /* index corresponding to given bit */
       
  2259 
       
  2260 #if defined(__GNUC__) && defined(i386)
       
  2261 #define compute_bit2idx(X, I)\
       
  2262 {\
       
  2263   unsigned int J;\
       
  2264   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
       
  2265   I = (bindex_t)J;\
       
  2266 }
       
  2267 
       
  2268 #else
       
  2269 #if  USE_BUILTIN_FFS
       
  2270 #define compute_bit2idx(X, I) I = ffs(X)-1
       
  2271 
       
  2272 #else
       
  2273 #define compute_bit2idx(X, I)\
       
  2274 {\
       
  2275   unsigned int Y = X - 1;\
       
  2276   unsigned int K = Y >> (16-4) & 16;\
       
  2277   unsigned int N = K;        Y >>= K;\
       
  2278   N += K = Y >> (8-3) &  8;  Y >>= K;\
       
  2279   N += K = Y >> (4-2) &  4;  Y >>= K;\
       
  2280   N += K = Y >> (2-1) &  2;  Y >>= K;\
       
  2281   N += K = Y >> (1-0) &  1;  Y >>= K;\
       
  2282   I = (bindex_t)(N + Y);\
       
  2283 }
       
  2284 #endif
       
  2285 #endif
       
  2286 
       
  2287 /* isolate the least set bit of a bitmap */
       
  2288 #define least_bit(x)         ((x) & -(x))
       
  2289 
       
  2290 /* mask with all bits to left of least bit of x on */
       
  2291 #define left_bits(x)         ((x<<1) | -(x<<1))
       
  2292 
       
  2293 /* mask with all bits to left of or equal to least bit of x on */
       
  2294 #define same_or_left_bits(x) ((x) | -(x))
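
/*
  How the bitmap helpers above are combined when searching for a usable bin:
  mask off bins below the request with same_or_left_bits, isolate the first
  candidate with least_bit, and convert the bit back to an index.  A small
  standalone sketch (bit2idx is a portable stand-in for compute_bit2idx):
*/
#include <stdio.h>

typedef unsigned int binmap_t;

static unsigned bit2idx(binmap_t x) {
  unsigned i = 0;
  while ((x & 1u) == 0) { x >>= 1; ++i; }
  return i;
}

int main(void) {
  binmap_t map  = (1u << 3) | (1u << 9);               /* bins 3, 9 occupied */
  unsigned want = 5;                                   /* need index >= 5    */
  binmap_t bit  = (binmap_t)1 << want;                 /* idx2bit(want)      */
  binmap_t eligible = map & (bit | (binmap_t)(0u - bit));      /* >= want    */
  binmap_t least    = eligible & (binmap_t)(0u - eligible);    /* least_bit  */
  printf("first usable bin: %u\n", bit2idx(least));    /* prints 9           */
  return 0;
}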
       
  2295 
       
  2296 
       
  2297 /* ----------------------- Runtime Check Support ------------------------- */
       
  2298 
       
  2299 /*
       
  2300   For security, the main invariant is that malloc/free/etc never
       
  2301   writes to a static address other than malloc_state, unless static
       
  2302   malloc_state itself has been corrupted, which cannot occur via
       
  2303   malloc (because of these checks). In essence this means that we
       
  2304   believe all pointers, sizes, maps etc held in malloc_state, but
       
  2305   check all of those linked or offset from other embedded data
       
  2306   structures.  These checks are interspersed with main code in a way
       
  2307   that tends to minimize their run-time cost.
       
  2308 
       
  2309   When FOOTERS is defined, in addition to range checking, we also
       
  2310   verify footer fields of inuse chunks, which can be used to guarantee
       
  2311   that the mstate controlling malloc/free is intact.  This is a
       
  2312   streamlined version of the approach described by William Robertson
       
  2313   et al in "Run-time Detection of Heap-based Overflows" LISA'03
       
  2314   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
       
  2315   of an inuse chunk holds the xor of its mstate and a random seed,
       
  2316   that is checked upon calls to free() and realloc().  This is
       
  2317   (probabilistically) unguessable from outside the program, but can be
       
  2318   computed by any code successfully malloc'ing any chunk, so does not
       
  2319   itself provide protection against code that has already broken
       
  2320   security through some other means.  Unlike Robertson et al, we
       
  2321   always dynamically check addresses of all offset chunks (previous,
       
  2322   next, etc). This turns out to be cheaper than relying on hashes.
       
  2323 */
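
/*
  A minimal standalone illustration of the FOOTERS scheme described above
  (hypothetical types, not this file's chunk layout): the footer of an
  in-use chunk stores the owning state xor'ed with the random magic, so
  free/realloc can recover and verify the state pointer.
*/
#include <assert.h>
#include <stdint.h>

struct state { uintptr_t magic; };        /* seed chosen once at init       */

static uintptr_t make_foot(struct state* s) { return (uintptr_t)s ^ s->magic; }

static struct state* state_from_foot(uintptr_t f, uintptr_t magic) {
  return (struct state*)(f ^ magic);
}

int main(void) {
  struct state s = { (uintptr_t)0x58585858u };
  uintptr_t foot = make_foot(&s);                   /* written past the chunk */
  assert(state_from_foot(foot, s.magic) == &s);     /* checked on free()      */
  return 0;
}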
       
  2324 
       
  2325 #if !INSECURE
       
  2326 /* Check if address a is at least as high as any from MORECORE or MMAP */
       
  2327 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
       
  2328 /* Check if address of next chunk n is higher than base chunk p */
       
  2329 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
       
  2330 /* Check if p has its cinuse bit on */
       
  2331 #define ok_cinuse(p)     cinuse(p)
       
  2332 /* Check if p has its pinuse bit on */
       
  2333 #define ok_pinuse(p)     pinuse(p)
       
  2334 
       
  2335 #else
       
  2336 #define ok_address(M, a) (1)
       
  2337 #define ok_next(b, n)    (1)
       
  2338 #define ok_cinuse(p)     (1)
       
  2339 #define ok_pinuse(p)     (1)
       
  2340 #endif
       
  2341 
       
  2342 #if (FOOTERS && !INSECURE)
       
  2343 /* Check if (alleged) mstate m has expected magic field */
       
  2344 #define ok_magic(M)      ((M)->magic == mparams.magic)
       
  2345 #else
       
  2346 #define ok_magic(M)      (1)
       
  2347 #endif
       
  2348 
       
  2349 
       
  2350 /* In gcc, use __builtin_expect to minimize impact of checks */
       
  2351 #if !INSECURE
       
  2352 #if defined(__GNUC__) && __GNUC__ >= 3
       
  2353 #define RTCHECK(e)  __builtin_expect(e, 1)
       
  2354 #else
       
  2355 #define RTCHECK(e)  (e)
       
  2356 #endif
       
  2357 #else
       
  2358 #define RTCHECK(e)  (1)
       
  2359 #endif
       
  2360 
       
  2361 /* macros to set up inuse chunks with or without footers */
       
  2362 
       
  2363 #if !FOOTERS
       
  2364 
       
  2365 #define mark_inuse_foot(M,p,s)
       
  2366 
       
  2367 /* Set cinuse bit and pinuse bit of next chunk */
       
  2368 #define set_inuse(M,p,s)\
       
  2369   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
       
  2370   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
       
  2371 
       
  2372 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
       
  2373 #define set_inuse_and_pinuse(M,p,s)\
       
  2374   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
       
  2375   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
       
  2376 
       
  2377 /* Set size, cinuse and pinuse bit of this chunk */
       
  2378 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
       
  2379   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
       
  2380 
       
  2381 #else
       
  2382 
       
  2383 /* Set foot of inuse chunk to be xor of mstate and seed */
       
  2384 #define mark_inuse_foot(M,p,s)\
       
  2385   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
       
  2386 
       
  2387 #define get_mstate_for(p)\
       
  2388   ((mstate)(((mchunkptr)((char*)(p) +\
       
  2389     (chunksize(p))))->prev_foot ^ mparams.magic))
       
  2390 
       
  2391 #define set_inuse(M,p,s)\
       
  2392   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
       
  2393   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
       
  2394   mark_inuse_foot(M,p,s))
       
  2395 
       
  2396 #define set_inuse_and_pinuse(M,p,s)\
       
  2397   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
       
  2398   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
       
  2399  mark_inuse_foot(M,p,s))
       
  2400 
       
  2401 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
       
  2402   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
       
  2403   mark_inuse_foot(M, p, s))
       
  2404 
       
  2405 #endif
       
  2406 
       
  2407 /* ---------------------------- setting mparams -------------------------- */
       
  2408 
       
  2409 /* Initialize mparams */
       
  2410 static void init_mparams() {
       
  2411   if (mparams.page_size == 0) {
       
  2412 
       
  2413 #if (FOOTERS && !INSECURE)
       
  2414     {
       
  2415       size_t s;
       
  2416 #if USE_DEV_RANDOM
       
  2417       int fd;
       
  2418       unsigned char buf[sizeof(size_t)];
       
  2419       /* Try to use /dev/urandom, else fall back on using time */
       
  2420       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
       
  2421           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
       
  2422         s = *((size_t *) buf);
       
  2423         close(fd);
       
  2424       }
       
  2425       else
       
  2426 #endif
       
  2427         s = (size_t)(time(0) ^ (size_t)0x55555555U);
       
  2428 
       
  2429       s |= 8U;    /* ensure nonzero */
       
  2430       s &= ~7U;   /* improve chances of fault for bad values */
       
  2431 
       
  2432       ACQUIRE_MAGIC_INIT_LOCK();
       
  2433       if (mparams.magic == 0)
       
  2434         mparams.magic = s;
       
  2435       RELEASE_MAGIC_INIT_LOCK();
       
  2436     }
       
  2437 
       
  2438 #else
       
  2439     mparams.magic = (size_t)0x58585858U;
       
  2440 #endif
       
  2441 
       
  2442     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
       
  2443     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
       
  2444 #if MORECORE_CONTIGUOUS
       
  2445     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
       
  2446 #else
       
  2447     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
       
  2448 #endif
       
  2449 
       
  2450 #ifndef WIN32
       
  2451     mparams.page_size = malloc_getpagesize;
       
  2452     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
       
  2453                            DEFAULT_GRANULARITY : mparams.page_size);
       
  2454 #else
       
  2455     {
       
  2456       SYSTEM_INFO system_info;
       
  2457       GetSystemInfo(&system_info);
       
  2458       mparams.page_size = system_info.dwPageSize;
       
  2459       mparams.granularity = system_info.dwAllocationGranularity;
       
  2460     }
       
  2461 #endif
       
  2462 
       
  2463     /* Sanity-check configuration:
       
  2464        size_t must be unsigned and as wide as pointer type.
       
  2465        ints must be at least 4 bytes.
       
  2466        alignment must be at least 8.
       
  2467        Alignment, min chunk size, and page size must all be powers of 2.
       
  2468     */
       
  2469     if ((sizeof(size_t) != sizeof(char*)) ||
       
  2470         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
       
  2471         (sizeof(int) < 4)  ||
       
  2472         (MALLOC_ALIGNMENT < 8U) ||
       
  2473         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-1))    != 0) ||
       
  2474         ((MCHUNK_SIZE         & (MCHUNK_SIZE-1))         != 0) ||
       
  2475         ((mparams.granularity & (mparams.granularity-1)) != 0) ||
       
  2476         ((mparams.page_size   & (mparams.page_size-1))   != 0))
       
  2477       ABORT;
       
  2478   }
       
  2479 }
       
  2480 
       
  2481 /* support for mallopt */
       
  2482 static int change_mparam(int param_number, int value) {
       
  2483   size_t val = (size_t)value;
       
  2484   init_mparams();
       
  2485   switch(param_number) {
       
  2486   case M_TRIM_THRESHOLD:
       
  2487     mparams.trim_threshold = val;
       
  2488     return 1;
       
  2489   case M_GRANULARITY:
       
  2490     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
       
  2491       mparams.granularity = val;
       
  2492       return 1;
       
  2493     }
       
  2494     else
       
  2495       return 0;
       
  2496   case M_MMAP_THRESHOLD:
       
  2497     mparams.mmap_threshold = val;
       
  2498     return 1;
       
  2499   default:
       
  2500     return 0;
       
  2501   }
       
  2502 }
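
/*
  What the dispatch above means for callers, as a standalone sketch with
  illustrative names (P_* stand in for the M_* mallopt constants): trim and
  mmap thresholds are accepted as-is, while a new granularity must be a
  power of two no smaller than the page size.
*/
#include <stdio.h>
#include <stddef.h>

enum { P_TRIM, P_GRANULARITY, P_MMAP };

struct params { size_t page_size, granularity, trim, mmap; };

static int set_param(struct params* p, int which, size_t val) {
  switch (which) {
  case P_TRIM:  p->trim = val;  return 1;
  case P_MMAP:  p->mmap = val;  return 1;
  case P_GRANULARITY:                     /* power of two, >= page size     */
    if (val >= p->page_size && (val & (val - 1)) == 0) {
      p->granularity = val;
      return 1;
    }
    return 0;
  default:
    return 0;
  }
}

int main(void) {
  struct params p = { 4096, 65536, 2u << 20, 256u << 10 };
  printf("%d\n", set_param(&p, P_GRANULARITY, 64 * 1024));  /* 1: accepted  */
  printf("%d\n", set_param(&p, P_GRANULARITY, 3000));       /* 0: rejected  */
  return 0;
}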
       
  2503 
       
  2504 #if DEBUG
       
  2505 /* ------------------------- Debugging Support --------------------------- */
       
  2506 
       
  2507 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
       
  2508 static void do_check_any_chunk(mstate m, mchunkptr p) {
       
  2509   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
       
  2510   assert(ok_address(m, p));
       
  2511 }
       
  2512 
       
  2513 /* Check properties of top chunk */
       
  2514 static void do_check_top_chunk(mstate m, mchunkptr p) {
       
  2515   msegmentptr sp = segment_holding(m, (char*)p);
       
  2516   size_t  sz = chunksize(p);
       
  2517   assert(sp != 0);
       
  2518   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
       
  2519   assert(ok_address(m, p));
       
  2520   assert(sz == m->topsize);
       
  2521   assert(sz > 0);
       
  2522   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
       
  2523   assert(pinuse(p));
       
  2524   assert(!next_pinuse(p));
       
  2525 }
       
  2526 
       
  2527 /* Check properties of (inuse) mmapped chunks */
       
  2528 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
       
  2529   size_t  sz = chunksize(p);
       
  2530   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
       
  2531   assert(is_mmapped(p));
       
  2532   assert(use_mmap(m));
       
  2533   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
       
  2534   assert(ok_address(m, p));
       
  2535   assert(!is_small(sz));
       
  2536   assert((len & (mparams.page_size-1)) == 0);
       
  2537   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
       
  2538   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
       
  2539 }
       
  2540 
       
  2541 /* Check properties of inuse chunks */
       
  2542 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
       
  2543   do_check_any_chunk(m, p);
       
  2544   assert(cinuse(p));
       
  2545   assert(next_pinuse(p));
       
  2546   /* If not pinuse and not mmapped, previous chunk has OK offset */
       
  2547   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
       
  2548   if (is_mmapped(p))
       
  2549     do_check_mmapped_chunk(m, p);
       
  2550 }
       
  2551 
       
  2552 /* Check properties of free chunks */
       
  2553 static void do_check_free_chunk(mstate m, mchunkptr p) {
       
  2554   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
       
  2555   mchunkptr next = chunk_plus_offset(p, sz);
       
  2556   do_check_any_chunk(m, p);
       
  2557   assert(!cinuse(p));
       
  2558   assert(!next_pinuse(p));
       
  2559   assert (!is_mmapped(p));
       
  2560   if (p != m->dv && p != m->top) {
       
  2561     if (sz >= MIN_CHUNK_SIZE) {
       
  2562       assert((sz & CHUNK_ALIGN_MASK) == 0);
       
  2563       assert(is_aligned(chunk2mem(p)));
       
  2564       assert(next->prev_foot == sz);
       
  2565       assert(pinuse(p));
       
  2566       assert (next == m->top || cinuse(next));
       
  2567       assert(p->fd->bk == p);
       
  2568       assert(p->bk->fd == p);
       
  2569     }
       
  2570     else  /* markers are always of size SIZE_T_SIZE */
       
  2571       assert(sz == SIZE_T_SIZE);
       
  2572   }
       
  2573 }
       
  2574 
       
  2575 /* Check properties of malloced chunks at the point they are malloced */
       
  2576 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
       
  2577   if (mem != 0) {
       
  2578     mchunkptr p = mem2chunk(mem);
       
  2579     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
       
  2580     do_check_inuse_chunk(m, p);
       
  2581     assert((sz & CHUNK_ALIGN_MASK) == 0);
       
  2582     assert(sz >= MIN_CHUNK_SIZE);
       
  2583     assert(sz >= s);
       
  2584     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
       
  2585     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
       
  2586   }
       
  2587 }
       
  2588 
       
  2589 /* Check a tree and its subtrees.  */
       
  2590 static void do_check_tree(mstate m, tchunkptr t) {
       
  2591   tchunkptr head = 0;
       
  2592   tchunkptr u = t;
       
  2593   bindex_t tindex = t->index;
       
  2594   size_t tsize = chunksize(t);
       
  2595   bindex_t idx;
       
  2596   compute_tree_index(tsize, idx);
       
  2597   assert(tindex == idx);
       
  2598   assert(tsize >= MIN_LARGE_SIZE);
       
  2599   assert(tsize >= minsize_for_tree_index(idx));
       
  2600   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
       
  2601 
       
  2602   do { /* traverse through chain of same-sized nodes */
       
  2603     do_check_any_chunk(m, ((mchunkptr)u));
       
  2604     assert(u->index == tindex);
       
  2605     assert(chunksize(u) == tsize);
       
  2606     assert(!cinuse(u));
       
  2607     assert(!next_pinuse(u));
       
  2608     assert(u->fd->bk == u);
       
  2609     assert(u->bk->fd == u);
       
  2610     if (u->parent == 0) {
       
  2611       assert(u->child[0] == 0);
       
  2612       assert(u->child[1] == 0);
       
  2613     }
       
  2614     else {
       
  2615       assert(head == 0); /* only one node on chain has parent */
       
  2616       head = u;
       
  2617       assert(u->parent != u);
       
  2618       assert (u->parent->child[0] == u ||
       
  2619               u->parent->child[1] == u ||
       
  2620               *((tbinptr*)(u->parent)) == u);
       
  2621       if (u->child[0] != 0) {
       
  2622         assert(u->child[0]->parent == u);
       
  2623         assert(u->child[0] != u);
       
  2624         do_check_tree(m, u->child[0]);
       
  2625       }
       
  2626       if (u->child[1] != 0) {
       
  2627         assert(u->child[1]->parent == u);
       
  2628         assert(u->child[1] != u);
       
  2629         do_check_tree(m, u->child[1]);
       
  2630       }
       
  2631       if (u->child[0] != 0 && u->child[1] != 0) {
       
  2632         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
       
  2633       }
       
  2634     }
       
  2635     u = u->fd;
       
  2636   } while (u != t);
       
  2637   assert(head != 0);
       
  2638 }
       
  2639 
       
  2640 /*  Check all the chunks in a treebin.  */
       
  2641 static void do_check_treebin(mstate m, bindex_t i) {
       
  2642   tbinptr* tb = treebin_at(m, i);
       
  2643   tchunkptr t = *tb;
       
  2644   int empty = (m->treemap & (1 << i)) == 0;
       
  2645   if (t == 0)
       
  2646     assert(empty);
       
  2647   if (!empty)
       
  2648     do_check_tree(m, t);
       
  2649 }
       
  2650 
       
  2651 /*  Check all the chunks in a smallbin.  */
       
  2652 static void do_check_smallbin(mstate m, bindex_t i) {
       
  2653   sbinptr b = smallbin_at(m, i);
       
  2654   mchunkptr p = b->bk;
       
  2655   unsigned int empty = (m->smallmap & (1 << i)) == 0;
       
  2656   if (p == b)
       
  2657     assert(empty);
       
  2658   if (!empty) {
       
  2659     for (; p != b; p = p->bk) {
       
  2660       size_t size = chunksize(p);
       
  2661       mchunkptr q;
       
  2662       /* each chunk claims to be free */
       
  2663       do_check_free_chunk(m, p);
       
  2664       /* chunk belongs in bin */
       
  2665       assert(small_index(size) == i);
       
  2666       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
       
  2667       /* chunk is followed by an inuse chunk */
       
  2668       q = next_chunk(p);
       
  2669       if (q->head != FENCEPOST_HEAD)
       
  2670         do_check_inuse_chunk(m, q);
       
  2671     }
       
  2672   }
       
  2673 }
       
  2674 
       
  2675 /* Find x in a bin. Used in other check functions. */
       
  2676 static int bin_find(mstate m, mchunkptr x) {
       
  2677   size_t size = chunksize(x);
       
  2678   if (is_small(size)) {
       
  2679     bindex_t sidx = small_index(size);
       
  2680     sbinptr b = smallbin_at(m, sidx);
       
  2681     if (smallmap_is_marked(m, sidx)) {
       
  2682       mchunkptr p = b;
       
  2683       do {
       
  2684         if (p == x)
       
  2685           return 1;
       
  2686       } while ((p = p->fd) != b);
       
  2687     }
       
  2688   }
       
  2689   else {
       
  2690     bindex_t tidx;
       
  2691     compute_tree_index(size, tidx);
       
  2692     if (treemap_is_marked(m, tidx)) {
       
  2693       tchunkptr t = *treebin_at(m, tidx);
       
  2694       size_t sizebits = size << leftshift_for_tree_index(tidx);
       
  2695       while (t != 0 && chunksize(t) != size) {
       
  2696         t = t->child[(sizebits >> (SIZE_T_BITSIZE-1)) & 1];
       
  2697         sizebits <<= 1;
       
  2698       }
       
  2699       if (t != 0) {
       
  2700         tchunkptr u = t;
       
  2701         do {
       
  2702           if (u == (tchunkptr)x)
       
  2703             return 1;
       
  2704         } while ((u = u->fd) != t);
       
  2705       }
       
  2706     }
       
  2707   }
       
  2708   return 0;
       
  2709 }
       
  2710 
       
  2711 /* Traverse each chunk and check it; return total */
       
  2712 static size_t traverse_and_check(mstate m) {
       
  2713   size_t sum = 0;
       
  2714   if (is_initialized(m)) {
       
  2715     msegmentptr s = &m->seg;
       
  2716     sum += m->topsize + TOP_FOOT_SIZE;
       
  2717     while (s != 0) {
       
  2718       mchunkptr q = align_as_chunk(s->base);
       
  2719       mchunkptr lastq = 0;
       
  2720       assert(pinuse(q));
       
  2721       while (segment_holds(s, q) &&
       
  2722              q != m->top && q->head != FENCEPOST_HEAD) {
       
  2723         sum += chunksize(q);
       
  2724         if (cinuse(q)) {
       
  2725           assert(!bin_find(m, q));
       
  2726           do_check_inuse_chunk(m, q);
       
  2727         }
       
  2728         else {
       
  2729           assert(q == m->dv || bin_find(m, q));
       
  2730           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
       
  2731           do_check_free_chunk(m, q);
       
  2732         }
       
  2733         lastq = q;
       
  2734         q = next_chunk(q);
       
  2735       }
       
  2736       s = s->next;
       
  2737     }
       
  2738   }
       
  2739   return sum;
       
  2740 }
       
  2741 
       
  2742 /* Check all properties of malloc_state. */
       
  2743 static void do_check_malloc_state(mstate m) {
       
  2744   bindex_t i;
       
  2745   size_t total;
       
  2746   /* check bins */
       
  2747   for (i = 0; i < NSMALLBINS; ++i)
       
  2748     do_check_smallbin(m, i);
       
  2749   for (i = 0; i < NTREEBINS; ++i)
       
  2750     do_check_treebin(m, i);
       
  2751 
       
  2752   if (m->dvsize != 0) { /* check dv chunk */
       
  2753     do_check_any_chunk(m, m->dv);
       
  2754     assert(m->dvsize == chunksize(m->dv));
       
  2755     assert(m->dvsize >= MIN_CHUNK_SIZE);
       
  2756     assert(bin_find(m, m->dv) == 0);
       
  2757   }
       
  2758 
       
  2759   if (m->top != 0) {   /* check top chunk */
       
  2760     do_check_top_chunk(m, m->top);
       
  2761     assert(m->topsize == chunksize(m->top));
       
  2762     assert(m->topsize > 0);
       
  2763     assert(bin_find(m, m->top) == 0);
       
  2764   }
       
  2765 
       
  2766   total = traverse_and_check(m);
       
  2767   assert(total <= m->footprint);
       
  2768   assert(m->footprint <= m->max_footprint);
       
  2769 }
       
  2770 #endif
       
  2771 
       
  2772 /* ----------------------------- statistics ------------------------------ */
       
  2773 
       
  2774 #if !NO_MALLINFO
       
  2775 static struct mallinfo internal_mallinfo(mstate m) {
       
  2776   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
       
  2777   if (!PREACTION(m)) {
       
  2778     check_malloc_state(m);
       
  2779     if (is_initialized(m)) {
       
  2780       size_t nfree = 1; /* top always free */
       
  2781       size_t free = m->topsize + TOP_FOOT_SIZE;
       
  2782       size_t sum = free;
       
  2783       msegmentptr s = &m->seg;
       
  2784       while (s != 0) {
       
  2785         mchunkptr q = align_as_chunk(s->base);
       
  2786         while (segment_holds(s, q) &&
       
  2787                q != m->top && q->head != FENCEPOST_HEAD) {
       
  2788           size_t sz = chunksize(q);
       
  2789           sum += sz;
       
  2790           if (!cinuse(q)) {
       
  2791             free += sz;
       
  2792             ++nfree;
       
  2793           }
       
  2794           q = next_chunk(q);
       
  2795         }
       
  2796         s = s->next;
       
  2797       }
       
  2798 
       
  2799       nm.arena    = sum;
       
  2800       nm.ordblks  = nfree;
       
  2801       nm.fordblks = free;
       
  2802       nm.hblkhd   = m->max_footprint - sum;
       
  2803       nm.usmblks  = m->max_footprint;
       
  2804       nm.uordblks = m->footprint - free;
       
  2805       nm.keepcost = m->topsize;
       
  2806     }
       
  2807 
       
  2808     POSTACTION(m);
       
  2809   }
       
  2810   return nm;
       
  2811 }
       
  2812 #endif
       
  2813 
       
  2814 static void internal_malloc_stats(mstate m) {
       
  2815   if (!PREACTION(m)) {
       
  2816     size_t maxfp = 0;
       
  2817     size_t fp = 0;
       
  2818     size_t used = 0;
       
  2819     check_malloc_state(m);
       
  2820     if (is_initialized(m)) {
       
  2821       msegmentptr s = &m->seg;
       
  2822       maxfp = m->max_footprint;
       
  2823       fp = m->footprint;
       
  2824       used = fp - (m->topsize + TOP_FOOT_SIZE);
       
  2825 
       
  2826       while (s != 0) {
       
  2827         mchunkptr q = align_as_chunk(s->base);
       
  2828         while (segment_holds(s, q) &&
       
  2829                q != m->top && q->head != FENCEPOST_HEAD) {
       
  2830           if (!cinuse(q))
       
  2831             used -= chunksize(q);
       
  2832           q = next_chunk(q);
       
  2833         }
       
  2834         s = s->next;
       
  2835       }
       
  2836     }
       
  2837 
       
  2838     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
       
  2839     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
       
  2840     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
       
  2841 
       
  2842     POSTACTION(m);
       
  2843   }
       
  2844 }
       
  2845 
       
  2846 /* ----------------------- Operations on smallbins ----------------------- */
       
  2847 
       
  2848 /*
       
  2849   Various forms of linking and unlinking are defined as macros, even

  2850   the ones for trees, which are very long but have very short typical

  2851   paths.  This is ugly but reduces reliance on the inlining support of

  2852   compilers.
       
  2853 */
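
/*
  The smallbin macros below maintain circular doubly-linked lists whose head
  is the bin itself.  The same discipline as a standalone sketch (a toy node
  type, not this file's chunk layout):
*/
#include <assert.h>

struct node { struct node *fd, *bk; };

static void bin_init(struct node* b) { b->fd = b->bk = b; }

static void bin_push(struct node* b, struct node* p) {
  p->fd = b->fd;                          /* link p just after the head     */
  p->bk = b;
  b->fd->bk = p;
  b->fd = p;
}

static void bin_unlink(struct node* p) {
  p->fd->bk = p->bk;                      /* splice p out of the ring       */
  p->bk->fd = p->fd;
}

int main(void) {
  struct node bin, a, b;
  bin_init(&bin);
  bin_push(&bin, &a);
  bin_push(&bin, &b);
  bin_unlink(&a);
  assert(bin.fd == &b && b.fd == &bin && bin.bk == &b);
  return 0;
}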
       
  2854 
       
  2855 /* Link a free chunk into a smallbin  */
       
  2856 #define insert_small_chunk(M, P, S) {\
       
  2857   bindex_t I  = small_index(S);\
       
  2858   mchunkptr B = smallbin_at(M, I);\
       
  2859   mchunkptr F = B;\
       
  2860   assert(S >= MIN_CHUNK_SIZE);\
       
  2861   if (!smallmap_is_marked(M, I))\
       
  2862     mark_smallmap(M, I);\
       
  2863   else if (RTCHECK(ok_address(M, B->fd)))\
       
  2864     F = B->fd;\
       
  2865   else {\
       
  2866     CORRUPTION_ERROR_ACTION(M);\
       
  2867   }\
       
  2868   B->fd = P;\
       
  2869   F->bk = P;\
       
  2870   P->fd = F;\
       
  2871   P->bk = B;\
       
  2872 }
       
  2873 
       
  2874 /* Unlink a chunk from a smallbin  */
       
  2875 #define unlink_small_chunk(M, P, S) {\
       
  2876   mchunkptr F = P->fd;\
       
  2877   mchunkptr B = P->bk;\
       
  2878   bindex_t I = small_index(S);\
       
  2879   assert(P != B);\
       
  2880   assert(P != F);\
       
  2881   assert(chunksize(P) == small_index2size(I));\
       
  2882   if (F == B)\
       
  2883     clear_smallmap(M, I);\
       
  2884   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
       
  2885                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
       
  2886     F->bk = B;\
       
  2887     B->fd = F;\
       
  2888   }\
       
  2889   else {\
       
  2890     CORRUPTION_ERROR_ACTION(M);\
       
  2891   }\
       
  2892 }
       
  2893 
       
  2894 /* Unlink the first chunk from a smallbin */
       
  2895 #define unlink_first_small_chunk(M, B, P, I) {\
       
  2896   mchunkptr F = P->fd;\
       
  2897   assert(P != B);\
       
  2898   assert(P != F);\
       
  2899   assert(chunksize(P) == small_index2size(I));\
       
  2900   if (B == F)\
       
  2901     clear_smallmap(M, I);\
       
  2902   else if (RTCHECK(ok_address(M, F))) {\
       
  2903     B->fd = F;\
       
  2904     F->bk = B;\
       
  2905   }\
       
  2906   else {\
       
  2907     CORRUPTION_ERROR_ACTION(M);\
       
  2908   }\
       
  2909 }
       
  2910 
       
  2911 /* Replace dv node, binning the old one */
       
  2912 /* Used only when dvsize known to be small */
       
  2913 #define replace_dv(M, P, S) {\
       
  2914   size_t DVS = M->dvsize;\
       
  2915   if (DVS != 0) {\
       
  2916     mchunkptr DV = M->dv;\
       
  2917     assert(is_small(DVS));\
       
  2918     insert_small_chunk(M, DV, DVS);\
       
  2919   }\
       
  2920   M->dvsize = S;\
       
  2921   M->dv = P;\
       
  2922 }
       
  2923 
       
  2924 /* ------------------------- Operations on trees ------------------------- */
       
  2925 
       
  2926 /* Insert chunk into tree */
       
  2927 #define insert_large_chunk(M, X, S) {\
       
  2928   tbinptr* H;\
       
  2929   bindex_t I;\
       
  2930   compute_tree_index(S, I);\
       
  2931   H = treebin_at(M, I);\
       
  2932   X->index = I;\
       
  2933   X->child[0] = X->child[1] = 0;\
       
  2934   if (!treemap_is_marked(M, I)) {\
       
  2935     mark_treemap(M, I);\
       
  2936     *H = X;\
       
  2937     X->parent = (tchunkptr)H;\
       
  2938     X->fd = X->bk = X;\
       
  2939   }\
       
  2940   else {\
       
  2941     tchunkptr T = *H;\
       
  2942     size_t K = S << leftshift_for_tree_index(I);\
       
  2943     for (;;) {\
       
  2944       if (chunksize(T) != S) {\
       
  2945         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-1)) & 1]);\
       
  2946         K <<= 1;\
       
  2947         if (*C != 0)\
       
  2948           T = *C;\
       
  2949         else if (RTCHECK(ok_address(M, C))) {\
       
  2950           *C = X;\
       
  2951           X->parent = T;\
       
  2952           X->fd = X->bk = X;\
       
  2953           break;\
       
  2954         }\
       
  2955         else {\
       
  2956           CORRUPTION_ERROR_ACTION(M);\
       
  2957           break;\
       
  2958         }\
       
  2959       }\
       
  2960       else {\
       
  2961         tchunkptr F = T->fd;\
       
  2962         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
       
  2963           T->fd = F->bk = X;\
       
  2964           X->fd = F;\
       
  2965           X->bk = T;\
       
  2966           X->parent = 0;\
       
  2967           break;\
       
  2968         }\
       
  2969         else {\
       
  2970           CORRUPTION_ERROR_ACTION(M);\
       
  2971           break;\
       
  2972         }\
       
  2973       }\
       
  2974     }\
       
  2975   }\
       
  2976 }
       
  2977 
       
  2978 /*
       
  2979   Unlink steps:
       
  2980 
       
  2981   1. If x is a chained node, unlink it from its same-sized fd/bk links
       
  2982      and choose its bk node as its replacement.
       
  2983   2. If x was the last node of its size, but not a leaf node, it must
       
  2984      be replaced with a leaf node (not merely one with an open left or
       
  2985      right), to make sure that lefts and rights of descendants

  2986      correspond properly to bit masks.  We use the rightmost descendant
       
  2987      of x.  We could use any other leaf, but this is easy to locate and
       
  2988      tends to counteract removal of leftmosts elsewhere, and so keeps
       
  2989      paths shorter than minimally guaranteed.  This doesn't loop much
       
  2990      because on average a node in a tree is near the bottom.
       
  2991   3. If x is the base of a chain (i.e., has parent links) relink
       
  2992      x's parent and children to x's replacement (or null if none).
       
  2993 */
       
  2994 
       
  2995 #define unlink_large_chunk(M, X) {\
       
  2996   tchunkptr XP = X->parent;\
       
  2997   tchunkptr R;\
       
  2998   if (X->bk != X) {\
       
  2999     tchunkptr F = X->fd;\
       
  3000     R = X->bk;\
       
  3001     if (RTCHECK(ok_address(M, F))) {\
       
  3002       F->bk = R;\
       
  3003       R->fd = F;\
       
  3004     }\
       
  3005     else {\
       
  3006       CORRUPTION_ERROR_ACTION(M);\
       
  3007     }\
       
  3008   }\
       
  3009   else {\
       
  3010     tchunkptr* RP;\
       
  3011     if (((R = *(RP = &(X->child[1]))) != 0) ||\
       
  3012         ((R = *(RP = &(X->child[0]))) != 0)) {\
       
  3013       tchunkptr* CP;\
       
  3014       while ((*(CP = &(R->child[1])) != 0) ||\
       
  3015              (*(CP = &(R->child[0])) != 0)) {\
       
  3016         R = *(RP = CP);\
       
  3017       }\
       
  3018       if (RTCHECK(ok_address(M, RP)))\
       
  3019         *RP = 0;\
       
  3020       else {\
       
  3021         CORRUPTION_ERROR_ACTION(M);\
       
  3022       }\
       
  3023     }\
       
  3024   }\
       
  3025   if (XP != 0) {\
       
  3026     tbinptr* H = treebin_at(M, X->index);\
       
  3027     if (X == *H) {\
       
  3028       if ((*H = R) == 0) \
       
  3029         clear_treemap(M, X->index);\
       
  3030     }\
       
  3031     else if (RTCHECK(ok_address(M, XP))) {\
       
  3032       if (XP->child[0] == X) \
       
  3033         XP->child[0] = R;\
       
  3034       else \
       
  3035         XP->child[1] = R;\
       
  3036     }\
       
  3037     else\
       
  3038       CORRUPTION_ERROR_ACTION(M);\
       
  3039     if (R != 0) {\
       
  3040       if (RTCHECK(ok_address(M, R))) {\
       
  3041         tchunkptr C0, C1;\
       
  3042         R->parent = XP;\
       
  3043         if ((C0 = X->child[0]) != 0) {\
       
  3044           if (RTCHECK(ok_address(M, C0))) {\
       
  3045             R->child[0] = C0;\
       
  3046             C0->parent = R;\
       
  3047           }\
       
  3048           else\
       
  3049             CORRUPTION_ERROR_ACTION(M);\
       
  3050         }\
       
  3051         if ((C1 = X->child[1]) != 0) {\
       
  3052           if (RTCHECK(ok_address(M, C1))) {\
       
  3053             R->child[1] = C1;\
       
  3054             C1->parent = R;\
       
  3055           }\
       
  3056           else\
       
  3057             CORRUPTION_ERROR_ACTION(M);\
       
  3058         }\
       
  3059       }\
       
  3060       else\
       
  3061         CORRUPTION_ERROR_ACTION(M);\
       
  3062     }\
       
  3063   }\
       
  3064 }
       
  3065 
       
  3066 /* Relays to large vs small bin operations */
       
  3067 
       
  3068 #define insert_chunk(M, P, S)\
       
  3069   if (is_small(S)) insert_small_chunk(M, P, S)\
       
  3070   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
       
  3071 
       
  3072 #define unlink_chunk(M, P, S)\
       
  3073   if (is_small(S)) unlink_small_chunk(M, P, S)\
       
  3074   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
       
  3075 
       
  3076 
       
  3077 /* Relays to internal calls to malloc/free from realloc, memalign etc */
       
  3078 
       
  3079 #if ONLY_MSPACES
       
  3080 #define internal_malloc(m, b) mspace_malloc(m, b)
       
  3081 #define internal_free(m, mem) mspace_free(m,mem);
       
  3082 #else
       
  3083 #if MSPACES
       
  3084 #define internal_malloc(m, b)\
       
  3085    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
       
  3086 #define internal_free(m, mem)\
       
  3087    if (m == gm) dlfree(mem); else mspace_free(m,mem);
       
  3088 #else
       
  3089 #define internal_malloc(m, b) dlmalloc(b)
       
  3090 #define internal_free(m, mem) dlfree(mem)
       
  3091 #endif
       
  3092 #endif
       
  3093 
       
  3094 /* -----------------------  Direct-mmapping chunks ----------------------- */
       
  3095 
       
  3096 /*
       
  3097   Directly mmapped chunks are set up with an offset to the start of
       
  3098   the mmapped region stored in the prev_foot field of the chunk. This
       
  3099   allows reconstruction of the required argument to MUNMAP when freed,
       
  3100   and also allows adjustment of the returned chunk to meet alignment
       
  3101   requirements (especially in memalign).  There is also enough space
       
  3102   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
       
  3103   the PINUSE bit so frees can be checked.
       
  3104 */
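
/*
  A worked standalone sketch of the bookkeeping described above, using toy
  numbers and an assumed MMAP_FOOT_PAD of four size_t words: storing the
  chunk's offset from the mapped base is enough to rebuild the MUNMAP
  arguments at free time.
*/
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

int main(void) {
  uintptr_t region   = 0x40000000u;            /* toy address from mmap     */
  size_t    reg_len  = 3 * 4096;               /* toy mapped length         */
  uintptr_t chunk    = region + 8;             /* chunk placed after slop   */
  size_t    offset   = (size_t)(chunk - region);
  size_t    foot_pad = 4 * sizeof(size_t);     /* assumed MMAP_FOOT_PAD     */
  size_t    csize    = reg_len - offset - foot_pad;

  /* prev_foot records offset (plus IS_MMAPPED_BIT); free() rebuilds:       */
  uintptr_t unmap_base = chunk - offset;
  size_t    unmap_len  = csize + offset + foot_pad;
  assert(unmap_base == region && unmap_len == reg_len);
  return 0;
}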
       
  3105 
       
  3106 /* Malloc using mmap */
       
  3107 static void* mmap_alloc(mstate m, size_t nb) {
       
  3108   size_t mmsize = granularity_align(nb + 6*SIZE_T_SIZE + CHUNK_ALIGN_MASK);
       
  3109   if (mmsize > nb) {     /* Check for wrap around 0 */
       
  3110     char* mm = (char*)(DIRECT_MMAP(mmsize));
       
  3111     if (mm != CMFAIL) {
       
  3112       size_t offset = align_offset(chunk2mem(mm));
       
  3113       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
       
  3114       mchunkptr p = (mchunkptr)(mm + offset);
       
  3115       p->prev_foot = offset | IS_MMAPPED_BIT;
       
  3116       (p)->head = (psize|CINUSE_BIT);
       
  3117       mark_inuse_foot(m, p, psize);
       
  3118       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
       
  3119       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
       
  3120 
       
  3121       if (mm < m->least_addr)
       
  3122         m->least_addr = mm;
       
  3123       if ((m->footprint += mmsize) > m->max_footprint)
       
  3124         m->max_footprint = m->footprint;
       
  3125       assert(is_aligned(chunk2mem(p)));
       
  3126       check_mmapped_chunk(m, p);
       
  3127       return chunk2mem(p);
       
  3128     }
       
  3129   }
       
  3130   return 0;
       
  3131 }
       
  3132 
       
  3133 /* Realloc using mmap */
       
  3134 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
       
  3135   size_t oldsize = chunksize(oldp);
       
  3136   if (is_small(nb)) /* Can't shrink mmap regions below small size */
       
  3137     return 0;
       
  3138   /* Keep old chunk if big enough but not too big */
       
  3139   if (oldsize >= nb + SIZE_T_SIZE &&
       
  3140       (oldsize - nb) <= 2U * mparams.granularity)
       
  3141     return oldp;
       
  3142   else {
       
  3143     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
       
  3144     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
       
  3145     size_t newmmsize = granularity_align(nb + 6 * SIZE_T_SIZE +
       
  3146                                          CHUNK_ALIGN_MASK);
       
  3147     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
       
  3148                                   oldmmsize, newmmsize, 1);
       
  3149     if (cp != CMFAIL) {
       
  3150       mchunkptr newp = (mchunkptr)(cp + offset);
       
  3151       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
       
  3152       newp->head = (psize|CINUSE_BIT);
       
  3153       mark_inuse_foot(m, newp, psize);
       
  3154       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
       
  3155       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
       
  3156 
       
  3157       if (cp < m->least_addr)
       
  3158         m->least_addr = cp;
       
  3159       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
       
  3160         m->max_footprint = m->footprint;
       
  3161       check_mmapped_chunk(m, newp);
       
  3162       return newp;
       
  3163     }
       
  3164   }
       
  3165   return 0;
       
  3166 }
       
  3167 
       
  3168 /* -------------------------- mspace management -------------------------- */
       
  3169 
       
  3170 /* Initialize top chunk and its size */
       
  3171 static void init_top(mstate m, mchunkptr p, size_t psize) {
       
  3172   /* Ensure alignment */
       
  3173   size_t offset = align_offset(chunk2mem(p));
       
  3174   p = (mchunkptr)((char*)p + offset);
       
  3175   psize -= offset;
       
  3176 
       
  3177   m->top = p;
       
  3178   m->topsize = psize;
       
  3179   p->head = psize | PINUSE_BIT;
       
  3180   /* set size of fake trailing chunk holding overhead space only once */
       
  3181   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
       
  3182   m->trim_check = mparams.trim_threshold; /* reset on each update */
       
  3183 }
       
  3184 
       
  3185 /* Initialize bins for a new mstate that is otherwise zeroed out */
       
  3186 static void init_bins(mstate m) {
       
  3187   /* Establish circular links for smallbins */
       
  3188   bindex_t i;
       
  3189   for (i = 0; i < NSMALLBINS; ++i) {
       
  3190     sbinptr bin = smallbin_at(m,i);
       
  3191     bin->fd = bin->bk = bin;
       
  3192   }
       
  3193 }
       
  3194 
       
  3195 #if PROCEED_ON_ERROR
       
  3196 
       
  3197 /* default corruption action */
       
  3198 static void reset_on_error(mstate m) {
       
  3199   int i;
       
  3200   ++malloc_corruption_error_count;
       
  3201   /* Reinitialize fields to forget about all memory */
       
  3202   m->smallbins = m->treebins = 0;
       
  3203   m->dvsize = m->topsize = 0;
       
  3204   m->seg.base = 0;
       
  3205   m->seg.size = 0;
       
  3206   m->seg.next = 0;
       
  3207   m->top = m->dv = 0;
       
  3208   for (i = 0; i < NTREEBINS; ++i)
       
  3209     *treebin_at(m, i) = 0;
       
  3210   init_bins(m);
       
  3211 }
       
  3212 #endif
       
  3213 
       
  3214 /* Allocate chunk and prepend remainder with chunk in successor base. */
       
  3215 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
       
  3216                            size_t nb) {
       
  3217   mchunkptr p = align_as_chunk(newbase);
       
  3218   mchunkptr oldfirst = align_as_chunk(oldbase);
       
  3219   size_t psize = (char*)oldfirst - (char*)p;
       
  3220   mchunkptr q = chunk_plus_offset(p, nb);
       
  3221   size_t qsize = psize - nb;
       
  3222   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
       
  3223 
       
  3224   assert((char*)oldfirst > (char*)q);
       
  3225   assert(pinuse(oldfirst));
       
  3226   assert(qsize >= MIN_CHUNK_SIZE);
       
  3227 
       
  3228   /* consolidate remainder with first chunk of old base */
       
  3229   if (oldfirst == m->top) {
       
  3230     size_t tsize = m->topsize += qsize;
       
  3231     m->top = q;
       
  3232     q->head = tsize | PINUSE_BIT;
       
  3233     check_top_chunk(m, q);
       
  3234   }
       
  3235   else if (oldfirst == m->dv) {
       
  3236     size_t dsize = m->dvsize += qsize;
       
  3237     m->dv = q;
       
  3238     set_size_and_pinuse_of_free_chunk(q, dsize);
       
  3239   }
       
  3240   else {
       
  3241     if (!cinuse(oldfirst)) {
       
  3242       size_t nsize = chunksize(oldfirst);
       
  3243       unlink_chunk(m, oldfirst, nsize);
       
  3244       oldfirst = chunk_plus_offset(oldfirst, nsize);
       
  3245       qsize += nsize;
       
  3246     }
       
  3247     set_free_with_pinuse(q, qsize, oldfirst);
       
  3248     insert_chunk(m, q, qsize);
       
  3249     check_free_chunk(m, q);
       
  3250   }
       
  3251 
       
  3252   check_malloced_chunk(m, chunk2mem(p), nb);
       
  3253   return chunk2mem(p);
       
  3254 }
       
  3255 
       
  3256 
       
  3257 /* Add a segment to hold a new noncontiguous region */
       
  3258 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
       
  3259   /* Determine locations and sizes of segment, fenceposts, old top */
       
  3260   char* old_top = (char*)m->top;
       
  3261   msegmentptr oldsp = segment_holding(m, old_top);
       
  3262   char* old_end = oldsp->base + oldsp->size;
       
  3263   size_t ssize = pad_request(sizeof(struct malloc_segment));
       
  3264   char* rawsp = old_end - (ssize + 4*SIZE_T_SIZE + CHUNK_ALIGN_MASK);
       
  3265   size_t offset = align_offset(chunk2mem(rawsp));
       
  3266   char* asp = rawsp + offset;
       
  3267   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
       
  3268   mchunkptr sp = (mchunkptr)csp;
       
  3269   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
       
  3270   mchunkptr tnext = chunk_plus_offset(sp, ssize);
       
  3271   mchunkptr p = tnext;
       
  3272   int nfences = 0;
       
  3273 
       
  3274   /* reset top to new space */
       
  3275   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
       
  3276 
       
  3277   /* Set up segment record */
       
  3278   assert(is_aligned(ss));
       
  3279   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
       
  3280   ss->base = tbase;
       
  3281   ss->size = tsize;
       
  3282   ss->sflags = mmapped;
       
  3283   ss->next = m->seg.next;
       
  3284   m->seg.next = ss;
       
  3285 
       
  3286   /* Insert trailing fenceposts */
       
  3287   for (;;) {
       
  3288     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
       
  3289     p->head = FENCEPOST_HEAD;
       
  3290     ++nfences;
       
  3291     if ((char*)(&(nextp->head)) < old_end)
       
  3292       p = nextp;
       
  3293     else
       
  3294       break;
       
  3295   }
       
  3296   assert(nfences >= 2);
       
  3297 
       
  3298   /* Insert the rest of old top into a bin as an ordinary free chunk */
       
  3299   if (csp != old_top) {
       
  3300     mchunkptr p = (mchunkptr)old_top;
       
  3301     size_t psize = csp - old_top;
       
  3302     mchunkptr tn = chunk_plus_offset(p, psize);
       
  3303     set_free_with_pinuse(p, psize, tn);
       
  3304     insert_chunk(m, p, psize);
       
  3305   }
       
  3306 
       
  3307   check_top_chunk(m, m->top);
       
  3308 }
       
  3309 
       
  3310 /* -------------------------- System allocation -------------------------- */
       
  3311 
       
  3312 /* Get memory from system using MORECORE or MMAP */
       
  3313 static void* sys_alloc(mstate m, size_t nb) {
       
  3314   char* tbase = CMFAIL;
       
  3315   size_t tsize = 0;
       
  3316   flag_t mmap_flag = 0;
       
  3317 
       
  3318   init_mparams();
       
  3319 
       
  3320   /* Directly map large chunks */
       
  3321   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
       
  3322     void* mem = mmap_alloc(m, nb);
       
  3323     if (mem != 0)
       
  3324       return mem;
       
  3325   }
       
  3326 
       
  3327   /*
       
  3328     Try getting memory in any of three ways (in most-preferred to
       
  3329     least-preferred order):
       
  3330     1. A call to MORECORE that can normally contiguously extend memory.
       
  3331        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       
  3332        main space is mmapped or a previous contiguous call failed)
       
  3333     2. A call to MMAP new space (disabled if not HAVE_MMAP).
       
  3334        Note that under the default settings, if MORECORE ever returns
       
  3335        failure for a request, and HAVE_MMAP is true, then mmap is
       
  3336        used as a noncontiguous system allocator. This is a useful backup
       
  3337        strategy for systems with holes in address spaces -- in this case
       
  3338        sbrk cannot contiguously expand the heap, but mmap may be able to
       
  3339        find space.
       
  3340     3. A call to MORECORE that cannot usually contiguously extend memory.
       
  3341        (disabled if not HAVE_MORECORE)
       
  3342   */
       
  3343 
       
  3344   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
       
  3345     char* brk = CMFAIL;
       
  3346     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
       
  3347     ACQUIRE_MORECORE_LOCK();
       
  3348 
       
  3349     if (ss == 0) {  /* First time through or recovery */
       
  3350       char* base = (char*)CALL_MORECORE(0);
       
  3351       if (base != CMFAIL) {
       
  3352         size_t asize = granularity_align(nb + TOP_FOOT_SIZE + 1);
       
  3353         /* Adjust to end on a page boundary */
       
  3354         if (!is_page_aligned(base))
       
  3355           asize += (page_align((size_t)base) - (size_t)base);
       
  3356         /* Can't call MORECORE if size is negative when treated as signed */
       
  3357         if (asize < MAX_SIZE_T / 2 &&
       
  3358             (brk = (char*)(CALL_MORECORE(asize))) == base) {
       
  3359           tbase = base;
       
  3360           tsize = (size_t)asize;
       
  3361         }
       
  3362       }
       
  3363     }
       
  3364     else {
       
  3365       /* Subtract out existing available top space from MORECORE request. */
       
  3366       size_t asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + 1);
       
   3367       /* Use mem here only if it did contiguously extend old space */
       
  3368       if (asize < MAX_SIZE_T / 2 &&
       
  3369           (brk = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
       
  3370         tbase = brk;
       
  3371         tsize = (size_t)asize;
       
  3372       }
       
  3373     }
       
  3374 
       
  3375     if (tbase == CMFAIL) {
       
  3376         disable_contiguous(m); /* Don't try contiguous path in the future */
       
  3377         if (brk != CMFAIL) {   /* Try to use the space we did get */
       
  3378           char* end = (char*)CALL_MORECORE(0);
       
  3379           size_t esize = end - brk;
       
  3380           if (end != CMFAIL && end > brk && esize > nb + TOP_FOOT_SIZE) {
       
  3381             tbase = brk;
       
  3382             tsize = esize;
       
  3383           }
       
  3384         }
       
  3385     }
       
  3386 
       
  3387     RELEASE_MORECORE_LOCK();
       
  3388   }
       
  3389 
       
  3390   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
       
  3391     size_t req = nb + TOP_FOOT_SIZE + 1;
       
  3392     size_t rsize = granularity_align(req);
       
  3393     if (rsize > nb) { /* Fail if wraps around zero */
       
  3394       char* mp = (char*)(CALL_MMAP(rsize));
       
  3395       if (mp != CMFAIL) {
       
  3396         tbase = mp;
       
  3397         tsize = rsize;
       
  3398         mmap_flag = IS_MMAPPED_BIT;
       
  3399       }
       
  3400     }
       
  3401   }
       
  3402 
       
  3403   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
       
  3404     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + 1);
       
  3405     if (asize < MAX_SIZE_T / 2) {
       
  3406       char* brk = CMFAIL;
       
  3407       char* end = CMFAIL;
       
  3408       ACQUIRE_MORECORE_LOCK();
       
  3409       brk = (char*)(CALL_MORECORE(asize));
       
  3410       end = (char*)(CALL_MORECORE(0));
       
  3411       RELEASE_MORECORE_LOCK();
       
  3412       if (brk != CMFAIL && end != CMFAIL && brk < end) {
       
  3413         size_t ssize = end - brk;
       
  3414         if (ssize > nb + TOP_FOOT_SIZE) {
       
  3415           tbase = brk;
       
  3416           tsize = ssize;
       
  3417         }
       
  3418       }
       
  3419     }
       
  3420   }
       
  3421 
       
  3422   if (tbase != CMFAIL) {
       
  3423 
       
  3424     if ((m->footprint += tsize) > m->max_footprint)
       
  3425       m->max_footprint = m->footprint;
       
  3426 
       
  3427     if (!is_initialized(m)) { /* first-time initialization */
       
  3428       m->seg.base = m->least_addr = tbase;
       
  3429       m->seg.size = tsize;
       
  3430       m->seg.sflags = mmap_flag;
       
  3431       m->magic = mparams.magic;
       
  3432       m->mflags = mparams.default_mflags;
       
  3433       init_bins(m);
       
  3434       if (is_global(m))
       
  3435         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
       
  3436       else {
       
  3437         /* Offset top by embedded malloc_state */
       
  3438         mchunkptr mn = next_chunk(mem2chunk(m));
       
   3439         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
       
  3440       }
       
  3441     }
       
  3442 
       
  3443     else {
       
  3444       /* Try to merge with an existing segment */
       
  3445       msegmentptr sp = &m->seg;
       
  3446       while (sp != 0 && tbase != sp->base + sp->size)
       
  3447         sp = sp->next;
       
  3448       if (sp != 0 && (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
       
  3449           segment_holds(sp, m->top)) { /* append */
       
  3450         sp->size += tsize;
       
  3451         init_top(m, m->top, m->topsize + tsize);
       
  3452       }
       
  3453       else {
       
  3454         if (tbase < m->least_addr)
       
  3455           m->least_addr = tbase;
       
  3456         sp = &m->seg;
       
  3457         while (sp != 0 && sp->base != tbase + tsize)
       
  3458           sp = sp->next;
       
  3459         if (sp != 0 && (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
       
  3460           char* oldbase = sp->base;
       
  3461           sp->base = tbase;
       
  3462           sp->size += tsize;
       
  3463           return prepend_alloc(m, tbase, oldbase, nb);
       
  3464         }
       
  3465         else
       
  3466           add_segment(m, tbase, tsize, mmap_flag);
       
  3467       }
       
  3468     }
       
  3469 
       
  3470     if (nb < m->topsize) { /* Allocate from new or extended top space */
       
  3471       size_t rsize = m->topsize -= nb;
       
  3472       mchunkptr p = m->top;
       
  3473       mchunkptr r = m->top = chunk_plus_offset(p, nb);
       
  3474       r->head = rsize | PINUSE_BIT;
       
  3475       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
       
  3476       check_top_chunk(m, m->top);
       
  3477       check_malloced_chunk(m, chunk2mem(p), nb);
       
  3478       return chunk2mem(p);
       
  3479     }
       
  3480   }
       
  3481 
       
  3482   return 0;
       
  3483 }
       
  3484 
       
  3485 /* -----------------------  system deallocation -------------------------- */
       
  3486 
       
  3487 static int sys_trim(mstate m, size_t pad) {
       
  3488   size_t released = 0;
       
  3489   if (pad < MAX_REQUEST && is_initialized(m)) {
       
  3490     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
       
  3491 
       
  3492     if (m->topsize > pad) {
       
  3493       /* Shrink top space in granularity-size units, keeping at least one */
       
  3494       size_t unit = mparams.granularity;
       
  3495       size_t extra = ((m->topsize - pad + (unit-1)) / unit - 1) * unit;
       
  3496       msegmentptr sp = segment_holding(m, (char*)m->top);
       
  3497 
       
  3498       if ((sp->sflags & IS_MMAPPED_BIT) != 0) {
       
  3499         if (HAVE_MMAP &&
       
  3500             sp->size >= extra &&
       
  3501             !has_segment_link(m, sp)) { /* can't shrink if pinned */
       
  3502           size_t newsize = sp->size - extra;
       
  3503           /* Prefer mremap, fall back to munmap */
       
  3504           if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
       
  3505               (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
       
  3506             released = extra;
       
  3507           }
       
  3508         }
       
  3509       }
       
  3510       else if (HAVE_MORECORE) {
       
  3511         if (extra >= MAX_SIZE_T / 2) /* Avoid wrapping negative */
       
  3512           extra = (MAX_SIZE_T / 2) + 1 - unit;
       
  3513         ACQUIRE_MORECORE_LOCK();
       
  3514         {
       
  3515           /* Make sure end of memory is where we last set it. */
       
  3516           char* old_brk = (char*)(CALL_MORECORE(0));
       
  3517           if (old_brk == sp->base + sp->size) {
       
  3518             char* rel_brk = (char*)(CALL_MORECORE(-extra));
       
  3519             char* new_brk = (char*)(CALL_MORECORE(0));
       
  3520             if (rel_brk != CMFAIL && new_brk < old_brk)
       
  3521               released = old_brk - new_brk;
       
  3522           }
       
  3523         }
       
  3524         RELEASE_MORECORE_LOCK();
       
  3525       }
       
  3526 
       
  3527       if (released != 0) {
       
  3528         sp->size -= released;
       
  3529         m->footprint -= released;
       
  3530         init_top(m, m->top, m->topsize - released);
       
  3531         check_top_chunk(m, m->top);
       
  3532       }
       
  3533     }
       
  3534 
       
  3535     /* Unmap any unused mmapped segments */
       
  3536     if (HAVE_MMAP && use_noncontiguous(m)) {
       
  3537       msegmentptr pred = 0;
       
  3538       msegmentptr sp = m->seg.next;
       
  3539       while (sp != 0) {
       
  3540         char* base = sp->base;
       
  3541         size_t size = sp->size;
       
  3542         msegmentptr next = sp->next;
       
  3543         if ((sp->sflags & IS_MMAPPED_BIT)) {
       
  3544           mchunkptr p = align_as_chunk(base);
       
  3545           size_t psize = chunksize(p);
       
  3546           /* Can unmap if first chunk holds entire segment and not pinned */
       
  3547           if (!cinuse(p) &&
       
  3548               p != m->top &&
       
  3549               segment_holds(sp, (char*)pred) &&
       
  3550               (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
       
  3551             tchunkptr tp = (tchunkptr)p;
       
  3552             msegment pseg = *pred;
       
  3553             pseg.next = next;
       
  3554             if (p == m->dv) {
       
  3555               m->dv = 0;
       
  3556               m->dvsize = 0;
       
  3557             }
       
  3558             else {
       
  3559               unlink_large_chunk(m, tp);
       
  3560             }
       
  3561             if (CALL_MUNMAP(base, size) == 0) {
       
  3562               /* relink next-pointer of list predecessor */
       
  3563               msegmentptr pp = &m->seg;
       
  3564               while (pp != 0) {
       
  3565                 if (pp->next == pred) {
       
  3566                   pp->next = sp;
       
  3567                   break;
       
  3568                 }
       
  3569                 pp = pp->next;
       
  3570               }
       
  3571               *sp = pseg;
       
  3572               released += size;
       
  3573               m->footprint -= size;
       
  3574             }
       
  3575             else { /* back out if cannot unmap */
       
  3576               insert_large_chunk(m, tp, psize);
       
  3577             }
       
  3578           }
       
  3579         }
       
  3580         pred = sp;
       
  3581         sp = next;
       
  3582       }
       
  3583     }
       
  3584 
       
  3585     /* On failure, disable autotrim to avoid repeated failed future calls */
       
  3586     if (released == 0)
       
  3587       m->trim_check = MAX_SIZE_T;
       
  3588   }
       
  3589 
       
  3590   return (released != 0)? 1 : 0;
       
  3591 }
       
  3592 
       
  3593 /* ---------------------------- malloc support --------------------------- */
       
  3594 
       
  3595 /* allocate a large request from the best fitting chunk in a treebin */
       
  3596 static void* tmalloc_large(mstate m, size_t nb) {
       
  3597   tchunkptr v = 0;
       
  3598   size_t rsize = -nb; /* Unsigned negation */
       
  3599   tchunkptr t;
       
  3600   bindex_t idx;
       
  3601   compute_tree_index(nb, idx);
       
  3602 
       
  3603   if ((t = *treebin_at(m, idx)) != 0) {
       
  3604     /* Traverse tree for this bin looking for node with size == nb */
       
  3605     size_t sizebits = nb << leftshift_for_tree_index(idx);
       
  3606     tchunkptr rst = 0;  /* The deepest untaken right subtree */
       
  3607     for (;;) {
       
  3608       tchunkptr rt;
       
  3609       size_t trem = chunksize(t) - nb;
       
  3610       if (trem < rsize) {
       
  3611         v = t;
       
  3612         if ((rsize = trem) == 0)
       
  3613           break;
       
  3614       }
       
  3615       rt = t->child[1];
       
  3616       t = t->child[(sizebits >> (SIZE_T_BITSIZE-1)) & 1];
       
  3617       if (rt != 0 && rt != t)
       
  3618         rst = rt;
       
  3619       if (t == 0) {
       
  3620         t = rst; /* set t to least subtree holding sizes > nb */
       
  3621         break;
       
  3622       }
       
  3623       sizebits <<= 1;
       
  3624     }
       
  3625   }
       
  3626 
       
  3627   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
       
  3628     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
       
  3629     if (leftbits != 0) {
       
  3630       bindex_t i;
       
  3631       binmap_t leastbit = least_bit(leftbits);
       
  3632       compute_bit2idx(leastbit, i);
       
  3633       t = *treebin_at(m, i);
       
  3634     }
       
  3635   }
       
  3636 
       
  3637   while (t != 0) { /* find smallest of tree or subtree */
       
  3638     size_t trem = chunksize(t) - nb;
       
  3639     if (trem < rsize) {
       
  3640       rsize = trem;
       
  3641       v = t;
       
  3642     }
       
  3643     t = leftmost_child(t);
       
  3644   }
       
  3645 
       
  3646   /*  If dv is a better fit, return 0 so malloc will use it */
       
  3647   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
       
  3648     if (RTCHECK(ok_address(m, v))) { /* split */
       
  3649       mchunkptr r = chunk_plus_offset(v, nb);
       
  3650       assert(chunksize(v) == rsize + nb);
       
  3651       if (RTCHECK(ok_next(v, r))) {
       
  3652         unlink_large_chunk(m, v);
       
  3653         if (rsize < MIN_CHUNK_SIZE)
       
  3654           set_inuse_and_pinuse(m, v, (rsize + nb));
       
  3655         else {
       
  3656           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
       
  3657           set_size_and_pinuse_of_free_chunk(r, rsize);
       
  3658           insert_chunk(m, r, rsize);
       
  3659         }
       
  3660         return chunk2mem(v);
       
  3661       }
       
  3662     }
       
  3663     CORRUPTION_ERROR_ACTION(m);
       
  3664   }
       
  3665   return 0;
       
  3666 }
       
  3667 
       
  3668 /* allocate a small request from the best fitting chunk in a treebin */
       
  3669 static void* tmalloc_small(mstate m, size_t nb) {
       
  3670   tchunkptr t, v;
       
  3671   size_t rsize;
       
  3672   bindex_t i;
       
  3673   binmap_t leastbit = least_bit(m->treemap);
       
  3674   compute_bit2idx(leastbit, i);
       
  3675 
       
  3676   v = t = *treebin_at(m, i);
       
  3677   rsize = chunksize(t) - nb;
       
  3678 
       
  3679   while ((t = leftmost_child(t)) != 0) {
       
  3680     size_t trem = chunksize(t) - nb;
       
  3681     if (trem < rsize) {
       
  3682       rsize = trem;
       
  3683       v = t;
       
  3684     }
       
  3685   }
       
  3686 
       
  3687   if (RTCHECK(ok_address(m, v))) {
       
  3688     mchunkptr r = chunk_plus_offset(v, nb);
       
  3689     assert(chunksize(v) == rsize + nb);
       
  3690     if (RTCHECK(ok_next(v, r))) {
       
  3691       unlink_large_chunk(m, v);
       
  3692       if (rsize < MIN_CHUNK_SIZE)
       
  3693         set_inuse_and_pinuse(m, v, (rsize + nb));
       
  3694       else {
       
  3695         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
       
  3696         set_size_and_pinuse_of_free_chunk(r, rsize);
       
  3697         replace_dv(m, r, rsize);
       
  3698       }
       
  3699       return chunk2mem(v);
       
  3700     }
       
  3701   }
       
  3702 
       
  3703   CORRUPTION_ERROR_ACTION(m);
       
  3704   return 0;
       
  3705 }
       
  3706 
       
  3707 /* --------------------------- realloc support --------------------------- */
       
  3708 
       
  3709 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
       
  3710   if (bytes >= MAX_REQUEST) {
       
  3711     MALLOC_FAILURE_ACTION;
       
  3712     return 0;
       
  3713   }
       
  3714   if (!PREACTION(m)) {
       
  3715     mchunkptr oldp = mem2chunk(oldmem);
       
  3716     size_t oldsize = chunksize(oldp);
       
  3717     mchunkptr next = chunk_plus_offset(oldp, oldsize);
       
  3718     mchunkptr newp = 0;
       
  3719     void* extra = 0;
       
  3720 
       
  3721     /* Try to either shrink or extend into top. Else malloc-copy-free */
       
  3722 
       
  3723     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
       
  3724                 ok_next(oldp, next) && ok_pinuse(next))) {
       
  3725       size_t nb = request2size(bytes);
       
  3726       if (is_mmapped(oldp))
       
  3727         newp = mmap_resize(m, oldp, nb);
       
  3728       else if (oldsize >= nb) { /* already big enough */
       
  3729         size_t rsize = oldsize - nb;
       
  3730         newp = oldp;
       
  3731         if (rsize >= MIN_CHUNK_SIZE) {
       
  3732           mchunkptr remainder = chunk_plus_offset(newp, nb);
       
  3733           set_inuse(m, newp, nb);
       
  3734           set_inuse(m, remainder, rsize);
       
  3735           extra = chunk2mem(remainder);
       
  3736         }
       
  3737       }
       
  3738       else if (next == m->top && oldsize + m->topsize > nb) {
       
  3739         /* Expand into top */
       
  3740         size_t newsize = oldsize + m->topsize;
       
  3741         size_t newtopsize = newsize - nb;
       
  3742         mchunkptr newtop = chunk_plus_offset(oldp, nb);
       
  3743         set_inuse(m, oldp, nb);
       
   3744         newtop->head = newtopsize | PINUSE_BIT;
       
  3745         m->top = newtop;
       
  3746         m->topsize = newtopsize;
       
  3747         newp = oldp;
       
  3748       }
       
  3749     }
       
  3750     else {
       
  3751       USAGE_ERROR_ACTION(m, oldmem);
       
  3752       POSTACTION(m);
       
  3753       return 0;
       
  3754     }
       
  3755 
       
  3756     POSTACTION(m);
       
  3757 
       
  3758     if (newp != 0) {
       
  3759       if (extra != 0) {
       
  3760         internal_free(m, extra);
       
  3761       }
       
  3762       check_inuse_chunk(m, newp);
       
  3763       return chunk2mem(newp);
       
  3764     }
       
  3765     else {
       
  3766       void* newmem = internal_malloc(m, bytes);
       
  3767       if (newmem != 0) {
       
  3768         size_t oc = oldsize - overhead_for(oldp);
       
  3769         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
       
  3770         internal_free(m, oldmem);
       
  3771       }
       
  3772       return newmem;
       
  3773     }
       
  3774   }
       
  3775   return 0;
       
  3776 }
       
  3777 
       
  3778 /* --------------------------- memalign support -------------------------- */
       
  3779 
       
  3780 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
       
  3781   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
       
  3782     return internal_malloc(m, bytes);
       
  3783   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
       
  3784     alignment = MIN_CHUNK_SIZE;
       
  3785   if ((alignment & (alignment-1)) != 0) {/* Ensure a power of 2 */
       
  3786     size_t a = MALLOC_ALIGNMENT * 2;
       
  3787     while (a < alignment) a <<= 1;
       
  3788     alignment = a;
       
  3789   }
       
  3790 
       
  3791   if (bytes >= MAX_REQUEST - alignment) {
       
  3792     MALLOC_FAILURE_ACTION;
       
  3793   }
       
  3794   else {
       
  3795     size_t nb = request2size(bytes);
       
  3796     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
       
  3797     char* mem = (char*)internal_malloc(m, req);
       
  3798     if (mem != 0) {
       
  3799       void* leader = 0;
       
  3800       void* trailer = 0;
       
  3801       mchunkptr p = mem2chunk(mem);
       
  3802 
       
  3803       if (PREACTION(m)) return 0;
       
  3804       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
       
  3805         /*
       
  3806           Find an aligned spot inside chunk.  Since we need to give
       
  3807           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
       
  3808           the first calculation places us at a spot with less than
       
  3809           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
       
  3810           We've allocated enough total room so that this is always
       
  3811           possible.
       
  3812         */
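        /* Illustrative sketch (hypothetical numbers): with alignment == 64
           and mem == 0x1038, the first aligned candidate is 0x1040, which
           would leave only 8 leading bytes (< MIN_CHUNK_SIZE), so the next
           aligned spot 0x1080 is used instead. */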
       
  3813         char* brk = (char*)mem2chunk((size_t)(((size_t)(mem + alignment-1)) &
       
  3814                                               -alignment));
       
  3815         char* pos = ((size_t)(brk - (char*)(p)) >= MIN_CHUNK_SIZE)?
       
  3816           brk : brk+alignment;
       
  3817         mchunkptr newp = (mchunkptr)pos;
       
  3818         size_t leadsize = pos - (char*)(p);
       
  3819         size_t newsize = chunksize(p) - leadsize;
       
  3820 
       
  3821         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
       
  3822           newp->prev_foot = p->prev_foot + leadsize;
       
  3823           newp->head = (newsize|CINUSE_BIT);
       
  3824         }
       
  3825         else { /* Otherwise, give back leader, use the rest */
       
  3826           set_inuse(m, newp, newsize);
       
  3827           set_inuse(m, p, leadsize);
       
  3828           leader = chunk2mem(p);
       
  3829         }
       
  3830         p = newp;
       
  3831       }
       
  3832 
       
  3833       /* Give back spare room at the end */
       
  3834       if (!is_mmapped(p)) {
       
  3835         size_t size = chunksize(p);
       
  3836         if (size > nb + MIN_CHUNK_SIZE) {
       
  3837           size_t remainder_size = size - nb;
       
  3838           mchunkptr remainder = chunk_plus_offset(p, nb);
       
  3839           set_inuse(m, p, nb);
       
  3840           set_inuse(m, remainder, remainder_size);
       
  3841           trailer = chunk2mem(remainder);
       
  3842         }
       
  3843       }
       
  3844 
       
  3845       assert (chunksize(p) >= nb);
       
  3846       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
       
  3847       check_inuse_chunk(m, p);
       
  3848       POSTACTION(m);
       
  3849       if (leader != 0) {
       
  3850         internal_free(m, leader);
       
  3851       }
       
  3852       if (trailer != 0) {
       
  3853         internal_free(m, trailer);
       
  3854       }
       
  3855       return chunk2mem(p);
       
  3856     }
       
  3857   }
       
  3858   return 0;
       
  3859 }
       
  3860 
       
  3861 /* ------------------------ comalloc/coalloc support --------------------- */
       
  3862 
       
  3863 static void** ialloc(mstate m,
       
  3864                      size_t n_elements,
       
  3865                      size_t* sizes,
       
  3866                      int opts,
       
  3867                      void* chunks[]) {
       
  3868   /*
       
  3869     This provides common support for independent_X routines, handling
       
  3870     all of the combinations that can result.
       
  3871 
       
  3872     The opts arg has:
       
  3873     bit 0 set if all elements are same size (using sizes[0])
       
  3874     bit 1 set if elements should be zeroed
       
  3875   */
       
  3876 
       
  3877   size_t    element_size;   /* chunksize of each element, if all same */
       
  3878   size_t    contents_size;  /* total size of elements */
       
  3879   size_t    array_size;     /* request size of pointer array */
       
  3880   void*     mem;            /* malloced aggregate space */
       
  3881   mchunkptr p;              /* corresponding chunk */
       
  3882   size_t    remainder_size; /* remaining bytes while splitting */
       
  3883   void**    marray;         /* either "chunks" or malloced ptr array */
       
  3884   mchunkptr array_chunk;    /* chunk for malloced ptr array */
       
  3885   flag_t    was_enabled;    /* to disable mmap */
       
  3886   size_t    size;
       
  3887   size_t    i;
       
  3888 
       
  3889   /* compute array length, if needed */
       
  3890   if (chunks != 0) {
       
  3891     if (n_elements == 0)
       
  3892       return chunks; /* nothing to do */
       
  3893     marray = chunks;
       
  3894     array_size = 0;
       
  3895   }
       
  3896   else {
       
  3897     /* if empty req, must still return chunk representing empty array */
       
  3898     if (n_elements == 0)
       
  3899       return (void**)internal_malloc(m, 0);
       
  3900     marray = 0;
       
  3901     array_size = request2size(n_elements * (sizeof(void*)));
       
  3902   }
       
  3903 
       
  3904   /* compute total element size */
       
  3905   if (opts & 0x1) { /* all-same-size */
       
  3906     element_size = request2size(*sizes);
       
  3907     contents_size = n_elements * element_size;
       
  3908   }
       
  3909   else { /* add up all the sizes */
       
  3910     element_size = 0;
       
  3911     contents_size = 0;
       
  3912     for (i = 0; i != n_elements; ++i)
       
  3913       contents_size += request2size(sizes[i]);
       
  3914   }
       
  3915 
       
  3916   size = contents_size + array_size;
       
  3917 
       
  3918   /*
       
  3919      Allocate the aggregate chunk.  First disable direct-mmapping so
       
  3920      malloc won't use it, since we would not be able to later
       
  3921      free/realloc space internal to a segregated mmap region.
       
  3922   */
       
  3923   was_enabled = use_mmap(m);
       
  3924   disable_mmap(m);
       
  3925   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
       
  3926   if (was_enabled)
       
  3927     enable_mmap(m);
       
  3928   if (mem == 0)
       
  3929     return 0;
       
  3930 
       
  3931   if (PREACTION(m)) return 0;
       
  3932   p = mem2chunk(mem);
       
  3933   remainder_size = chunksize(p);
       
  3934 
       
  3935   assert(!is_mmapped(p));
       
  3936 
       
  3937   if (opts & 0x2) {       /* optionally clear the elements */
       
  3938     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
       
  3939   }
       
  3940 
       
  3941   /* If not provided, allocate the pointer array as final part of chunk */
       
  3942   if (marray == 0) {
       
  3943     size_t  array_chunk_size;
       
  3944     array_chunk = chunk_plus_offset(p, contents_size);
       
  3945     array_chunk_size = remainder_size - contents_size;
       
  3946     marray = (void**) (chunk2mem(array_chunk));
       
  3947     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
       
  3948     remainder_size = contents_size;
       
  3949   }
       
  3950 
       
  3951   /* split out elements */
       
  3952   for (i = 0; ; ++i) {
       
  3953     marray[i] = chunk2mem(p);
       
  3954     if (i != n_elements-1) {
       
  3955       if (element_size != 0)
       
  3956         size = element_size;
       
  3957       else
       
  3958         size = request2size(sizes[i]);
       
  3959       remainder_size -= size;
       
  3960       set_size_and_pinuse_of_inuse_chunk(m, p, size);
       
  3961       p = chunk_plus_offset(p, size);
       
  3962     }
       
  3963     else { /* the final element absorbs any overallocation slop */
       
  3964       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
       
  3965       break;
       
  3966     }
       
  3967   }
       
  3968 
       
  3969 #if DEBUG
       
  3970   if (marray != chunks) {
       
  3971     /* final element must have exactly exhausted chunk */
       
  3972     if (element_size != 0) {
       
  3973       assert(remainder_size == element_size);
       
  3974     }
       
  3975     else {
       
  3976       assert(remainder_size == request2size(sizes[i]));
       
  3977     }
       
  3978     check_inuse_chunk(m, mem2chunk(marray));
       
  3979   }
       
  3980   for (i = 0; i != n_elements; ++i)
       
  3981     check_inuse_chunk(m, mem2chunk(marray[i]));
       
  3982 
       
  3983 #endif
       
  3984 
       
  3985   POSTACTION(m);
       
  3986   return marray;
       
  3987 }
       
  3988 
       
  3989 
       
  3990 /* -------------------------- public routines ---------------------------- */
       
  3991 
       
  3992 #if !ONLY_MSPACES
       
  3993 
       
  3994 void* dlmalloc(size_t bytes) {
       
  3995   /*
       
  3996      Basic algorithm:
       
  3997      If a small request (< 256 bytes minus per-chunk overhead):
       
  3998        1. If one exists, use a remainderless chunk in associated smallbin.
       
  3999           (Remainderless means that there are too few excess bytes to
       
  4000           represent as a chunk.)
       
  4001        2. If it is big enough, use the dv chunk, which is normally the
       
  4002           chunk adjacent to the one used for the most recent small request.
       
  4003        3. If one exists, split the smallest available chunk in a bin,
       
  4004           saving remainder in dv.
       
  4005        4. If it is big enough, use the top chunk.
       
  4006        5. If available, get memory from system and use it
       
  4007      Otherwise, for a large request:
       
  4008        1. Find the smallest available binned chunk that fits, and use it
       
  4009           if it is better fitting than dv chunk, splitting if necessary.
       
  4010        2. If better fitting than any binned chunk, use the dv chunk.
       
  4011        3. If it is big enough, use the top chunk.
       
  4012        4. If request size >= mmap threshold, try to directly mmap this chunk.
       
  4013        5. If available, get memory from system and use it
       
  4014 
       
   4015      The ugly gotos here ensure that postaction occurs along all paths.
       
  4016   */
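  /*
     Worked example (a sketch assuming 4-byte size_t, 8-byte alignment and
     no FOOTERS, so CHUNK_OVERHEAD == 4): a 20-byte request pads to
     (20 + 4 + 7) & ~7 == 24 bytes, a small request with
     small_index(24) == 24 >> 3 == 3, so step 1 first looks for a
     remainderless chunk in smallbin 3 (or 4) before trying dv, larger
     bins, top, and finally sys_alloc.
  */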
       
  4017 
       
  4018   if (!PREACTION(gm)) {
       
  4019     void* mem;
       
  4020     size_t nb;
       
  4021     if (bytes <= MAX_SMALL_REQUEST) {
       
  4022       bindex_t idx;
       
  4023       binmap_t smallbits;
       
  4024       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
       
  4025       idx = small_index(nb);
       
  4026       smallbits = gm->smallmap >> idx;
       
  4027 
       
  4028       if ((smallbits & 0x3) != 0) { /* Remainderless fit to a smallbin. */
       
  4029         mchunkptr b, p;
       
  4030         idx += ~smallbits & 1;      /* Uses next bin if idx empty */
       
  4031         b = smallbin_at(gm, idx);
       
  4032         p = b->fd;
       
  4033         assert(chunksize(p) == small_index2size(idx));
       
  4034         unlink_first_small_chunk(gm, b, p, idx);
       
  4035         set_inuse_and_pinuse(gm, p, small_index2size(idx));
       
  4036         mem = chunk2mem(p);
       
  4037         check_malloced_chunk(gm, mem, nb);
       
  4038         goto postaction;
       
  4039       }
       
  4040 
       
  4041       else if (nb > gm->dvsize) {
       
  4042         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
       
  4043           mchunkptr b, p, r;
       
  4044           size_t rsize;
       
  4045           bindex_t i;
       
  4046           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
       
  4047           binmap_t leastbit = least_bit(leftbits);
       
  4048           compute_bit2idx(leastbit, i);
       
  4049           b = smallbin_at(gm, i);
       
  4050           p = b->fd;
       
  4051           assert(chunksize(p) == small_index2size(i));
       
  4052           unlink_first_small_chunk(gm, b, p, i);
       
  4053           rsize = small_index2size(i) - nb;
       
  4054           /* Fit here cannot be remainderless if 4byte sizes */
       
  4055           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
       
  4056             set_inuse_and_pinuse(gm, p, small_index2size(i));
       
  4057           else {
       
  4058             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
       
  4059             r = chunk_plus_offset(p, nb);
       
  4060             set_size_and_pinuse_of_free_chunk(r, rsize);
       
  4061             replace_dv(gm, r, rsize);
       
  4062           }
       
  4063           mem = chunk2mem(p);
       
  4064           check_malloced_chunk(gm, mem, nb);
       
  4065           goto postaction;
       
  4066         }
       
  4067 
       
  4068         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
       
  4069           check_malloced_chunk(gm, mem, nb);
       
  4070           goto postaction;
       
  4071         }
       
  4072       }
       
  4073     }
       
  4074     else if (bytes >= MAX_REQUEST)
       
  4075       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
       
  4076     else {
       
  4077       nb = pad_request(bytes);
       
  4078       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
       
  4079         check_malloced_chunk(gm, mem, nb);
       
  4080         goto postaction;
       
  4081       }
       
  4082     }
       
  4083 
       
  4084     if (nb <= gm->dvsize) {
       
  4085       size_t rsize = gm->dvsize - nb;
       
  4086       mchunkptr p = gm->dv;
       
  4087       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
       
  4088         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
       
  4089         gm->dvsize = rsize;
       
  4090         set_size_and_pinuse_of_free_chunk(r, rsize);
       
  4091         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
       
  4092       }
       
  4093       else { /* exhaust dv */
       
  4094         size_t dvs = gm->dvsize;
       
  4095         gm->dvsize = 0;
       
  4096         gm->dv = 0;
       
  4097         set_inuse_and_pinuse(gm, p, dvs);
       
  4098       }
       
  4099       mem = chunk2mem(p);
       
  4100       check_malloced_chunk(gm, mem, nb);
       
  4101       goto postaction;
       
  4102     }
       
  4103 
       
  4104     else if (nb < gm->topsize) { /* Split top */
       
  4105       size_t rsize = gm->topsize -= nb;
       
  4106       mchunkptr p = gm->top;
       
  4107       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
       
  4108       r->head = rsize | PINUSE_BIT;
       
  4109       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
       
  4110       mem = chunk2mem(p);
       
  4111       check_top_chunk(gm, gm->top);
       
  4112       check_malloced_chunk(gm, mem, nb);
       
  4113       goto postaction;
       
  4114     }
       
  4115 
       
  4116     mem = sys_alloc(gm, nb);
       
  4117 
       
  4118   postaction:
       
  4119     POSTACTION(gm);
       
  4120     return mem;
       
  4121   }
       
  4122 
       
  4123   return 0;
       
  4124 }
       
  4125 
       
  4126 void dlfree(void* mem) {
       
  4127   /*
       
   4128      Consolidate freed chunks with preceding or succeeding bordering
       
  4129      free chunks, if they exist, and then place in a bin.  Intermixed
       
  4130      with special cases for top, dv, mmapped chunks, and usage errors.
       
  4131   */
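  /*
     For instance (illustrative only): freeing a chunk whose predecessor is
     already free and whose successor is the top chunk first consolidates
     backward (unlinking the predecessor from its bin unless it is dv), then
     merges the whole block into top and, if top now exceeds the trim
     threshold, calls sys_trim.
  */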
       
  4132 
       
  4133   if (mem != 0) {
       
  4134     mchunkptr p  = mem2chunk(mem);
       
  4135 #if FOOTERS
       
  4136     mstate fm = get_mstate_for(p);
       
  4137     if (!ok_magic(fm)) {
       
  4138       USAGE_ERROR_ACTION(fm, p);
       
  4139       return;
       
  4140     }
       
  4141 #else
       
  4142 #define fm gm
       
  4143 #endif
       
  4144     if (!PREACTION(fm)) {
       
  4145       check_inuse_chunk(fm, p);
       
  4146       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
       
  4147         size_t psize = chunksize(p);
       
  4148         mchunkptr next = chunk_plus_offset(p, psize);
       
  4149         if (!pinuse(p)) {
       
  4150           size_t prevsize = p->prev_foot;
       
  4151           if ((prevsize & IS_MMAPPED_BIT) != 0) {
       
  4152             prevsize &= ~IS_MMAPPED_BIT;
       
  4153             psize += prevsize + MMAP_FOOT_PAD;
       
  4154             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
       
  4155               fm->footprint -= psize;
       
  4156             goto postaction;
       
  4157           }
       
  4158           else {
       
  4159             mchunkptr prev = chunk_minus_offset(p, prevsize);
       
  4160             psize += prevsize;
       
  4161             p = prev;
       
  4162             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
       
  4163               if (p != fm->dv) {
       
  4164                 unlink_chunk(fm, p, prevsize);
       
  4165               }
       
  4166               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
       
  4167                 fm->dvsize = psize;
       
  4168                 set_free_with_pinuse(p, psize, next);
       
  4169                 goto postaction;
       
  4170               }
       
  4171             }
       
  4172             else
       
  4173               goto erroraction;
       
  4174           }
       
  4175         }
       
  4176 
       
  4177         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
       
  4178           if (!cinuse(next)) {  /* consolidate forward */
       
  4179             if (next == fm->top) {
       
  4180               size_t tsize = fm->topsize += psize;
       
  4181               fm->top = p;
       
  4182               p->head = tsize | PINUSE_BIT;
       
  4183               if (p == fm->dv) {
       
  4184                 fm->dv = 0;
       
  4185                 fm->dvsize = 0;
       
  4186               }
       
  4187               if (should_trim(fm, tsize))
       
  4188                 sys_trim(fm, 0);
       
  4189               goto postaction;
       
  4190             }
       
  4191             else if (next == fm->dv) {
       
  4192               size_t dsize = fm->dvsize += psize;
       
  4193               fm->dv = p;
       
  4194               set_size_and_pinuse_of_free_chunk(p, dsize);
       
  4195               goto postaction;
       
  4196             }
       
  4197             else {
       
  4198               size_t nsize = chunksize(next);
       
  4199               psize += nsize;
       
  4200               unlink_chunk(fm, next, nsize);
       
  4201               set_size_and_pinuse_of_free_chunk(p, psize);
       
  4202               if (p == fm->dv) {
       
  4203                 fm->dvsize = psize;
       
  4204                 goto postaction;
       
  4205               }
       
  4206             }
       
  4207           }
       
  4208           else
       
  4209             set_free_with_pinuse(p, psize, next);
       
  4210           insert_chunk(fm, p, psize);
       
  4211           check_free_chunk(fm, p);
       
  4212           goto postaction;
       
  4213         }
       
  4214       }
       
  4215     erroraction:
       
  4216       USAGE_ERROR_ACTION(fm, p);
       
  4217     postaction:
       
  4218       POSTACTION(fm);
       
  4219     }
       
  4220   }
       
  4221 #if !FOOTERS
       
  4222 #undef fm
       
  4223 #endif
       
  4224 }
       
  4225 
       
  4226 void* dlcalloc(size_t n_elements, size_t elem_size) {
       
  4227   void* mem;
       
  4228   size_t req = 0;
       
  4229   if (n_elements != 0) {
       
  4230     req = n_elements * elem_size;
       
  4231     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
       
  4232         (req / n_elements != elem_size))
       
  4233       req = MAX_SIZE_T; /* force downstream failure on overflow */
       
  4234   }
       
  4235   mem = dlmalloc(req);
       
  4236   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
       
  4237     memset(mem, 0, req);
       
  4238   return mem;
       
  4239 }
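/*
  Overflow note (a worked example, not part of the original sources): with a
  32-bit size_t, dlcalloc(65536, 65537) computes req = 65536 * 65537, which
  wraps around to 65536; the division check above then sees that
  req / n_elements != elem_size and forces req to MAX_SIZE_T, so the request
  fails instead of silently returning an undersized block.
*/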
       
  4240 
       
  4241 void* dlrealloc(void* oldmem, size_t bytes) {
       
  4242   if (oldmem == 0)
       
  4243     return dlmalloc(bytes);
       
  4244   else {
       
  4245 #if ! FOOTERS
       
  4246     mstate m = gm;
       
  4247 #else
       
  4248     mstate m = get_mstate_for(mem2chunk(oldmem));
       
  4249     if (!ok_magic(m)) {
       
  4250       USAGE_ERROR_ACTION(m, oldmem);
       
  4251       return 0;
       
  4252     }
       
  4253 #endif
       
  4254     return internal_realloc(m, oldmem, bytes);
       
  4255   }
       
  4256 }
       
  4257 
       
  4258 void* dlmemalign(size_t alignment, size_t bytes) {
       
  4259   return internal_memalign(gm, alignment, bytes);
       
  4260 }
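/*
  A minimal usage sketch for dlmemalign (hypothetical caller code, assuming
  the dl-prefixed entry points are the ones exposed by this build):

    #include <assert.h>
    #include <stdint.h>

    void example_aligned_buffer(void) {
      // Request 1 KB aligned to a 64-byte boundary; alignment must be a
      // power of two (smaller or non-power-of-two values are adjusted above).
      void* p = dlmemalign(64, 1024);
      if (p != 0) {
        assert(((uintptr_t)p % 64) == 0);
        dlfree(p);
      }
    }
*/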
       
  4261 
       
  4262 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
       
  4263                                  void* chunks[]) {
       
  4264   size_t sz = elem_size; /* serves as 1-element array */
       
  4265   return ialloc(gm, n_elements, &sz, 3, chunks);
       
  4266 }
       
  4267 
       
  4268 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
       
  4269                                    void* chunks[]) {
       
  4270   return ialloc(gm, n_elements, sizes, 0, chunks);
       
  4271 }
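/*
  A usage sketch for dlindependent_comalloc (hypothetical caller code): it
  carves one underlying allocation into separately freeable pieces, which is
  handy for a header plus several same-lifetime arrays.

    struct example_header { int n; };

    void example_comalloc(int n) {
      size_t sizes[3];
      void*  parts[3];
      sizes[0] = sizeof(struct example_header);
      sizes[1] = (size_t)n * sizeof(int);     // element data
      sizes[2] = (size_t)n * sizeof(double);  // per-element weights
      if (dlindependent_comalloc(3, sizes, parts) != 0) {
        ((struct example_header*)parts[0])->n = n;
        // ... use parts[1] and parts[2] ...
        dlfree(parts[2]);
        dlfree(parts[1]);
        dlfree(parts[0]);
      }
    }
*/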
       
  4272 
       
  4273 void* dlvalloc(size_t bytes) {
       
  4274   size_t pagesz;
       
  4275   init_mparams();
       
  4276   pagesz = mparams.page_size;
       
  4277   return dlmemalign(pagesz, bytes);
       
  4278 }
       
  4279 
       
  4280 void* dlpvalloc(size_t bytes) {
       
  4281   size_t pagesz;
       
  4282   init_mparams();
       
  4283   pagesz = mparams.page_size;
       
  4284   return dlmemalign(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
       
  4285 }
       
  4286 
       
  4287 int dlmalloc_trim(size_t pad) {
       
  4288   int result = 0;
       
  4289   if (!PREACTION(gm)) {
       
  4290     result = sys_trim(gm, pad);
       
  4291     POSTACTION(gm);
       
  4292   }
       
  4293   return result;
       
  4294 }
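/*
  Usage note (a sketch): after a burst of large allocations has been freed,
  dlmalloc_trim(0) asks the allocator to return unused top/segment space to
  the system; it returns 1 only if some memory was actually released.

    void example_trim(void) {
      (void)dlmalloc_trim(0);   // 0 = keep no extra pad beyond overhead
    }
*/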
       
  4295 
       
  4296 size_t dlmalloc_footprint() {
       
  4297   return gm->footprint;
       
  4298 }
       
  4299 
       
  4300 #if !NO_MALLINFO
       
  4301 struct mallinfo dlmallinfo() {
       
  4302   return internal_mallinfo(gm);
       
  4303 }
       
  4304 #endif
       
  4305 
       
  4306 void dlmalloc_stats() {
       
  4307   internal_malloc_stats(gm);
       
  4308 }
       
  4309 
       
  4310 size_t dlmalloc_usable_size(void* mem) {
       
  4311   if (mem != 0) {
       
  4312     mchunkptr p = mem2chunk(mem);
       
  4313     if (cinuse(p))
       
  4314       return chunksize(p) - overhead_for(p);
       
  4315   }
       
  4316   return 0;
       
  4317 }
       
  4318 
       
  4319 int dlmallopt(int param_number, int value) {
       
  4320   return change_mparam(param_number, value);
       
  4321 }
       
  4322 
       
  4323 #endif
       
  4324 
       
  4325 /* ----------------------------- user mspaces ---------------------------- */
       
  4326 
       
  4327 #if MSPACES
       
  4328 
       
  4329 static mstate init_user_mstate(char* tbase, size_t tsize) {
       
  4330   size_t msize = pad_request(sizeof(struct malloc_state));
       
  4331   mchunkptr mn;
       
  4332   mchunkptr msp = align_as_chunk(tbase);
       
  4333   mstate m = (mstate)(chunk2mem(msp));
       
  4334   memset(m, 0, msize);
       
  4335   msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
       
  4336   m->seg.base = m->least_addr = tbase;
       
  4337   m->seg.size = m->footprint = m->max_footprint = tsize;
       
  4338   m->magic = mparams.magic;
       
  4339   m->mflags = mparams.default_mflags;
       
  4340   disable_contiguous(m);
       
  4341   init_bins(m);
       
  4342   mn = next_chunk(mem2chunk(m));
       
  4343   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
       
  4344   check_top_chunk(m, m->top);
       
  4345   return m;
       
  4346 }
       
  4347 
       
  4348 mspace create_mspace(size_t capacity, int locked) {
       
  4349   mstate m = 0;
       
  4350   size_t msize = pad_request(sizeof(struct malloc_state));
       
  4351   init_mparams(); /* Ensure pagesize etc initialized */
       
  4352 
       
  4353   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
       
  4354     size_t rs = ((capacity == 0)? mparams.granularity :
       
  4355                  (capacity + TOP_FOOT_SIZE + msize));
       
  4356     flag_t mmap_flag = IS_MMAPPED_BIT;
       
  4357     size_t tsize = granularity_align(rs);
       
  4358     char* tbase = (char*)(CALL_MMAP(tsize));
       
  4359     if (tbase != CMFAIL) {
       
  4360       m = init_user_mstate(tbase, tsize);
       
  4361       m->seg.sflags = mmap_flag;
       
  4362       set_lock(m, locked);
       
  4363     }
       
  4364   }
       
  4365   return (mspace)m;
       
  4366 }
       
  4367 
       
  4368 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
       
  4369   mstate m = 0;
       
  4370   size_t msize = pad_request(sizeof(struct malloc_state));
       
  4371   init_mparams(); /* Ensure pagesize etc initialized */
       
  4372 
       
  4373   if (capacity > msize + TOP_FOOT_SIZE &&
       
  4374       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
       
  4375     m = init_user_mstate((char*)base, capacity);
       
  4376     set_lock(m, locked);
       
  4377   }
       
  4378   return (mspace)m;
       
  4379 }
       
  4380 
       
  4381 size_t destroy_mspace(mspace msp) {
       
  4382   size_t freed = 0;
       
  4383   mstate ms = (mstate)msp;
       
  4384   if (ok_magic(ms)) {
       
  4385     size_t msize = ms->seg.size;
       
  4386     flag_t mflag = ms->seg.sflags;
       
  4387     /* free each segment, getting each link before unmapping it */
       
  4388     msegmentptr sp = ms->seg.next;
       
  4389     if (sp != 0) {
       
  4390       msegmentptr next = sp->next;
       
  4391       char* nextbase = sp->base;
       
  4392       size_t nextsize = sp->size;
       
  4393       flag_t nextflag = sp->sflags;
       
  4394       while (sp != 0) {
       
  4395         char* base = nextbase;
       
  4396         size_t size = nextsize;
       
  4397         flag_t flag = nextflag;
       
  4398         if (next != 0) {
       
  4399           next = next->next;
       
  4400           if (next != 0) {
       
  4401             nextbase = next->base;
       
  4402             nextsize = next->size;
       
  4403             nextflag = next->sflags;
       
  4404           }
       
  4405         }
       
  4406         if ((flag & IS_MMAPPED_BIT) &&
       
  4407             CALL_MUNMAP(base, size) == 0)
       
  4408           freed += size;
       
  4409         sp = next;
       
  4410       }
       
  4411     }
       
  4412 
       
  4413     /* free main space */
       
  4414     if ((mflag & IS_MMAPPED_BIT) &&
       
  4415         CALL_MUNMAP((char*)(mem2chunk(ms)), msize) == 0)
       
  4416       freed += msize;
       
  4417   }
       
  4418   else {
       
  4419     USAGE_ERROR_ACTION(ms,ms);
       
  4420   }
       
  4421   return freed;
       
  4422 }
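/*
  A minimal mspace usage sketch (hypothetical caller code, assuming MSPACES
  is enabled as in this #if block): create an isolated heap, allocate from
  it, and tear the whole arena down in one call.

    void example_mspace(void) {
      mspace arena = create_mspace(0, 0);   // default capacity, no locking
      if (arena != 0) {
        void* p = mspace_malloc(arena, 128);
        if (p != 0)
          mspace_free(arena, p);
        destroy_mspace(arena);              // unmaps all arena segments
      }
    }
*/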
       
  4423 
       
  4424 /*
       
  4425   mspace versions of routines are near-clones of the global
       
  4426   versions. This is not so nice but better than the alternatives.
       
  4427 */
       
  4428 
       
  4429 
       
  4430 void* mspace_malloc(mspace msp, size_t bytes) {
       
  4431   mstate ms = (mstate)msp;
       
  4432   if (!ok_magic(ms)) {
       
  4433     USAGE_ERROR_ACTION(ms,ms);
       
  4434     return 0;
       
  4435   }
       
  4436   if (!PREACTION(ms)) {
       
  4437     void* mem;
       
  4438     size_t nb;
       
  4439     if (bytes <= MAX_SMALL_REQUEST) {
       
  4440       bindex_t idx;
       
  4441       binmap_t smallbits;
       
  4442       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
       
  4443       idx = small_index(nb);
       
  4444       smallbits = ms->smallmap >> idx;
       
  4445 
       
  4446       if ((smallbits & 0x3) != 0) { /* Remainderless fit to a smallbin. */
       
  4447         mchunkptr b, p;
       
  4448         idx += ~smallbits & 1;      /* Uses next bin if idx empty */
       
  4449         b = smallbin_at(ms, idx);
       
  4450         p = b->fd;
       
  4451         assert(chunksize(p) == small_index2size(idx));
       
  4452         unlink_first_small_chunk(ms, b, p, idx);
       
  4453         set_inuse_and_pinuse(ms, p, small_index2size(idx));
       
  4454         mem = chunk2mem(p);
       
  4455         check_malloced_chunk(ms, mem, nb);
       
  4456         goto postaction;
       
  4457       }
       
  4458 
       
  4459       else if (nb > ms->dvsize) {
       
  4460         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
       
  4461           mchunkptr b, p, r;
       
  4462           size_t rsize;
       
  4463           bindex_t i;
       
  4464           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
       
  4465           binmap_t leastbit = least_bit(leftbits);
       
  4466           compute_bit2idx(leastbit, i);
       
  4467           b = smallbin_at(ms, i);
       
  4468           p = b->fd;
       
  4469           assert(chunksize(p) == small_index2size(i));
       
  4470           unlink_first_small_chunk(ms, b, p, i);
       
  4471           rsize = small_index2size(i) - nb;
       
  4472           /* Fit here cannot be remainderless if 4byte sizes */
       
  4473           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
       
  4474             set_inuse_and_pinuse(ms, p, small_index2size(i));
       
  4475           else {
       
  4476             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
       
  4477             r = chunk_plus_offset(p, nb);
       
  4478             set_size_and_pinuse_of_free_chunk(r, rsize);
       
  4479             replace_dv(ms, r, rsize);
       
  4480           }
       
  4481           mem = chunk2mem(p);
       
  4482           check_malloced_chunk(ms, mem, nb);
       
  4483           goto postaction;
       
  4484         }
       
  4485 
       
  4486         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
       
  4487           check_malloced_chunk(ms, mem, nb);
       
  4488           goto postaction;
       
  4489         }
       
  4490       }
       
  4491     }
       
  4492     else if (bytes >= MAX_REQUEST)
       
  4493       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
       
  4494     else {
       
  4495       nb = pad_request(bytes);
       
  4496       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
       
  4497         check_malloced_chunk(ms, mem, nb);
       
  4498         goto postaction;
       
  4499       }
       
  4500     }
       
  4501 
       
  4502     if (nb <= ms->dvsize) {
       
  4503       size_t rsize = ms->dvsize - nb;
       
  4504       mchunkptr p = ms->dv;
       
  4505       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
       
  4506         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
       
  4507         ms->dvsize = rsize;
       
  4508         set_size_and_pinuse_of_free_chunk(r, rsize);
       
  4509         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
       
  4510       }
       
  4511       else { /* exhaust dv */
       
  4512         size_t dvs = ms->dvsize;
       
  4513         ms->dvsize = 0;
       
  4514         ms->dv = 0;
       
  4515         set_inuse_and_pinuse(ms, p, dvs);
       
  4516       }
       
  4517       mem = chunk2mem(p);
       
  4518       check_malloced_chunk(ms, mem, nb);
       
  4519       goto postaction;
       
  4520     }
       
  4521 
       
  4522     else if (nb < ms->topsize) { /* Split top */
       
  4523       size_t rsize = ms->topsize -= nb;
       
  4524       mchunkptr p = ms->top;
       
  4525       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
       
  4526       r->head = rsize | PINUSE_BIT;
       
  4527       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
       
  4528       mem = chunk2mem(p);
       
  4529       check_top_chunk(ms, ms->top);
       
  4530       check_malloced_chunk(ms, mem, nb);
       
  4531       goto postaction;
       
  4532     }
       
  4533 
       
  4534     mem = sys_alloc(ms, nb);
       
  4535 
       
  4536   postaction:
       
  4537     POSTACTION(ms);
       
  4538     return mem;
       
  4539   }
       
  4540 
       
  4541   return 0;
       
  4542 }
       
  4543 
       
  4544 void mspace_free(mspace msp, void* mem) {
       
  4545   if (mem != 0) {
       
  4546     mchunkptr p  = mem2chunk(mem);
       
  4547 #if FOOTERS
       
  4548     mstate fm = get_mstate_for(p);
       
  4549 #else
       
  4550     mstate fm = (mstate)msp;
       
  4551 #endif
       
  4552     if (!ok_magic(fm)) {
       
  4553       USAGE_ERROR_ACTION(fm, p);
       
  4554       return;
       
  4555     }
       
  4556     if (!PREACTION(fm)) {
       
  4557       check_inuse_chunk(fm, p);
       
  4558       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
       
  4559         size_t psize = chunksize(p);
       
  4560         mchunkptr next = chunk_plus_offset(p, psize);
       
  4561         if (!pinuse(p)) {
       
  4562           size_t prevsize = p->prev_foot;
       
  4563           if ((prevsize & IS_MMAPPED_BIT) != 0) {
       
  4564             prevsize &= ~IS_MMAPPED_BIT;
       
  4565             psize += prevsize + MMAP_FOOT_PAD;
       
  4566             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
       
  4567               fm->footprint -= psize;
       
  4568             goto postaction;
       
  4569           }
       
  4570           else {
       
  4571             mchunkptr prev = chunk_minus_offset(p, prevsize);
       
  4572             psize += prevsize;
       
  4573             p = prev;
       
  4574             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
       
  4575               if (p != fm->dv) {
       
  4576                 unlink_chunk(fm, p, prevsize);
       
  4577               }
       
  4578               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
       
  4579                 fm->dvsize = psize;
       
  4580                 set_free_with_pinuse(p, psize, next);
       
  4581                 goto postaction;
       
  4582               }
       
  4583             }
       
  4584             else
       
  4585               goto erroraction;
       
  4586           }
       
  4587         }
       
  4588 
       
  4589         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
       
  4590           if (!cinuse(next)) {  /* consolidate forward */
       
  4591             if (next == fm->top) {
       
  4592               size_t tsize = fm->topsize += psize;
       
  4593               fm->top = p;
       
  4594               p->head = tsize | PINUSE_BIT;
       
  4595               if (p == fm->dv) {
       
  4596                 fm->dv = 0;
       
  4597                 fm->dvsize = 0;
       
  4598               }
       
  4599               if (should_trim(fm, tsize))
       
  4600                 sys_trim(fm, 0);
       
  4601               goto postaction;
       
  4602             }
       
  4603             else if (next == fm->dv) {
       
  4604               size_t dsize = fm->dvsize += psize;
       
  4605               fm->dv = p;
       
  4606               set_size_and_pinuse_of_free_chunk(p, dsize);
       
  4607               goto postaction;
       
  4608             }
       
  4609             else {
       
  4610               size_t nsize = chunksize(next);
       
  4611               psize += nsize;
       
  4612               unlink_chunk(fm, next, nsize);
       
  4613               set_size_and_pinuse_of_free_chunk(p, psize);
       
  4614               if (p == fm->dv) {
       
  4615                 fm->dvsize = psize;
       
  4616                 goto postaction;
       
  4617               }
       
  4618             }
       
  4619           }
       
  4620           else
       
  4621             set_free_with_pinuse(p, psize, next);
       
  4622           insert_chunk(fm, p, psize);
       
  4623           check_free_chunk(fm, p);
       
  4624           goto postaction;
       
  4625         }
       
  4626       }
       
  4627     erroraction:
       
  4628       USAGE_ERROR_ACTION(fm, p);
       
  4629     postaction:
       
  4630       POSTACTION(fm);
       
  4631     }
       
  4632   }
       
  4633 }
       
  4634 
       
  4635 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
       
  4636   void* mem;
       
  4637   size_t req = 0;
       
  4638   mstate ms = (mstate)msp;
       
  4639   if (!ok_magic(ms)) {
       
  4640     USAGE_ERROR_ACTION(ms,ms);
       
  4641     return 0;
       
  4642   }
       
  4643   if (n_elements != 0) {
       
  4644     req = n_elements * elem_size;
       
  4645     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
       
  4646         (req / n_elements != elem_size))
       
  4647       req = MAX_SIZE_T; /* force downstream failure on overflow */
       
  4648   }
       
  4649   mem = internal_malloc(ms, req);
       
  4650   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
       
  4651     memset(mem, 0, req);
       
  4652   return mem;
       
  4653 }
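       /*
         A small worked example of the overflow guard in mspace_calloc above,
         assuming a 32-bit size_t (the values are hypothetical and only
         illustrate the check; two factors below 2^16 cannot overflow a 32-bit
         product, which is why the division is skipped in that case):

             size_t n = 0x10001;      // 65537 elements
             size_t e = 0x10010;      // 65552 bytes each
             size_t req = n * e;      // true product 0x100110010 wraps to 0x00110010
             // (n | e) & ~(size_t)0xffff is non-zero, so the division runs:
             // req / n == 0x10 != e, hence req is forced to MAX_SIZE_T and the
             // request fails cleanly instead of returning an undersized block.
       */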
       
  4654 
       
  4655 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
       
  4656   if (oldmem == 0)
       
  4657     return mspace_malloc(msp, bytes);
       
  4658   else {
       
  4659 #if FOOTERS
       
  4660     mchunkptr p  = mem2chunk(oldmem);
       
  4661     mstate ms = get_mstate_for(p);
       
  4662 #else
       
  4663     mstate ms = (mstate)msp;
       
  4664 #endif
       
  4665     if (!ok_magic(ms)) {
       
  4666       USAGE_ERROR_ACTION(ms,ms);
       
  4667       return 0;
       
  4668     }
       
  4669     return internal_realloc(ms, oldmem, bytes);
       
  4670   }
       
  4671 }
       
  4672 
       
  4673 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
       
  4674   mstate ms = (mstate)msp;
       
  4675   if (!ok_magic(ms)) {
       
  4676     USAGE_ERROR_ACTION(ms,ms);
       
  4677     return 0;
       
  4678   }
       
  4679   return internal_memalign(ms, alignment, bytes);
       
  4680 }
       
  4681 
       
  4682 void** mspace_independent_calloc(mspace msp, size_t n_elements,
       
  4683                                  size_t elem_size, void* chunks[]) {
       
  4684   size_t sz = elem_size; /* serves as 1-element array */
       
  4685   mstate ms = (mstate)msp;
       
  4686   if (!ok_magic(ms)) {
       
  4687     USAGE_ERROR_ACTION(ms,ms);
       
  4688     return 0;
       
  4689   }
       
  4690   return ialloc(ms, n_elements, &sz, 3, chunks);
       
  4691 }
       
  4692 
       
  4693 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
       
  4694                                    size_t sizes[], void* chunks[]) {
       
  4695   mstate ms = (mstate)msp;
       
  4696   if (!ok_magic(ms)) {
       
  4697     USAGE_ERROR_ACTION(ms,ms);
       
  4698     return 0;
       
  4699   }
       
  4700   return ialloc(ms, n_elements, sizes, 0, chunks);
       
  4701 }
       
  4702 
       
  4703 int mspace_trim(mspace msp, size_t pad) {
       
  4704   int result = 0;
       
  4705   mstate ms = (mstate)msp;
       
  4706   if (ok_magic(ms)) {
       
  4707     if (!PREACTION(ms)) {
       
  4708       result = sys_trim(ms, pad);
       
  4709       POSTACTION(ms);
       
  4710     }
       
  4711   }
       
  4712   else {
       
  4713     USAGE_ERROR_ACTION(ms,ms);
       
  4714   }
       
  4715   return result;
       
  4716 }
       
  4717 
       
  4718 void mspace_malloc_stats(mspace msp) {
       
  4719   mstate ms = (mstate)msp;
       
  4720   if (ok_magic(ms)) {
       
  4721     internal_malloc_stats(ms);
       
  4722   }
       
  4723   else {
       
  4724     USAGE_ERROR_ACTION(ms,ms);
       
  4725   }
       
  4726 }
       
  4727 
       
  4728 size_t mspace_footprint(mspace msp) {
       
  4729   size_t result = 0;
       
  4730   mstate ms = (mstate)msp;
       
  4731   if (ok_magic(ms)) {
       
  4732     result = ms->footprint;
       
  4733   }
       
  4734   else { USAGE_ERROR_ACTION(ms,ms); }
       
  4735   return result;
       
  4736 }
       
  4737 
       
  4738 
       
  4739 #if !NO_MALLINFO
       
  4740 struct mallinfo mspace_mallinfo(mspace msp) {
       
  4741   mstate ms = (mstate)msp;
       
  4742   if (!ok_magic(ms)) {
       
  4743     USAGE_ERROR_ACTION(ms,ms);
       
  4744   }
       
  4745   return internal_mallinfo(ms);
       
  4746 }
       
  4747 #endif
       
  4748 
       
  4749 int mspace_mallopt(int param_number, int value) {
       
  4750   return change_mparam(param_number, value);
       
  4751 }
       
  4752 
       
  4753 #endif
       
  4754 
       
  4755 /* -------------------- Alternative MORECORE functions ------------------- */
       
  4756 
       
  4757 /*
       
  4758   Guidelines for creating a custom version of MORECORE:
       
  4759 
       
  4760   * For best performance, MORECORE should allocate in multiples of pagesize.
       
  4761   * MORECORE may allocate more memory than requested. (Or even less,
       
  4762       but this will usually result in a malloc failure.)
       
  4763   * MORECORE must not allocate memory when given argument zero, but
       
  4764       instead return one past the end address of memory from previous
       
  4765       nonzero call.
       
  4766   * For best performance, consecutive calls to MORECORE with positive
       
  4767       arguments should return increasing addresses, indicating that
       
  4768       space has been contiguously extended.
       
  4769   * Even though consecutive calls to MORECORE need not return contiguous
       
  4770       addresses, it must be OK for malloc'ed chunks to span multiple
       
  4771       regions in those cases where they do happen to be contiguous.
       
  4772   * MORECORE need not handle negative arguments -- it may instead
       
  4773       just return MFAIL when given negative arguments.
       
  4774       Negative arguments are always multiples of pagesize. MORECORE
       
  4775       must not misinterpret negative args as large positive unsigned
       
  4776       args. You can suppress all such calls from even occurring by defining
       
  4777       MORECORE_CANNOT_TRIM.
       
  4778 
       
  4779   As an example alternative MORECORE, here is a custom allocator
       
  4780   kindly contributed for pre-OSX macOS.  It uses virtually but not
       
  4781   necessarily physically contiguous non-paged memory (locked in,
       
  4782   present and won't get swapped out).  You can use it by uncommenting
       
  4783   this section, adding some #includes, and setting up the appropriate
       
  4784   defines above:
       
  4785 
       
  4786       #define MORECORE osMoreCore
       
  4787 
       
  4788   There is also a shutdown routine that should somehow be called for
       
  4789   cleanup upon program exit.
       
  4790 
       
  4791   #define MAX_POOL_ENTRIES 100
       
  4792   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
       
  4793   static int next_os_pool;
       
  4794   void *our_os_pools[MAX_POOL_ENTRIES];
       
  4795 
       
  4796   void *osMoreCore(int size)
       
  4797   {
       
  4798     void *ptr = 0;
       
  4799     static void *sbrk_top = 0;
       
  4800 
       
  4801     if (size > 0)
       
  4802     {
       
  4803       if (size < MINIMUM_MORECORE_SIZE)
       
  4804          size = MINIMUM_MORECORE_SIZE;
       
  4805       if (CurrentExecutionLevel() == kTaskLevel)
       
  4806          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
       
  4807       if (ptr == 0)
       
  4808       {
       
  4809         return (void *) MFAIL;
       
  4810       }
       
  4811       // save ptrs so they can be freed during cleanup
       
  4812       our_os_pools[next_os_pool] = ptr;
       
  4813       next_os_pool++;
       
  4814       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
       
  4815       sbrk_top = (char *) ptr + size;
       
  4816       return ptr;
       
  4817     }
       
  4818     else if (size < 0)
       
  4819     {
       
  4820       // we don't currently support shrink behavior
       
  4821       return (void *) MFAIL;
       
  4822     }
       
  4823     else
       
  4824     {
       
  4825       return sbrk_top;
       
  4826     }
       
  4827   }
       
  4828 
       
  4829   // cleanup any allocated memory pools
       
  4830   // called as last thing before shutting down driver
       
  4831 
       
  4832   void osCleanupMem(void)
       
  4833   {
       
  4834     void **ptr;
       
  4835 
       
  4836     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
       
  4837       if (*ptr)
       
  4838       {
       
  4839          PoolDeallocate(*ptr);
       
  4840          *ptr = 0;
       
  4841       }
       
  4842   }
       
  4843 
       
  4844 */
       
  4845 
       
  4846 static RChunk rchunk;
       
  4847 #define RESCUE_BUFFER_SIZE        (100*1024)
       
  4848 #define SYS_HEAP_RECOVER_SIZE     (64*4096)
       
  4849 #define SYS_UI_REQUIRE_SIZE       (64*4096)
       
  4850 #define FREEMEM_TO_SKIP_MALLOCINFO   (6*1024*1024)  
       
  4851 
       
  4852 static void* watermark = 0;
       
  4853 static void* safetop = 0;
       
  4854 
       
  4855 TInt check_system_heap()
       
  4856 {
       
  4857     TInt freeMem = 0; //User::Available( block );
       
  4858 
       
  4859     // free memory in Hal
       
  4860     TMemoryInfoV1Buf  info;
       
  4861     if( UserHal::MemoryInfo( info ) == KErrNone )
       
  4862         freeMem += info().iFreeRamInBytes;
       
  4863 
       
  4864     return freeMem;
       
  4865 }
       
  4866 
       
  4867 int  free_memory( size_t& pool, size_t& heap, size_t& sys )
       
  4868 {
       
  4869     // free space remaining inside the dlmalloc pool
       
  4870     mallinfo info = dlmallinfo();
       
  4871     pool = info.fordblks;
       
  4872 
       
  4873     sys = check_system_heap();
       
  4874 
       
  4875     TInt block;
       
  4876     heap = User::Available( block );
       
  4877 
       
  4878     return (pool + heap + sys);
       
  4879 }
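       /*
         A minimal usage sketch for free_memory(); the caller, threshold and
         allocation size below are hypothetical and only show how the three
         reported components could be combined before attempting a large
         allocation:

             size_t pool = 0, heap = 0, sys = 0;
             int total = free_memory( pool, heap, sys );

             const size_t KLargeAlloc = 512 * 1024;
             // require overall headroom and enough free RAM left for the UI
             bool likelyToSucceed = ( (size_t)total > KLargeAlloc ) &&
                                    ( sys > SYS_UI_REQUIRE_SIZE );
       */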
       
  4880 
       
  4881 #if HAVE_MMAP
       
  4882 
       
  4883 // the first 1MB of the chunk is reserved for the rescue buffer
       
  4884 #define RESCUE_BUFFER_TOP       (1<<20)
       
  4885 // the system OOM plugin notifies the app when free memory drops below 1.5MB;

  4886 // we use 2MB so that we react earlier than that.
       
  4887 #define SYSTEM_OOM_THRESHOLD    (1<<21)
       
  4888 
       
  4889 static unsigned int rescue_buf_size = 0;
       
  4890 static bool rescue_buf_released = false;
       
  4891 struct FreeCell {
       
  4892     FreeCell*   next;
       
  4893     size_t      size;
       
  4894     size_t      ptr;
       
  4895 };
       
  4896 static FreeCell* free_list = 0;
       
  4897 
       
  4898 void fast_set_rescue_buffer_size( int size )
       
  4899 {
       
  4900     // make sure size is page-aligned
       
  4901     size = (size + PAGE_ALIGN) & ~PAGE_ALIGN;
       
  4902 
       
  4903     if (rescue_buf_size != size) {
       
  4904         if (size < RESCUE_BUFFER_TOP && size > rescue_buf_size) {
       
  4905             rchunk.Commit(rescue_buf_size, size - rescue_buf_size);
       
  4906         } else if (size < rescue_buf_size) {
       
  4907             rchunk.Decommit(size, rescue_buf_size - size);
       
  4908         }
       
  4909         rescue_buf_size = size;
       
  4910     }
       
  4911 }
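       /*
         The requested size is rounded up to a page boundary before the chunk
         is committed or decommitted.  A worked example, assuming a 4KB page
         (i.e. PAGE_ALIGN == 0xFFF) and a buffer currently at the default
         RESCUE_BUFFER_SIZE of 100KB:

             fast_set_rescue_buffer_size( 100*1024 + 1 );
             // size becomes (102401 + 0xFFF) & ~0xFFF = 106496 bytes (104KB),
             // so one extra 4KB page is committed at offset rescue_buf_size,
             // as long as the new size stays below RESCUE_BUFFER_TOP.
       */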
       
  4912 
       
  4913 void release_rescue_buffer()
       
  4914 {
       
  4915     // a memory allocation failure is about to happen; we need to

  4916     // release the rescue buffer to get more memory.
       
  4917     rchunk.Decommit(0, rescue_buf_size);
       
  4918 
       
  4919     // notify the memory manager that rescue buffer is used in this allocation
       
  4920     MemoryManager::SetStatus( ERescueOOM );
       
  4921     rescue_buf_released = true;
       
  4922 }
       
  4923 
       
  4924 void alloc_rescue_buffer()
       
  4925 {
       
  4926     // rescue buffer is allocated in region starting from 0 up to 1MB
       
  4927     bool initial = rchunk.Size() == 0;
       
  4928     if (!initial && !rescue_buf_released) 
       
  4929         return;
       
  4930 
       
  4931     rescue_buf_size = RESCUE_BUFFER_SIZE;
       
  4932     rchunk.Commit(0, rescue_buf_size);
       
  4933 
       
  4934     if (initial) {
       
  4935         // this is the first time allocating the rescue buffer,
       
  4936         // need to adjust watermark
       
  4937         watermark = rchunk.Base() + RESCUE_BUFFER_TOP;
       
  4938     }
       
  4939     rescue_buf_released = false;
       
  4940 }
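       /*
         Sketch of the rescue-buffer lifecycle assumed by the HAVE_MMAP code
         paths below (illustrative only; the real call sites are
         chunkMoreCore() and symbian_mmap(), plus whatever component re-arms
         the buffer once the out-of-memory situation has been handled):

             // 1. RChunk::Commit() fails while growing the pool
             release_rescue_buffer();     // decommit the reserved pages,
                                          // status becomes ERescueOOM
             // 2. the commit is retried and can now succeed from the freed pages

             // 3. later, after caches have been purged or pages released,
             //    the emergency reserve is re-armed
             alloc_rescue_buffer();
       */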
       
  4941 
       
  4942 void *chunkMoreCore(int size)
       
  4943 {
       
  4944     if( !rchunk.Handle() )
       
  4945     {
       
  4946         static const TInt KFbServSharedHeapMaxSize = 0x04000000;        // 64MB first
       
  4947         TInt maxmem = 0;

  4948         HAL::Get(HALData::EMemoryRAM, maxmem);

  4949         TInt maxHeapSize = Min(maxmem, KFbServSharedHeapMaxSize);
       
  4950         rchunk.CreateDisconnectedLocal(0, 0, maxHeapSize);
       
  4951         alloc_rescue_buffer();
       
  4952     }
       
  4953 
       
  4954     void* oldmark = watermark;
       
  4955 
       
  4956     if (size > 0) {
       
  4957         if (rchunk.Commit((int)watermark-(int)rchunk.Base(), size) != KErrNone) {
       
  4958             release_rescue_buffer();
       
  4959             // try again
       
  4960             if (rchunk.Commit((int)watermark-(int)rchunk.Base(), size) != KErrNone) {
       
  4961                 return CMFAIL;
       
  4962             }
       
  4963         }
       
  4964 
       
  4965         watermark = (void*)((int)watermark + size);
       
  4966         MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  4967 
       
  4968         return oldmark;
       
  4969     } else if (size < 0) {
       
  4970         // this means we are going to uncommit some memory from the top
       
  4971         rchunk.Decommit((int)watermark + size - (int)rchunk.Base(), -size);
       
  4972         watermark = (void*)((int)watermark + size);
       
  4973         return watermark;
       
  4974     }
       
  4975 
       
  4976     return oldmark;
       
  4977 }
       
  4978 
       
  4979 void* symbian_mmap(size_t size)
       
  4980 {
       
  4981     size_t addr = (size_t)watermark;
       
  4982 
       
  4983     if (free_list) {
       
  4984         // find the first cell that is big enough
       
  4985         FreeCell *e = free_list->next, *p = free_list;
       
  4986         if (p->size >= size) {
       
  4987             // the first cell is fine
       
  4988             addr = p->ptr;
       
  4989             size_t rest = p->size - size;
       
  4990             if (rest) {
       
  4991                 free_list->ptr += size;
       
  4992                 free_list->size = rest;
       
  4993             } else {
       
  4994                 FreeCell* f = free_list;
       
  4995                 free_list = free_list->next;
       
  4996                 free(f);
       
  4997             }
       
  4998         } else {
       
  4999             for(; e && (e->size < size); p = e, e = e->next);
       
  5000             if (e) {
       
  5001                 addr = e->ptr;
       
  5002                 e->ptr += size;
       
  5003                 e->size = (e->size - size);
       
  5004                 if (e->size == 0) {
       
  5005                     p->next = e->next;
       
  5006                     free(e);
       
  5007                 }
       
  5008             }
       
  5009         }
       
  5010     }
       
  5011     
       
  5012     if (rchunk.Commit((int)addr-(int)rchunk.Base(), size) != KErrNone) {
       
  5013         release_rescue_buffer();
       
  5014         // try again
       
  5015         if (rchunk.Commit((int)addr-(int)rchunk.Base(), size) != KErrNone) {
       
  5016             return CMFAIL;
       
  5017         }
       
  5018     }
       
  5019 
       
  5020     void* oldmark = watermark;
       
  5021     if (addr == (size_t)watermark)
       
  5022         watermark = (void*)((int)watermark + size);
       
  5023     else
       
  5024         oldmark = (void*)addr;
       
  5025 
       
  5026     MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  5027     return oldmark;
       
  5028 }
       
  5029 
       
  5030 int symbian_munmap(void* ptr, size_t size)
       
  5031 {
       
  5032     int offset = (int)ptr - (int)rchunk.Base();
       
  5033     rchunk.Decommit(offset, size);
       
  5034 
       
  5035     if ((int)ptr + size == (int)watermark) {
       
  5036         watermark = (void*)((int)watermark - size);
       
  5037     } else {
       
  5038         // add the free cell to list
       
  5039         FreeCell* cell = (FreeCell*)calloc(sizeof(FreeCell), 1);
       
  5040         cell->ptr = (size_t)(ptr);
       
  5041         cell->size = size;
       
  5042 
       
  5043         if (free_list == 0)
       
  5044             free_list = cell;
       
  5045         else if (free_list->ptr > cell->ptr) {
       
  5046             if (cell->ptr + cell->size == free_list->ptr) {
       
  5047                 // adjacent
       
  5048                 free_list->ptr = cell->ptr;
       
  5049                 free_list->size += cell->size;
       
  5050                 free(cell);
       
  5051             } else {
       
  5052                 // prepend the cell
       
  5053                 cell->next = free_list;
       
  5054                 free_list = cell;
       
  5055             }
       
  5056         }
       
  5057         else {
       
  5058             FreeCell *e = free_list->next, *p = free_list;
       
  5059 
       
  5060             for(; e && (e->ptr < cell->ptr); p = e, e = e->next);
       
  5061             if (e) {
       
  5062                 if (p->ptr + p->size == cell->ptr) {
       
  5063                     // adjacent to left cell
       
  5064                     p->size += cell->size;
       
  5065                     free(cell);
       
  5066 
       
  5067                     // all three cells are adjacent
       
  5068                     if (p->ptr + p->size == e->ptr) {
       
  5069                         p->size += e->size;
       
  5070                         p->next = e->next;   // keep the rest of the free list linked
       
  5071                         free(e);
       
  5072                     }
       
  5073                 } else if (cell->ptr + cell->size == e->ptr) {
       
  5074                     // adjacent to right cell
       
  5075                     e->ptr = cell->ptr;
       
  5076                     e->size += cell->size;
       
  5077                     free(cell);
       
  5078                 } else {
       
  5079                     // insert the cell
       
  5080                     p->next = cell;
       
  5081                     cell->next = e;
       
  5082                 }
       
  5083             } else {
       
  5084                 // append the cell
       
  5085                 if (p->ptr + p->size == cell->ptr) {
       
  5086                     // adjacent to left cell
       
  5087                     p->size += cell->size;
       
  5088                     free(cell);
       
  5089                 }
       
  5090                 else
       
  5091                 	p->next = cell;	
       
  5092             }
       
  5093         }
       
  5094     }
       
  5095 
       
  5096     MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  5097     return 0;
       
  5098 }
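       /*
         The FreeCell list maintained by symbian_mmap()/symbian_munmap() above
         is an address-ordered free list: munmap either lowers the watermark
         (when the region sits at the top) or inserts a cell, merging it with
         an adjacent left/right neighbour; mmap then satisfies requests
         first-fit from this list before committing fresh pages at the
         watermark.  A simplified standalone sketch of the first-fit carve
         (names are illustrative, not the code above):

             struct Cell { Cell* next; size_t ptr; size_t size; };

             // returns an address carved from the list, or 0 if nothing fits
             size_t take( Cell*& head, size_t size )
             {
                 for ( Cell *prev = 0, *c = head; c; prev = c, c = c->next ) {
                     if ( c->size < size )
                         continue;
                     size_t addr = c->ptr;
                     c->ptr  += size;
                     c->size -= size;
                     if ( c->size == 0 ) {        // cell exhausted: unlink and free it
                         ( prev ? prev->next : head ) = c->next;
                         free( c );
                     }
                     return addr;
                 }
                 return 0;
             }
       */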
       
  5099 
       
  5100 void log_abort( const char* file, unsigned int line)
       
  5101 {
       
  5102     TPtrC8 des( (unsigned char*)file, strlen(file) );
       
  5103     MEM_LOGF( _L8("Abort at %S line %d"), des, line );
       
  5104     User::After( 500000 );
       
  5105     User::Panic( _L("fast_malloc"), 0 );
       
  5106 }
       
  5107 
       
  5108 bool fast_pre_check( size_t maxsz, size_t bufsz )
       
  5109 {
       
  5110     //adjust maxsz
       
  5111     maxsz = maxsz > (bufsz * 2) ? maxsz : (bufsz * 2);
       
  5112 
       
  5113     // if we have more than 6 MB of free memory, we

  5114     // can skip the dlmallinfo() call.
       
  5115     unsigned int mem = check_system_heap();
       
  5116     if (mem>FREEMEM_TO_SKIP_MALLOCINFO && mem>maxsz)
       
  5117         return true;
       
  5118 
       
  5119     // the system is in a low-memory state, so we need

  5120     // to do a finer-grained check using dlmallinfo().
       
  5121     int memFromSystem = mem - SYS_HEAP_RECOVER_SIZE * 2;
       
  5122     mallinfo info = dlmallinfo();
       
  5123     int req = maxsz - info.fordblks - memFromSystem;
       
  5124     //req = (int)bufsz > req ? bufsz : req;
       
  5125     if( req <= 0 ) 
       
  5126         return true;
       
  5127 
       
  5128     return false;
       
  5129 }
       
  5130 
       
  5131 void fast_post_check()
       
  5132 {
       
  5133     // do nothing, because we didn't preallocate memory chunk.
       
  5134 }
       
  5135 
       
  5136 
       
  5137 #else
       
  5138 void fast_set_rescue_buffer_size( int size )
       
  5139 {
       
  5140     // rescue buffer resize
       
  5141     int ntop = (int)rchunk.Base() + rchunk.Top();
       
  5142     int rescue_size = (size_t)ntop - (size_t)safetop;
       
  5143 
       
  5144     // no need if we already have a buffer big enough
       
  5145     if( rescue_size <= size )
       
  5146         rchunk.Adjust( rchunk.Size() + ( size - rescue_size ) );
       
  5147 }
       
  5148 
       
  5149 void *chunkMoreCore(int size)
       
  5150 {
       
  5151     if( !rchunk.Handle() )
       
  5152     {
       
  5153         rchunk.CreateLocal( 1<<16, 1<<26 );
       
  5154         watermark = rchunk.Base() + rchunk.Top();
       
  5155         safetop = watermark;
       
  5156 
       
  5157         // rescue buffer
       
  5158         rchunk.Adjust( rchunk.Size() + RESCUE_BUFFER_SIZE );
       
  5159     }
       
  5160 
       
  5161     void* oldmark = watermark;
       
  5162 
       
  5163     if (size > 0)
       
  5164     {
       
  5165         void* top = rchunk.Base()+rchunk.Top();
       
  5166 
       
  5167         if( (size_t)safetop >= (size_t)watermark + size )
       
  5168         {
       
  5169             // chunk is already expanded by last pre-check
       
  5170             watermark = (void*)((int)watermark + size);
       
  5171             return oldmark;
       
  5172         }
       
  5173 
       
  5174         // expand the chunk, but leave some memory for the ui
       
  5175         TInt err = -1;
       
  5176         int freemem = check_system_heap();
       
  5177 
       
  5178         if( freemem >= SYS_UI_REQUIRE_SIZE )
       
  5179         {
       
  5180             size_t adjsz = size + ((size_t)watermark - (size_t)safetop);
       
  5181             err = rchunk.Adjust(rchunk.Size()+adjsz);
       
  5182         }
       
  5183 
       
  5184         if (err==KErrNone)
       
  5185         {
       
  5186             watermark = (void*)((int)watermark + size);
       
  5187             safetop = watermark;
       
  5188             MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  5189 /*
       
  5190             MEM_LOGF( _L8("WaterMark: %d  SafeTop: %d ChunkTop: %d"),
       
  5191                 (size_t)watermark-(size_t)rchunk.Base(),
       
  5192                 (size_t)safetop-(size_t)rchunk.Base(),
       
  5193                 rchunk.Top() );
       
  5194 */
       
  5195             return oldmark;
       
  5196         }
       
  5197         else
       
  5198         {
       
  5199             int rescue_size = (size_t)top - (size_t)watermark;
       
  5200 
       
  5201             // if no enough memory for the ui to recover,
       
  5202             // give some memory back to the system
       
  5203             if( freemem < SYS_UI_REQUIRE_SIZE )
       
  5204             {
       
  5205                 int rec_sz = rescue_size > SYS_HEAP_RECOVER_SIZE ? SYS_HEAP_RECOVER_SIZE : 0;
       
  5206                 if( rec_sz > 0 )
       
  5207                     rchunk.Adjust( rchunk.Size() - rec_sz );
       
  5208                 top = rchunk.Base() + rchunk.Top();
       
  5209                 rescue_size = (size_t)top - (size_t)watermark;
       
  5210                 MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  5211             }
       
  5212 
       
  5213             // allocate from the rescue buffer
       
  5214             if( rescue_size >= size )
       
  5215             {
       
  5216                 watermark = (void*)((int)watermark + size);
       
  5217                 safetop = watermark;
       
  5218 
       
  5219                 // notify the memory manager that rescue buffer is used in this allocation
       
  5220                 MemoryManager::SetStatus( ERescueOOM );
       
  5221 
       
  5222                 return oldmark;
       
  5223             }
       
  5224             else {
       
  5225                 MemoryManager::SetStatus( EUserAllocOOM );
       
  5226                 return (void *) MFAIL;
       
  5227             }
       
  5228         }
       
  5229     }
       
  5230     else if (size < 0)
       
  5231     {
       
  5232         int ntop = (int)rchunk.Base() + rchunk.Top();
       
  5233         int rescue_size = (size_t)ntop - (size_t)safetop;
       
  5234         if( rescue_size < RESCUE_BUFFER_SIZE )
       
  5235             size += RESCUE_BUFFER_SIZE - rescue_size;
       
  5236 
       
  5237         TInt err = rchunk.Adjust(rchunk.Size()+size);
       
  5238         if (err==KErrNone)
       
  5239         {
       
  5240             watermark = (void*)((int)watermark + size);
       
  5241             safetop = (void*)((int)safetop + size);
       
  5242 
       
  5243             return watermark;
       
  5244         }
       
  5245         else
       
  5246             return (void *) MFAIL;
       
  5247     }
       
  5248     else
       
  5249     {
       
  5250         return watermark;
       
  5251     }
       
  5252 }
       
  5253 
       
  5254 void log_abort( const char* file, unsigned int line)
       
  5255 {
       
  5256   TPtrC8 des( (unsigned char*)file, strlen(file) );
       
  5257   MEM_LOGF( _L8("Abort at %S line %d"), des, line );
       
  5258   User::After( 500000 );
       
  5259   User::Panic( _L("fast_malloc"), 0 );
       
  5260 }
       
  5261 
       
  5262 bool fast_pre_check( size_t maxsz, size_t bufsz )
       
  5263 {
       
  5264     // does the UI have enough memory to recover?

  5265     // we need SYS_UI_REQUIRE_SIZE (256KB) for the UI to recover
       
  5266 /*    int sys = check_system_heap();
       
  5267 
       
  5268     if( sys < SYS_UI_REQUIRE_SIZE ) {
       
  5269         // shrink the chunk and give some memory back to system
       
  5270         void* top = rchunk.Base() + rchunk.Top();
       
  5271         int bnksz = (size_t)top - (size_t)watermark;
       
  5272         int req_sz = SYS_UI_REQUIRE_SIZE - sys;
       
  5273 
       
  5274         if( bnksz > req_sz ) {
       
  5275             rchunk.Adjust( rchunk.Size() - req_sz );
       
  5276             safetop = (void*)( (int)safetop - req_sz );
       
  5277         }
       
  5278         else
       
  5279             return false;
       
  5280     }
       
  5281  */
       
  5282 
       
  5283     // the amount of mem to be reserved
       
  5284     mallinfo info = dlmallinfo();
       
  5285     int req = maxsz - info.fordblks;
       
  5286     req = bufsz > req ? bufsz : req;
       
  5287     if( req <= 0 ) return true;
       
  5288 
       
  5289     // check if already reserved enough memory in the chunk
       
  5290     int bnksz = (size_t)safetop - (size_t)watermark;
       
  5291     if( req <= bnksz ) return true;
       
  5292 
       
  5293     // expand the chunk
       
  5294     req -= bnksz;
       
  5295 
       
  5296     // align to page size
       
  5297     req = ( req + PAGE_ALIGN ) & ~PAGE_ALIGN;
       
  5298 
       
  5299     TInt err = rchunk.Adjust(rchunk.Size()+req);
       
  5300 
       
  5301     if( err == KErrNone )
       
  5302     {
       
  5303         safetop = (void*)( (int)safetop + req );
       
  5304 /*
       
  5305         MEM_LOGF( _L8("precheck... WaterMark: %d  SafeTop: %d ChunkTop: %d"),
       
  5306               (size_t)watermark-(size_t)rchunk.Base(),
       
  5307               (size_t)safetop-(size_t)rchunk.Base(),
       
  5308               rchunk.Top() );
       
  5309 */    
       
  5310         MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  5311     }
       
  5312     else
       
  5313         return false;
       
  5314 
       
  5315     return true;
       
  5316 }
       
  5317 
       
  5318 void fast_post_check()
       
  5319 {
       
  5320     // shrink the chunk if necessary
       
  5321     void* top = rchunk.Base() + rchunk.Top();
       
  5322     int bnksz = (size_t)top - (size_t)watermark;
       
  5323 
       
  5324     // give some memory back to system
       
  5325     int shrink_size = ( MemoryManager::Status() == ENoOOM ) ?
       
  5326                         ( bnksz - RESCUE_BUFFER_SIZE ) : ( bnksz - SYS_HEAP_RECOVER_SIZE );
       
  5327 
       
  5328     //int shrink_size = bnksz - RESCUE_BUFFER_SIZE;
       
  5329     if( shrink_size > 0 )
       
  5330     {
       
  5331         rchunk.Adjust( rchunk.Size() - shrink_size );
       
  5332         safetop = (void*)( (int)safetop - shrink_size );
       
  5333         MEM_LOGF( _L8("chunk size: %d"), rchunk.Size());
       
  5334 
       
  5335 /*        MEM_LOGF( _L8("postcheck... WaterMark: %d  SafeTop: %d ChunkTop: %d"),
       
  5336             (size_t)watermark-(size_t)rchunk.Base(),
       
  5337             (size_t)safetop-(size_t)rchunk.Base(),
       
  5338             rchunk.Top() );
       
  5339 */    }
       
  5340 }
       
  5341 #endif
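       /*
         In both configurations above, the intended calling pattern is to
         bracket a large allocation burst with fast_pre_check() and
         fast_post_check().  A usage sketch; the caller, the sizes and the
         failure handling here are hypothetical:

             const size_t KWorstCaseNeed = 2 * 1024 * 1024;   // whole operation
             const size_t KSingleBuffer  = 256 * 1024;        // largest single alloc

             if ( !fast_pre_check( KWorstCaseNeed, KSingleBuffer ) )
                 return NULL;      // bail out early instead of failing mid-way

             // ... perform the allocations (e.g. via fast_malloc) ...

             fast_post_check();    // allow the pool/chunk to shrink back
       */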
       
  5342 
       
  5343 unsigned int fast_malloc_size(void* p)
       
  5344 {
       
  5345     return fast_malloc_usable_size( p );    
       
  5346 }
       
  5347 
       
  5348 void close_mem_pool()
       
  5349 {
       
  5350     // empty, closing the chunk is
       
  5351     // now handled by an auxiliary global variable.
       
  5352 }
       
  5353 
       
  5354 // A utility for closing the memory chunk.

  5355 // This is not a crash-proof solution, because it

  5356 // is difficult to dictate the destruction order of

  5357 // global data; if this closing utility is not the last

  5358 // object to be destroyed, it will crash.  Luckily,

  5359 // it seems to work fine, and no crash has been seen so far.
       
  5360 struct ChunkClosingUtil
       
  5361 {
       
  5362     ~ChunkClosingUtil()     { rchunk.Close(); }
       
  5363 };
       
  5364 
       
  5365 static ChunkClosingUtil __gx_closing;
       
  5366 
       
  5367 /* -----------------------------------------------------------------------
       
  5368 History:
       
  5369     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
       
  5370       * Fix memalign brace error.
       
  5371 
       
  5372     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
       
  5373       * Fix improper #endif nesting in C++
       
  5374       * Add explicit casts needed for C++
       
  5375 
       
  5376     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
       
  5377       * Use trees for large bins
       
  5378       * Support mspaces
       
  5379       * Use segments to unify sbrk-based and mmap-based system allocation,
       
  5380         removing need for emulation on most platforms without sbrk.
       
  5381       * Default safety checks
       
  5382       * Optional footer checks. Thanks to William Robertson for the idea.
       
  5383       * Internal code refactoring
       
  5384       * Incorporate suggestions and platform-specific changes.
       
  5385         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
       
  5386         Aaron Bachmann,  Emery Berger, and others.
       
  5387       * Speed up non-fastbin processing enough to remove fastbins.
       
  5388       * Remove useless cfree() to avoid conflicts with other apps.
       
  5389       * Remove internal memcpy, memset. Compilers handle builtins better.
       
  5390       * Remove some options that no one ever used and rename others.
       
  5391 
       
  5392     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
       
  5393       * Fix malloc_state bitmap array misdeclaration
       
  5394 
       
  5395     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
       
  5396       * Allow tuning of FIRST_SORTED_BIN_SIZE
       
  5397       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
       
  5398       * Better detection and support for non-contiguousness of MORECORE.
       
  5399         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
       
  5400       * Bypass most of malloc if no frees. Thanks To Emery Berger.
       
  5401       * Fix freeing of old top non-contiguous chunk in sysmalloc.
       
  5402       * Raised default trim and map thresholds to 256K.
       
  5403       * Fix mmap-related #defines. Thanks to Lubos Lunak.
       
  5404       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
       
  5405       * Branch-free bin calculation
       
  5406       * Default trim and mmap thresholds now 256K.
       
  5407 
       
  5408     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
       
  5409       * Introduce independent_comalloc and independent_calloc.
       
  5410         Thanks to Michael Pachos for motivation and help.
       
  5411       * Make optional .h file available
       
  5412       * Allow > 2GB requests on 32bit systems.
       
  5413       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
       
  5414         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
       
  5415         and Anonymous.
       
  5416       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
       
  5417         helping test this.)
       
  5418       * memalign: check alignment arg
       
  5419       * realloc: don't try to shift chunks backwards, since this
       
  5420         leads to  more fragmentation in some programs and doesn't
       
  5421         seem to help in any others.
       
  5422       * Collect all cases in malloc requiring system memory into sysmalloc
       
  5423       * Use mmap as backup to sbrk
       
  5424       * Place all internal state in malloc_state
       
  5425       * Introduce fastbins (although similar to 2.5.1)
       
  5426       * Many minor tunings and cosmetic improvements
       
  5427       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
       
  5428       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
       
  5429         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
       
  5430       * Include errno.h to support default failure action.
       
  5431 
       
  5432     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
       
  5433       * return null for negative arguments
       
  5434       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
       
  5435          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
       
  5436           (e.g. WIN32 platforms)
       
  5437          * Cleanup header file inclusion for WIN32 platforms
       
  5438          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
       
  5439          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
       
  5440            memory allocation routines
       
  5441          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
       
  5442          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
       
  5443            usage of 'assert' in non-WIN32 code
       
  5444          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
       
  5445            avoid infinite loop
       
  5446       * Always call 'fREe()' rather than 'free()'
       
  5447 
       
  5448     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
       
  5449       * Fixed ordering problem with boundary-stamping
       
  5450 
       
  5451     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
       
  5452       * Added pvalloc, as recommended by H.J. Liu
       
  5453       * Added 64bit pointer support mainly from Wolfram Gloger
       
  5454       * Added anonymously donated WIN32 sbrk emulation
       
  5455       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
       
  5456       * malloc_extend_top: fix mask error that caused wastage after
       
  5457         foreign sbrks
       
  5458       * Add linux mremap support code from HJ Liu
       
  5459 
       
  5460     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
       
  5461       * Integrated most documentation with the code.
       
  5462       * Add support for mmap, with help from
       
  5463         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
       
  5464       * Use last_remainder in more cases.
       
  5465       * Pack bins using idea from  colin@nyx10.cs.du.edu
       
  5466       * Use ordered bins instead of best-fit threshold
       
  5467       * Eliminate block-local decls to simplify tracing and debugging.
       
  5468       * Support another case of realloc via move into top
       
  5469       * Fix error occurring when initial sbrk_base not word-aligned.
       
  5470       * Rely on page size for units instead of SBRK_UNIT to
       
  5471         avoid surprises about sbrk alignment conventions.
       
  5472       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
       
  5473         (raymond@es.ele.tue.nl) for the suggestion.
       
  5474       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
       
  5475       * More precautions for cases where other routines call sbrk,
       
  5476         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
       
  5477       * Added macros etc., allowing use in linux libc from
       
  5478         H.J. Lu (hjl@gnu.ai.mit.edu)
       
  5479       * Inverted this history list
       
  5480 
       
  5481     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
       
  5482       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
       
  5483       * Removed all preallocation code since under current scheme
       
  5484         the work required to undo bad preallocations exceeds
       
  5485         the work saved in good cases for most test programs.
       
  5486       * No longer use return list or unconsolidated bins since
       
  5487         no scheme using them consistently outperforms those that don't
       
  5488         given above changes.
       
  5489       * Use best fit for very large chunks to prevent some worst-cases.
       
  5490       * Added some support for debugging
       
  5491 
       
  5492     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
       
  5493       * Removed footers when chunks are in use. Thanks to
       
  5494         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
       
  5495 
       
  5496     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
       
  5497       * Added malloc_trim, with help from Wolfram Gloger
       
  5498         (wmglo@Dent.MED.Uni-Muenchen.DE).
       
  5499 
       
  5500     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
       
  5501 
       
  5502     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
       
  5503       * realloc: try to expand in both directions
       
  5504       * malloc: swap order of clean-bin strategy;
       
  5505       * realloc: only conditionally expand backwards
       
  5506       * Try not to scavenge used bins
       
  5507       * Use bin counts as a guide to preallocation
       
  5508       * Occasionally bin return list chunks in first scan
       
  5509       * Add a few optimizations from colin@nyx10.cs.du.edu
       
  5510 
       
  5511     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
       
  5512       * faster bin computation & slightly different binning
       
  5513       * merged all consolidations to one part of malloc proper
       
  5514          (eliminating old malloc_find_space & malloc_clean_bin)
       
  5515       * Scan 2 returns chunks (not just 1)
       
  5516       * Propagate failure in realloc if malloc returns 0
       
  5517       * Add stuff to allow compilation on non-ANSI compilers
       
  5518           from kpv@research.att.com
       
  5519 
       
  5520     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
       
  5521       * removed potential for odd address access in prev_chunk
       
  5522       * removed dependency on getpagesize.h
       
  5523       * misc cosmetics and a bit more internal documentation
       
  5524       * anticosmetics: mangled names in macros to evade debugger strangeness
       
  5525       * tested on sparc, hp-700, dec-mips, rs6000
       
  5526           with gcc & native cc (hp, dec only) allowing
       
  5527           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
       
  5528 
       
  5529     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
       
  5530       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
       
  5531          structure of old version,  but most details differ.)
       
  5532 
       
  5533 */