#include <common.h>

#if 0	/* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar 5 19:10:03 2000  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
	ftp://g.oswego.edu/pub/misc/malloc.c
	Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

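  As a brief, illustrative usage sketch of the routines above (error
  handling abbreviated; this fragment is explanatory only, not part of
  the allocator):

      char* p = malloc(100);      at least 100 usable bytes, or null
      p = realloc(p, 200);        may move the block; contents preserved
      free(p);                    freeing a null pointer has no effect
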
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design. This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t representation:        4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
			  8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
			  8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
	 1. Because requests for zero bytes allocate non-zero space,
	    the worst case wastage for a request of zero bytes is 24 bytes.
	 2. For requests >= mmap_threshold that are serviced via
	    mmap(), the worst case wastage is 8 bytes plus the remainder
	    from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C. Among other
    consequences, it uses a lot of macros. Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB     (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                    (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H           (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H        (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default 1)
     true (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX            (default: undefined)
     Prefix all public routines with the string 'dl'. Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.


*/


/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs. This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory. The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes) \
do { \
  INTERNAL_SIZE_T mzsz = (nbytes); \
  if(mzsz <= 9*sizeof(mzsz)) { \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0; \
				     *mz++ = 0; \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0; \
				     *mz++ = 0; \
	if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
				     *mz++ = 0; }}} \
				     *mz++ = 0; \
				     *mz++ = 0; \
				     *mz   = 0; \
  } else memset((charp), 0, mzsz); \
} while(0)

#define MALLOC_COPY(dest,src,nbytes) \
do { \
  INTERNAL_SIZE_T mcsz = (nbytes); \
  if(mcsz <= 9*sizeof(mcsz)) { \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; \
	if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; }}} \
				     *mcdst++ = *mcsrc++; \
				     *mcdst++ = *mcsrc++; \
				     *mcdst   = *mcsrc  ; \
  } else memcpy(dest, src, mcsz); \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes) \
do { \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
  switch (mctmp) { \
    case 0: for(;;) { *mzp++ = 0; \
    case 7:           *mzp++ = 0; \
    case 6:           *mzp++ = 0; \
    case 5:           *mzp++ = 0; \
    case 4:           *mzp++ = 0; \
    case 3:           *mzp++ = 0; \
    case 2:           *mzp++ = 0; \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
  } \
} while(0)

#define MALLOC_COPY(dest,src,nbytes) \
do { \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
  switch (mctmp) { \
    case 0: for(;;) { *mcdst++ = *mcsrc++; \
    case 7:           *mcdst++ = *mcsrc++; \
    case 6:           *mcdst++ = *mcsrc++; \
    case 5:           *mcdst++ = *mcsrc++; \
    case 4:           *mcdst++ = *mcsrc++; \
    case 3:           *mcdst++ = *mcsrc++; \
    case 2:           *mcdst++ = *mcsrc++; \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
  } \
} while(0)

#endif
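
/*
  Illustrative sketch, not part of the allocator: both variants of
  MALLOC_ZERO/MALLOC_COPY above implement the same contract, so callers
  such as realloc and calloc simply hand them a byte count that is a
  multiple of sizeof(INTERNAL_SIZE_T) (an odd multiple of words for the
  USE_MEMCPY fast path). The function below is a made-up example of the
  call pattern; it is never referenced by the allocator itself.
*/

static void example_move(Void_t* dst, Void_t* src)
{
  INTERNAL_SIZE_T nbytes = 5 * sizeof(INTERNAL_SIZE_T); /* (2n+1) words */

  MALLOC_ZERO(dst, nbytes);        /* zero 5 words in-line */
  MALLOC_COPY(dst, src, nbytes);   /* copy 5 words in-line */
}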


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks. These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks. This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif


/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif
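
/*
  Illustrative sketch, not part of the allocator: reading the statistics
  returned by mallinfo(). Field meanings follow the struct commented
  above; mallinfo() itself is declared with the other public routines
  below.
*/

static void example_print_mallinfo(void)
{
  struct mallinfo mi = mallinfo();

  printf("arena size:      %d bytes\n", mi.arena);
  printf("in use:          %d bytes\n", mi.uordblks);
  printf("free:            %d bytes\n", mi.fordblks);
  printf("mmapped regions: %d (%d bytes)\n", mi.hblks, mi.hblkhd);
}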

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4


#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
    to keep before releasing via malloc_trim in free().

    Automatic trimming is mainly useful in long-lived programs.
    Because trimming via sbrk can be slow on some systems, and can
    sometimes be wasteful (in cases where programs immediately
    afterward allocate more large chunks) the value should be high
    enough so that your overall system performance would improve by
    releasing.

    The trim threshold and the mmap control parameters (see below)
    can be traded off with one another. Trimming and mmapping are
    two different ways of releasing unused memory back to the
    system. Between these two, it is often possible to keep
    system-level demands of a long-lived program down to a bare
    minimum. For example, in one test suite of sessions measuring
    the XF86 X server on Linux, using a trim threshold of 128K and a
    mmap threshold of 192K led to near-minimal long term resource
    consumption.

    If you are using this malloc in a long-lived program, it should
    pay to experiment with these values. As a rough guide, you
    might set to a value close to the average size of a process
    (program) running on your system. Releasing this much memory
    would allow such a process to run in memory. Generally, it's
    worth it to tune for trimming rather than memory mapping when a
    program undergoes phases where several large chunks are
    allocated and released in ways that can reuse each other's
    storage, perhaps mixed with phases where there are no such
    chunks at all. And in well-behaved long-lived programs,
    controlling release of large blocks via trimming versus mapping
    is usually faster.

    However, in most programs, these parameters serve mainly as
    protection against the system-level effects of carrying around
    massive amounts of unneeded memory. Since frequent calls to
    sbrk, mmap, and munmap otherwise degrade performance, the default
    parameters are set to relatively high values that serve only as
    safeguards.

    The default trim value is high enough to cause trimming only in
    fairly extreme (by current memory consumption standards) cases.
    It must be greater than page size to have any useful effect. To
    disable trimming completely, you can set to (unsigned long)(-1);


*/
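
/*
  Illustrative sketch, not part of the allocator: adjusting the trim
  threshold at run time through mallopt() (declared below). The values
  are examples only; see the discussion above.
*/

static void example_tune_trimming(void)
{
  mallopt(M_TRIM_THRESHOLD, 64 * 1024);   /* trim once top exceeds 64K */
  mallopt(M_TRIM_THRESHOLD, -1);          /* or disable trimming       */
}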


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
    retain whenever sbrk is called. It is used in two ways internally:

    * When sbrk is called to extend the top of the arena to satisfy
      a new malloc request, this much padding is added to the sbrk
      request.

    * When malloc_trim is called automatically from free(),
      it is used as the `pad' argument.

    In both cases, the actual amount of padding is rounded
    so that the end of the arena is always a system page boundary.

    The main reason for using padding is to avoid calling sbrk so
    often. Having even a small pad greatly reduces the likelihood
    that nearly every malloc request during program start-up (or
    after trimming) will invoke sbrk, which needlessly wastes
    time.

    Automatic rounding-up to page-size units is normally sufficient
    to avoid measurable overhead, so the default is 0. However, in
    systems where sbrk is relatively slow, it can pay to increase
    this value, at the expense of carrying around more memory than
    the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
    to service a request. Requests of at least this size that cannot
    be allocated using already-existing space will be serviced via mmap.
    (If enough normal freed space already exists it is used instead.)

    Using mmap segregates relatively large chunks of memory so that
    they can be individually obtained and released from the host
    system. A request serviced through mmap is never reused by any
    other request (at least not directly; the system may just so
    happen to remap successive requests to the same locations).

    Segregating space in this way has the benefit that mmapped space
    can ALWAYS be individually released back to the system, which
    helps keep the system level memory demands of a long-lived
    program low. Mapped memory can never become `locked' between
    other chunks, as can happen with normally allocated chunks, which
    means that even trimming via malloc_trim would not release them.

    However, it has the disadvantages that:

     1. The space cannot be reclaimed, consolidated, and then
	used to service later requests, as happens with normal chunks.
     2. It can lead to more wastage because of mmap page alignment
	requirements
     3. It causes malloc performance to be more dependent on host
	system memory management support routines which may vary in
	implementation quality and may impose arbitrary
	limitations. Generally, servicing a request via normal
	malloc steps is faster than going through a system's mmap.

    All together, these considerations should lead you to use mmap
    only for relatively large requests.


*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
    service using mmap. This parameter exists because:

     1. Some systems have a limited number of internal tables for
	use by mmap.
     2. In most systems, overreliance on mmap can degrade overall
	performance.
     3. If a program allocates many large regions, it is probably
	better off using normal sbrk-based allocation routines that
	can reclaim and reallocate normal heap memory. Using a
	small value allows transition into this mode after the
	first few allocations.

    Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
    the default value is 0, and attempts to set it to non-zero values
    in mallopt will fail.
*/


/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
    Useful to quickly avoid procedure declaration conflicts and linker
    symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */
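
/*
   For example (illustrative only): with USE_DL_PREFIX defined, this
   allocator can be linked alongside the system one, and clients call
   the prefixed names directly:

	void* p = dlmalloc(128);
	dlfree(p);
*/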


/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications. No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc		__libc_calloc
#define fREe		__libc_free
#define mALLOc		__libc_malloc
#define mEMALIGn	__libc_memalign
#define rEALLOc		__libc_realloc
#define vALLOc		__libc_valloc
#define pvALLOc		__libc_pvalloc
#define mALLINFo	__libc_mallinfo
#define mALLOPt		__libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc		dlcalloc
#define fREe		dlfree
#define mALLOc		dlmalloc
#define mEMALIGn	dlmemalign
#define rEALLOc		dlrealloc
#define vALLOc		dlvalloc
#define pvALLOc		dlpvalloc
#define mALLINFo	dlmallinfo
#define mALLOPt		dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc		calloc
#define fREe		free
#define mALLOc		malloc
#define mEMALIGn	memalign
#define rEALLOc		realloc
#define vALLOc		valloc
#define pvALLOc		pvalloc
#define mALLINFo	mallinfo
#define mALLOPt		mallopt
#endif /* USE_DL_PREFIX */

#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
}  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */
#endif	/* 0 */	/* Moved to malloc.h */

#include <malloc.h>
#include <asm/io.h>

#ifdef DEBUG
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif	/* DEBUG */

DECLARE_GLOBAL_DATA_PTR;

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	assert (this);
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
					gNextAddress - gAddressBase,
					MEM_DECOMMIT);
		assert (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		assert (rval);
		LocalFree (head);
		head = next;
	}
}

static
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	if (size >= TOP_MEMORY) return NULL;

	while ((unsigned long)start_address + size < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if ((info.State == MEM_FREE) && (info.RegionSize >= size))
			return start_address;
		else
		{
			/* Requested region is not available so see if the */
			/* next region is available.  Set 'start_address' */
			/* to the next region and call 'VirtualQuery()' */
			/* again. */

			start_address = (char*)info.BaseAddress + info.RegionSize;

			/* Make sure we start looking for the next region */
			/* on the *next* 64K boundary.  Otherwise, even if */
			/* the new region is free according to */
			/* 'VirtualQuery()', the subsequent call to */
			/* 'VirtualAlloc()' (which follows the call to */
			/* this routine in 'wsbrk()') will round *down* */
			/* the requested address to a 64K boundary which */
			/* we already know is an address in the */
			/* unavailable region.  Thus, the subsequent call */
			/* to 'VirtualAlloc()' will fail and bring us back */
			/* here, causing us to go into an infinite loop. */

			start_address =
				(void *) AlignPage64K((unsigned long) start_address);
		}
	}
	return NULL;

}


void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
							MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*)-1;

				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
								MEM_RESERVE, PAGE_NOACCESS);
				/* repeat in case of race condition */
				/* The region that we found has been snagged */
				/* by another thread */
			}
			while (gAddressBase == 0);

			assert (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
						(size + gNextAddress -
						 AlignPage (gNextAddress)),
						MEM_COMMIT, PAGE_READWRITE);
			if (res == 0)
				return (void*)-1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
				     MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
				     MEM_DECOMMIT);
			gNextAddress = gAddressBase;
			return (void*)-1;
		}
	}
	else
	{
		return (void*)gNextAddress;
	}
}

#endif



/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
} __attribute__((__may_alias__)) ;

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish. (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.) Sizes of free chunks are stored both
    in the front of each chunk and at the end. This makes
    consolidating fragmented chunks into bigger chunks very fast. The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk, if allocated            | |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             User data starts here...                          .
	    .                                                               .
	    .             (malloc_usable_space() bytes)                     .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk                                     |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user. "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk                            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Forward pointer to next chunk in list             |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Back pointer to previous chunk in list            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Unused space (may be 0 bytes long)                .
	    .                                                               .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk. If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
	trailing size field since there is no
	next contiguous chunk that would have to index off it. (After
	initialization, `top' is forced to always exist. If it would
	become less than MINSIZE bytes long, it is replenished via
	malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
	bit (IS_MMAPPED) set in their size fields. Because they are
	never merged or traversed from any other chunk, they have no
	foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked. The bins are approximately
       proportionally (log) spaced. There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice. All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size. This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.) Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back. This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    * Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/

/* sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
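
/*
   For example, with a 4-byte INTERNAL_SIZE_T (SIZE_SZ == 4, so
   MALLOC_ALIGNMENT == 8 and MINSIZE == 16):

     request2size(0)  -> 16                       (minimum-sized chunk)
     request2size(13) -> (13 + 4 + 7) & ~7 == 24
     request2size(20) -> (20 + 4 + 7) & ~7 == 24

   i.e. the requested bytes plus the 4-byte size field, rounded up to a
   multiple of the 8-byte alignment, but never below MINSIZE.
*/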

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)




/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))




/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))




/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))

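/*
   The fragment below (kept under "#if 0"; illustrative only, never
   compiled or called) shows how the macros above compose: step back
   from a user pointer to its chunk header, read the effective size,
   and fetch the in-use bit, which lives in the *next* chunk's header.
*/

#if 0
static void example_inspect(Void_t* mem)
{
  mchunkptr p   = mem2chunk(mem);  /* header sits 2*SIZE_SZ before mem */
  mchunkptr nxt = next_chunk(p);   /* neighbour located via size field */

  printf("chunk at %p: %lu bytes, %s; next chunk at %p\n",
	 (void*)p, (unsigned long)chunksize(p),
	 inuse(p) ? "in use" : "free", (void*)nxt);
}
#endif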



/*
   Bins

    The bins, `av_' are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

    Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* access macros */

#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/

#define top            (av_[2])          /* The topmost chunk */
#define last_remainder (bin_at(1))       /* remainder from last split */


/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to check in malloc_extend_top.
*/

#define initial_top    ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i)  bin_at(i), bin_at(i)

static mbinptr av_[NAV * 2 + 2] = {
 NULL, NULL,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};

#ifdef CONFIG_NEEDS_MANUAL_RELOC
static void malloc_bin_reloc(void)
{
	mbinptr *p = &av_[2];
	size_t i;

	for (i = 2; i < ARRAY_SIZE(av_); ++i, ++p)
		*p = (mbinptr)((ulong)*p + gd->reloc_off);
}
#else
static inline void malloc_bin_reloc(void) {}
#endif

ulong mem_malloc_start = 0;
ulong mem_malloc_end = 0;
ulong mem_malloc_brk = 0;

void *sbrk(ptrdiff_t increment)
{
	ulong old = mem_malloc_brk;
	ulong new = old + increment;

	/*
	 * if we are giving memory back make sure we clear it out since
	 * we set MORECORE_CLEARS to 1
	 */
	if (increment < 0)
		memset((void *)new, 0, -increment);

	if ((new < mem_malloc_start) || (new > mem_malloc_end))
		return (void *)MORECORE_FAILURE;

	mem_malloc_brk = new;

	return (void *)old;
}

void mem_malloc_init(ulong start, ulong size)
{
	mem_malloc_start = start;
	mem_malloc_end = start + size;
	mem_malloc_brk = start;

	memset((void *)mem_malloc_start, 0, size);

	malloc_bin_reloc();
}
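
/*
   Illustrative sketch (kept under "#if 0"; the address and size are
   made up): board initialization code reserves a region and hands it
   to mem_malloc_init() once, after which malloc()/free() operate
   within that window through the sbrk() emulation above.
*/
#if 0
static void example_heap_setup(void)
{
	void *p;

	mem_malloc_init(0x84000000, 16 << 20);	/* 16 MiB heap */

	p = malloc(64);
	free(p);
}
#endif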

/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)

/*
  Indexing into bins
*/

#define bin_index(sz)                                                          \
(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
					  126)
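
/*
   Worked examples (illustrative): a chunk of size 40 has 40 >> 9 == 0,
   so it lands in small bin 40 >> 3 == 5; a chunk of size 600 has
   600 >> 9 == 1 <= 4, so it lands in bin 56 + (600 >> 6) == 65, one of
   the 64-byte-spaced bins in the layout table above.
*/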
wdenk217c9da2002-10-25 20:35:49 +00001554/*
1555 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1556 identically sized chunks. This is exploited in malloc.
1557*/
1558
1559#define MAX_SMALLBIN 63
1560#define MAX_SMALLBIN_SIZE 512
1561#define SMALLBIN_WIDTH 8
1562
1563#define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1564
1565/*
1566 Requests are `small' if both the corresponding and the next bin are small
1567*/
1568
1569#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
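
/*
  Worked example of the indexing scheme above (an illustration only,
  not compiled): small chunk sizes map linearly to bins, larger ones
  logarithmically.
*/
#if 0
static void example_bin_indexing(void)
{
	/* 40-byte chunk: 40 >> 9 == 0, so bin_index is 40 >> 3 == 5,
	   the same as smallbin_index(40) */
	assert(bin_index(40) == 5);
	assert(smallbin_index(40) == 5);

	/* 1024-byte chunk: 1024 >> 9 == 2, which is <= 4, so the index
	   is 56 + (1024 >> 6) == 56 + 16 == 72 */
	assert(bin_index(1024) == 72);
}
#endif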



/*
  To help compensate for the large number of bins, a one-level index
  structure is used for bin-by-bin searching.  `binblocks' is a
  one-word bitvector recording whether groups of BINBLOCKWIDTH bins
  have any (possibly) non-empty bins, so they can be skipped over
  all at once during traversals. The bits are NOT always
  cleared as soon as all bins in a block are empty, but instead only
  when all are noticed to be empty during traversal in malloc.
*/

#define BINBLOCKWIDTH     4   /* bins per block */

#define binblocks_r     ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
#define binblocks_w     (av_[1])

/* bin<->block macros */

#define idx2binblock(ix)    ((unsigned)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)   (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
#define clear_binblock(ii)  (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))
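
/*
  A small sketch (illustration only) of how the block bitvector is
  consulted: bin index 57 lives in block 57 / 4 == 14, so bit 14 of
  binblocks says whether any of bins 56..59 may be non-empty.
*/
#if 0
static int example_block_may_be_nonempty(int bin_idx)
{
	/* idx2binblock(57) expands to 1 << 14 */
	return (binblocks_r & idx2binblock(bin_idx)) != 0;
}
#endif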




/* Other static bookkeeping data */

/* variables holding tunable values */

static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad          = DEFAULT_TOP_PAD;
static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;

/* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
static unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

/* The total memory obtained from system via sbrk */
#define sbrked_mem  (current_mallinfo.arena)

/* Tracking mmaps */

#ifdef DEBUG
static unsigned int n_mmaps = 0;
#endif /* DEBUG */
static unsigned long mmapped_mem = 0;
#if HAVE_MMAP
static unsigned int max_n_mmaps = 0;
static unsigned long max_mmapped_mem = 0;
#endif



/*
  Debugging support
*/

#ifdef DEBUG


/*
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  /* No checkable chunk is mmapped */
  assert(!chunk_is_mmapped(p));

  /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);

}


#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(p);

  /* Check whether it claims to be free ... */
  assert(!inuse(p));

  /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert (next == top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}

#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mchunkptr next = next_chunk(p);
  do_check_chunk(p);

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))
  {
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }
  if (next == top)
  {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);

}

#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  long room = sz - s;

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((long)sz >= (long)MINSIZE);
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(room >= 0);
  assert(room < (long)MINSIZE);

  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));


  /* ... and was allocated at front of an available chunk */
  assert(prev_inuse(p));

}


#define check_free_chunk(P)  do_check_free_chunk(P)
#define check_inuse_chunk(P) do_check_inuse_chunk(P)
#define check_chunk(P) do_check_chunk(P)
#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
#else
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_chunk(P)
#define check_malloced_chunk(P,N)
#endif



/*
  Macro-based internal utilities
*/


/*
  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.
*/

/*
  Place chunk p of size s in its bin, in size order,
  putting it ahead of others of same size.
*/


#define frontlink(P, S, IDX, BK, FD) \
{ \
  if (S < MAX_SMALLBIN_SIZE) \
  { \
    IDX = smallbin_index(S); \
    mark_binblock(IDX); \
    BK = bin_at(IDX); \
    FD = BK->fd; \
    P->bk = BK; \
    P->fd = FD; \
    FD->bk = BK->fd = P; \
  } \
  else \
  { \
    IDX = bin_index(S); \
    BK = bin_at(IDX); \
    FD = BK->fd; \
    if (FD == BK) mark_binblock(IDX); \
    else \
    { \
      while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
      BK = FD->bk; \
    } \
    P->bk = BK; \
    P->fd = FD; \
    FD->bk = BK->fd = P; \
  } \
}


/* take a chunk off a list */

#define unlink(P, BK, FD) \
{ \
  BK = P->bk; \
  FD = P->fd; \
  FD->bk = BK; \
  BK->fd = FD; \
} \

/* Place p as the last remainder */

#define link_last_remainder(P) \
{ \
  last_remainder->fd = last_remainder->bk = P; \
  P->fd = P->bk = last_remainder; \
}

/* Clear the last_remainder bin */

#define clear_last_remainder \
  (last_remainder->fd = last_remainder->bk = last_remainder)
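
/*
  Why "variables only": these macros expand their arguments several
  times, so argument side effects would run more than once.  A sketch
  (illustration only, not compiled):
*/
#if 0
static void example_frontlink_call(mchunkptr p)
{
	INTERNAL_SIZE_T sz = chunksize(p); /* evaluate once, outside */
	mchunkptr bck, fwd;
	int idx;

	frontlink(p, sz, idx, bck, fwd);   /* OK: plain variables */
	/* frontlink(p, chunksize(p), idx, bck, fwd) would re-run
	   chunksize() at every expansion of S in the macro body */
}
#endif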




/* Routines dealing with mmap(). */

#if HAVE_MMAP

#if __STD_C
static mchunkptr mmap_chunk(size_t size)
#else
static mchunkptr mmap_chunk(size) size_t size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  mchunkptr p;

#ifndef MAP_ANONYMOUS
  static int fd = -1;
#endif

  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */

  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   * there is no following chunk whose prev_size field could be used.
   */
  size = (size + SIZE_SZ + page_mask) & ~page_mask;

#ifdef MAP_ANONYMOUS
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
		      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
#else /* !MAP_ANONYMOUS */
  if (fd < 0)
  {
    fd = open("/dev/zero", O_RDWR);
    if(fd < 0) return 0;
  }
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif

  if(p == (mchunkptr)-1) return 0;

  n_mmaps++;
  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;

  /* We demand that eight bytes into a page must be 8-byte aligned. */
  assert(aligned_OK(chunk2mem(p)));

  /* The offset to the start of the mmapped region is stored
   * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  p->prev_size = 0;
  set_head(p, size|IS_MMAPPED);

  mmapped_mem += size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#if __STD_C
static void munmap_chunk(mchunkptr p)
#else
static void munmap_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T size = chunksize(p);
  int ret;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);

  n_mmaps--;
  mmapped_mem -= (size + p->prev_size);

  ret = munmap((char *)p - p->prev_size, size + p->prev_size);

  /* munmap returns non-zero on failure */
  assert(ret == 0);
}

#if HAVE_MREMAP

#if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
#else
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize(p);
  char *cp;

  assert (chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert((n_mmaps > 0));
  assert(((size + offset) & (malloc_getpagesize-1)) == 0);

  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;

  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);

  if (cp == (char *)-1) return 0;

  p = (mchunkptr)(cp + offset);

  assert(aligned_OK(chunk2mem(p)));

  assert((p->prev_size == offset));
  set_head(p, (new_size - offset)|IS_MMAPPED);

  mmapped_mem -= size + offset;
  mmapped_mem += new_size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#endif /* HAVE_MREMAP */

#endif /* HAVE_MMAP */



/*
  Extend the top-most chunk by obtaining memory from system.
  Main interface to sbrk (but see also malloc_trim).
*/

#if __STD_C
static void malloc_extend_top(INTERNAL_SIZE_T nb)
#else
static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
#endif
{
  char*     brk;                  /* return value from sbrk */
  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
  char*     new_brk;              /* return of 2nd sbrk call */
  INTERNAL_SIZE_T top_size;       /* new size of top chunk */

  mchunkptr old_top = top;        /* Record state of old top */
  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
  char*     old_end = (char*)(chunk_at_offset(old_top, old_top_size));

  /* Pad request with top_pad plus minimal overhead */

  INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
  unsigned long pagesz = malloc_getpagesize;

  /* If not the first time through, round to preserve page boundary */
  /* Otherwise, we need to correct to a page size below anyway. */
  /* (We also correct below if an intervening foreign sbrk call.) */

  if (sbrk_base != (char*)(-1))
    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);

  brk = (char*)(MORECORE (sbrk_size));

  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
  if (brk == (char*)(MORECORE_FAILURE) ||
      (brk < old_end && old_top != initial_top))
    return;

  sbrked_mem += sbrk_size;

  if (brk == old_end) /* can just add bytes to current top */
  {
    top_size = sbrk_size + old_top_size;
    set_head(top, top_size | PREV_INUSE);
  }
  else
  {
    if (sbrk_base == (char*)(-1)) /* First time through. Record base */
      sbrk_base = brk;
    else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
      sbrked_mem += brk - (char*)old_end;

    /* Guarantee alignment of first new chunk made from this space */
    front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
    if (front_misalign > 0)
    {
      correction = (MALLOC_ALIGNMENT) - front_misalign;
      brk += correction;
    }
    else
      correction = 0;

    /* Guarantee the next brk will be at a page boundary */

    correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
		   ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));

    /* Allocate correction */
    new_brk = (char*)(MORECORE (correction));
    if (new_brk == (char*)(MORECORE_FAILURE)) return;

    sbrked_mem += correction;

    top = (mchunkptr)brk;
    top_size = new_brk - brk + correction;
    set_head(top, top_size | PREV_INUSE);

    if (old_top != initial_top)
    {

      /* There must have been an intervening foreign sbrk call. */
      /* A double fencepost is necessary to prevent consolidation */

      /* If not enough space to do this, then user did something very wrong */
      if (old_top_size < MINSIZE)
      {
	set_head(top, PREV_INUSE); /* will force null return from malloc */
	return;
      }

      /* Also keep size a multiple of MALLOC_ALIGNMENT */
      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
      set_head_size(old_top, old_top_size);
      chunk_at_offset(old_top, old_top_size          )->size =
	SIZE_SZ|PREV_INUSE;
      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
	SIZE_SZ|PREV_INUSE;
      /* If possible, release the rest. */
      if (old_top_size >= MINSIZE)
	fREe(chunk2mem(old_top));
    }
  }

  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
    max_sbrked_mem = sbrked_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;

  /* We always land on a page boundary */
  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
}



/* Main public routines */


/*
  Malloc Algorithm:

    The requested size is first converted into a usable form, `nb'.
    This currently means to add 4 bytes overhead plus possibly more to
    obtain 8-byte alignment and/or to obtain a size of at least
    MINSIZE (currently 16 bytes), the smallest allocatable size.
    (All fits are considered `exact' if they are within MINSIZE bytes.)

    From there, the first of the following steps that succeeds is taken:

      1. The bin corresponding to the request size is scanned, and if
	 a chunk of exactly the right size is found, it is taken.

      2. The most recently remaindered chunk is used if it is big
	 enough. This is a form of (roving) first fit, used only in
	 the absence of exact fits. Runs of consecutive requests use
	 the remainder of the chunk used for the previous such request
	 whenever possible. This limited use of a first-fit style
	 allocation strategy tends to give contiguous chunks
	 coextensive lifetimes, which improves locality and can reduce
	 fragmentation in the long run.

      3. Other bins are scanned in increasing size order, using a
	 chunk big enough to fulfill the request, and splitting off
	 any remainder. This search is strictly by best-fit; i.e.,
	 the smallest (with ties going to approximately the least
	 recently used) chunk that fits is selected.

      4. If large enough, the chunk bordering the end of memory
	 (`top') is split off. (This use of `top' is in accord with
	 the best-fit search rule. In effect, `top' is treated as
	 larger (and thus less well fitting) than any other available
	 chunk, since it can be extended to be as large as necessary,
	 up to system limitations.)

      5. If the request size meets the mmap threshold and the
	 system supports mmap, and there are few enough currently
	 allocated mmapped regions, and a call to mmap succeeds,
	 the request is allocated via direct memory mapping.

      6. Otherwise, the top of memory is extended by
	 obtaining more space from the system (normally using sbrk,
	 but definable to anything else via the MORECORE macro).
	 Memory is gathered from the system (in system page-sized
	 units) in a way that allows chunks obtained across different
	 sbrk calls to be consolidated, but does not require
	 contiguous memory. Thus, it should be safe to intersperse
	 mallocs with other sbrk calls.


    All allocations are made from the `lowest' part of any found
    chunk. (The implementation invariant is that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use chunk,
    or the base of its memory arena.)

*/
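
/*
  A worked example of the padding step described above (sketch only;
  the real arithmetic is the request2size() macro in malloc.h,
  assuming the 4-byte overhead and 16-byte MINSIZE stated above):
  a request for 10 bytes gains 4 bytes of overhead and is rounded up
  to the minimum of 16; a request for 30 bytes becomes 30 + 4 rounded
  up to the next 8-byte multiple, 40.
*/
#if 0
static void example_request_padding(void)
{
	assert(request2size(10) == 16);
	assert(request2size(30) == 40);
}
#endif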

#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mchunkptr victim;                  /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size;       /* its size */
  int       idx;                     /* index for bin traversal */
  mbinptr   bin;                     /* associated bin */
  mchunkptr remainder;               /* remainder from a split */
  long      remainder_size;          /* its size */
  int       remainder_index;         /* its bin index */
  unsigned long block;               /* block traverser bit */
  int       startidx;                /* first bin of a traversed block */
  mchunkptr fwd;                     /* misc temp for linking */
  mchunkptr bck;                     /* misc temp for linking */
  mbinptr q;                         /* misc temp */

  INTERNAL_SIZE_T nb;

#ifdef CONFIG_SYS_MALLOC_F_LEN
	if (!(gd->flags & GD_FLG_RELOC)) {
		ulong new_ptr;
		void *ptr;

		new_ptr = gd->malloc_ptr + bytes;
		if (new_ptr > gd->malloc_limit)
			panic("Out of pre-reloc memory");
		ptr = map_sysmem(gd->malloc_base + gd->malloc_ptr, bytes);
		gd->malloc_ptr = ALIGN(new_ptr, sizeof(new_ptr));
		return ptr;
	}
#endif

	/* check if mem_malloc_init() was run */
	if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) {
		/* not initialized yet */
		return NULL;
	}

  if ((long)bytes < 0) return NULL;

  nb = request2size(bytes);  /* padded request size */

  /* Check for exact match in a bin */

  if (is_small_request(nb))  /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins. */

    q = bin_at(idx);
    victim = last(q);

    /* Also scan the next one, since it would have a remainder < MINSIZE */
    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }
    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */

  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = victim_size - nb;

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
	--idx; /* adjust to rescan below after checking last remainder */
	break;
      }

      else if (remainder_size >= 0) /* exact fit */
      {
	unlink(victim, bck, fwd);
	set_inuse_bit_at_offset(victim, victim_size);
	check_malloced_chunk(victim, nb);
	return chunk2mem(victim);
      }
    }

    ++idx;

  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder->fd) != last_remainder)
  {
    victim_size = chunksize(victim);
    remainder_size = victim_size - nb;

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      link_last_remainder(remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    clear_last_remainder;

    if (remainder_size >= 0)  /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /* Else place in bin */

    frontlink(victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks_r)
  {

    /* Get to the first marked block */

    if ( (block & binblocks_r) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks_r) == 0)
      {
	idx += BINBLOCKWIDTH;
	block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx;          /* (track incomplete blocks) */
      q = bin = bin_at(idx);

      /* For each bin in this block ... */
      do
      {
	/* Find and use first big enough chunk ... */

	for (victim = last(bin); victim != bin; victim = victim->bk)
	{
	  victim_size = chunksize(victim);
	  remainder_size = victim_size - nb;

	  if (remainder_size >= (long)MINSIZE) /* split */
	  {
	    remainder = chunk_at_offset(victim, nb);
	    set_head(victim, nb | PREV_INUSE);
	    unlink(victim, bck, fwd);
	    link_last_remainder(remainder);
	    set_head(remainder, remainder_size | PREV_INUSE);
	    set_foot(remainder, remainder_size);
	    check_malloced_chunk(victim, nb);
	    return chunk2mem(victim);
	  }

	  else if (remainder_size >= 0)  /* take */
	  {
	    set_inuse_bit_at_offset(victim, victim_size);
	    unlink(victim, bck, fwd);
	    check_malloced_chunk(victim, nb);
	    return chunk2mem(victim);
	  }

	}

	bin = next_bin(bin);

      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do   /* Possibly backtrack to try to clear a partial block */
      {
	if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
	{
	  av_[1] = (mbinptr)(binblocks_r & ~block);
	  break;
	}
	--startidx;
	q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks_r && (block != 0) )
      {
	while ((block & binblocks_r) == 0)
	{
	  idx += BINBLOCKWIDTH;
	  block <<= 1;
	}
      }
      else
	break;
    }
  }


  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  {

#if HAVE_MMAP
    /* If big and would otherwise need to extend, try to use mmap instead */
    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
	(victim = mmap_chunk(nb)) != 0)
      return chunk2mem(victim);
#endif

    /* Try to extend */
    malloc_extend_top(nb);
    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
      return NULL; /* propagate failure */
  }

  victim = top;
  set_head(victim, nb | PREV_INUSE);
  top = chunk_at_offset(victim, nb);
  set_head(top, remainder_size | PREV_INUSE);
  check_malloced_chunk(victim, nb);
  return chunk2mem(victim);

}



/*

  free() algorithm:

    cases:

       1. free(0) has no effect.

       2. If the chunk was allocated via mmap, it is released via munmap().

       3. If a returned chunk borders the current high end of memory,
	  it is consolidated into the top, and if the total unused
	  topmost memory exceeds the trim threshold, malloc_trim is
	  called.

       4. Other chunks are consolidated as they arrive, and
	  placed in corresponding bins. (This includes the case of
	  consolidating with the current `last_remainder').

*/
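
/*
  Illustration of cases 3 and 4 (a sketch only): freeing two
  allocations that happen to sit side by side yields one consolidated
  free chunk, and freeing the block that borders `top' melts it back
  into the top chunk.
*/
#if 0
static void example_free_consolidation(void)
{
	void *a = malloc(64);
	void *b = malloc(64);   /* typically carved adjacent to a */

	free(a);                /* case 4: binned; neighbours still in use */
	free(b);                /* coalesces with a's chunk and/or top */
}
#endif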


#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mchunkptr p;         /* chunk corresponding to mem */
  INTERNAL_SIZE_T hd;  /* its head field */
  INTERNAL_SIZE_T sz;  /* its size */
  int       idx;       /* its bin index */
  mchunkptr next;      /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz; /* its size */
  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  mchunkptr bck;       /* misc temp for linking */
  mchunkptr fwd;       /* misc temp for linking */
  int       islr;      /* track whether merging with last_remainder */

#ifdef CONFIG_SYS_MALLOC_F_LEN
	/* free() is a no-op - all the memory will be freed on relocation */
	if (!(gd->flags & GD_FLG_RELOC))
		return;
#endif

  if (mem == NULL)                   /* free(0) has no effect */
    return;

  p = mem2chunk(mem);
  hd = p->size;

#if HAVE_MMAP
  if (hd & IS_MMAPPED)               /* release mmapped memory. */
  {
    munmap_chunk(p);
    return;
  }
#endif

  check_inuse_chunk(p);

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);

  if (next == top)                   /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE))          /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -((long) prevsz));
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top = p;
    if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
      malloc_trim(top_pad);
    return;
  }

  set_head(next, nextsz);            /* clear inuse bit */

  islr = 0;

  if (!(hd & PREV_INUSE))            /* consolidate backward */
  {
    prevsz = p->prev_size;
    p = chunk_at_offset(p, -((long) prevsz));
    sz += prevsz;

    if (p->fd == last_remainder)     /* keep as last_remainder */
      islr = 1;
    else
      unlink(p, bck, fwd);
  }

  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
  {
    sz += nextsz;

    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
    {
      islr = 1;
      link_last_remainder(p);
    }
    else
      unlink(next, bck, fwd);
  }


  set_head(p, sz | PREV_INUSE);
  set_foot(p, sz);
  if (!islr)
    frontlink(p, sz, idx, bck, fwd);
}




/*

  Realloc algorithm:

    Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if their reallocation is for additional space, they are
    copied. If for less, they are just left alone.

    Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken. There are several different ways that a chunk could be
    extended. All are tried:

       * Extending forward into following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space.
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space.

    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
    size argument of zero (re)allocates a minimum-sized chunk.

    If the reallocation is for less space, and the new request is for
    a `small' (<512 bytes) size, then the newly unused space is lopped
    off and freed.

    The old Unix realloc convention of allowing the last-freed chunk
    to be used as an argument to realloc is no longer supported.
    I don't know of any programs still relying on this feature,
    and allowing it would also allow too many other incorrect
    usages of realloc to be sensible.

*/
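
/*
  The usual calling pattern (a sketch only): keep the old pointer
  until realloc is known to have succeeded, since a failed realloc
  leaves the original block intact.
*/
#if 0
static char *example_grow_buffer(char *buf, size_t new_len)
{
	char *tmp = realloc(buf, new_len);

	if (tmp == NULL) {
		free(buf);      /* old block is still valid; release it */
		return NULL;
	}
	return tmp;
}
#endif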


#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  INTERNAL_SIZE_T    nb;      /* padded request size */

  mchunkptr oldp;             /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T    oldsize; /* its size */

  mchunkptr newp;             /* chunk to return */
  INTERNAL_SIZE_T    newsize; /* its size */
  Void_t*   newmem;           /* corresponding user mem */

  mchunkptr next;             /* next contiguous chunk after oldp */
  INTERNAL_SIZE_T  nextsize;  /* its size */

  mchunkptr prev;             /* previous contiguous chunk before oldp */
  INTERNAL_SIZE_T  prevsize;  /* its size */

  mchunkptr remainder;        /* holds split off extra space from newp */
  INTERNAL_SIZE_T  remainder_size;   /* its size */

  mchunkptr bck;              /* misc temp for linking */
  mchunkptr fwd;              /* misc temp for linking */

#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) { fREe(oldmem); return 0; }
#endif

  if ((long)bytes < 0) return NULL;

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == NULL) return mALLOc(bytes);

#ifdef CONFIG_SYS_MALLOC_F_LEN
	if (!(gd->flags & GD_FLG_RELOC)) {
		/* This is harder to support and should not be needed */
		panic("pre-reloc realloc() is not supported");
	}
#endif

  newp    = oldp    = mem2chunk(oldmem);
  newsize = oldsize = chunksize(oldp);


  nb = request2size(bytes);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp))
  {
#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if(newp) return chunk2mem(newp);
#endif
    /* Note the extra SIZE_SZ overhead. */
    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
    /* Must alloc, copy, free. */
    newmem = mALLOc(bytes);
    if (newmem == 0) return 0; /* propagate failure */
    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
    munmap_chunk(oldp);
    return newmem;
  }
#endif

  check_inuse_chunk(oldp);

  if ((long)(oldsize) < (long)(nb))
  {

    /* Try expanding forward */

    next = chunk_at_offset(oldp, oldsize);
    if (next == top || !inuse(next))
    {
      nextsize = chunksize(next);

      /* Forward into top only if a remainder */
      if (next == top)
      {
	if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
	{
	  newsize += nextsize;
	  top = chunk_at_offset(oldp, nb);
	  set_head(top, (newsize - nb) | PREV_INUSE);
	  set_head_size(oldp, nb);
	  return chunk2mem(oldp);
	}
      }

      /* Forward into next chunk */
      else if (((long)(nextsize + newsize) >= (long)(nb)))
      {
	unlink(next, bck, fwd);
	newsize += nextsize;
	goto split;
      }
    }
    else
    {
      next = NULL;
      nextsize = 0;
    }

    /* Try shifting backwards. */

    if (!prev_inuse(oldp))
    {
      prev = prev_chunk(oldp);
      prevsize = chunksize(prev);

      /* try forward + backward first to save a later consolidation */

      if (next != NULL)
      {
	/* into top */
	if (next == top)
	{
	  if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
	  {
	    unlink(prev, bck, fwd);
	    newp = prev;
	    newsize += prevsize + nextsize;
	    newmem = chunk2mem(newp);
	    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
	    top = chunk_at_offset(newp, nb);
	    set_head(top, (newsize - nb) | PREV_INUSE);
	    set_head_size(newp, nb);
	    return newmem;
	  }
	}

	/* into next chunk */
	else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
	{
	  unlink(next, bck, fwd);
	  unlink(prev, bck, fwd);
	  newp = prev;
	  newsize += nextsize + prevsize;
	  newmem = chunk2mem(newp);
	  MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
	  goto split;
	}
      }

      /* backward only */
      if (prev != NULL && (long)(prevsize + newsize) >= (long)nb)
      {
	unlink(prev, bck, fwd);
	newp = prev;
	newsize += prevsize;
	newmem = chunk2mem(newp);
	MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
	goto split;
      }
    }

    /* Must allocate */

    newmem = mALLOc (bytes);

    if (newmem == NULL)  /* propagate failure */
      return NULL;

    /* Avoid copy if newp is next chunk after oldp. */
    /* (This can only happen when new chunk is sbrk'ed.) */

    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
    {
      newsize += chunksize(newp);
      newp = oldp;
      goto split;
    }

    /* Otherwise copy, free, and exit */
    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
    fREe(oldmem);
    return newmem;
  }


 split:  /* split off extra room in old or expanded chunk */

  if (newsize - nb >= MINSIZE) /* split off remainder */
  {
    remainder = chunk_at_offset(newp, nb);
    remainder_size = newsize - nb;
    set_head_size(newp, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_inuse_bit_at_offset(remainder, remainder_size);
    fREe(chunk2mem(remainder)); /* let free() deal with it */
  }
  else
  {
    set_head_size(newp, newsize);
    set_inuse_bit_at_offset(newp, newsize);
  }

  check_inuse_chunk(newp);
  return chunk2mem(newp);
}




/*

  memalign algorithm:

    memalign requests more than enough space from malloc, finds a spot
    within that chunk that meets the alignment request, and then
    possibly frees the leading and trailing space.

    The alignment argument must be a power of two. This property is not
    checked by memalign, so misuse may result in random runtime errors.

    8-byte alignment is guaranteed by normal malloc calls, so don't
    bother calling memalign with an argument of 8 or less.

    Overreliance on memalign is a sure way to fragment space.

*/
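
/*
  Typical use (a sketch only): callers usually want buffers aligned
  to a cache line or DMA boundary. The alignment value below is a
  made-up cache-line size; real code would use the architecture's
  alignment macro instead.
*/
#if 0
static void *example_aligned_buffer(size_t len)
{
	return memalign(64, len);   /* 64 must be a power of two */
}
#endif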


#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T    nb;      /* padded request size */
  char*     m;                /* memory returned by malloc call */
  mchunkptr p;                /* corresponding chunk */
  char*     brk;              /* alignment point within p */
  mchunkptr newp;             /* chunk to return */
  INTERNAL_SIZE_T  newsize;   /* its size */
  INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
  mchunkptr remainder;        /* spare room at end to split off */
  long      remainder_size;   /* its size */

  if ((long)bytes < 0) return NULL;

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Call malloc with worst case padding to hit alignment. */

  nb = request2size(bytes);
  m = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == NULL) return NULL; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
  {
#if HAVE_MMAP
    if(chunk_is_mmapped(p))
      return chunk2mem(p); /* nothing more to do */
#endif
  }
  else /* misaligned */
  {
    /*
      Find an aligned spot inside chunk.
      Since we need to give back leading space in a chunk of at
      least MINSIZE, if the first calculation places us at
      a spot with less than MINSIZE leader, we can move to the
      next aligned spot -- we've allocated enough total room so that
      this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

#if HAVE_MMAP
    if(chunk_is_mmapped(p))
    {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }
#endif

    /* give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */

  remainder_size = chunksize(p) - nb;

  if (remainder_size >= (long)MINSIZE)
  {
    remainder = chunk_at_offset(p, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_head_size(p, nb);
    fREe(chunk2mem(remainder));
  }

  check_inuse_chunk(p);
  return chunk2mem(p);

}




/*
  valloc just invokes memalign with alignment argument equal
  to the page size of the system (or as near to this as can
  be figured out from all the includes/defines above.)
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  return mEMALIGn (malloc_getpagesize, bytes);
}

/*
  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate the request
*/


#if __STD_C
Void_t* pvALLOc(size_t bytes)
#else
Void_t* pvALLOc(bytes) size_t bytes;
#endif
{
  size_t pagesize = malloc_getpagesize;
  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
}

/*

  calloc calls malloc, then zeroes out the allocated chunk.

*/
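
/*
  Note that cALLOc below computes n * elem_size without an overflow
  check; a caller that cannot trust its operands can guard the
  multiplication itself. A sketch (illustration only):
*/
#if 0
static Void_t* example_checked_calloc(size_t n, size_t elem_size)
{
	if (elem_size != 0 && n > (size_t)-1 / elem_size)
		return NULL;    /* n * elem_size would wrap around */
	return calloc(n, elem_size);
}
#endif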

#if __STD_C
Void_t* cALLOc(size_t n, size_t elem_size)
#else
Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T csz;

  INTERNAL_SIZE_T sz = n * elem_size;


  /* check if malloc_extend_top was called, in which case there's no need to clear */
#if MORECORE_CLEARS
  mchunkptr oldtop = top;
  INTERNAL_SIZE_T oldtopsize = chunksize(top);
#endif
  Void_t* mem = mALLOc (sz);

  if ((long)n < 0) return NULL;

  if (mem == NULL)
    return NULL;
  else
  {
#ifdef CONFIG_SYS_MALLOC_F_LEN
	if (!(gd->flags & GD_FLG_RELOC)) {
		MALLOC_ZERO(mem, sz);
		return mem;
	}
#endif
    p = mem2chunk(mem);

    /* Two optional cases in which clearing not necessary */


#if HAVE_MMAP
    if (chunk_is_mmapped(p)) return mem;
#endif

    csz = chunksize(p);

#if MORECORE_CLEARS
    if (p == oldtop && csz > oldtopsize)
    {
      /* clear only the bytes from non-freshly-sbrked memory */
      csz = oldtopsize;
    }
#endif

    MALLOC_ZERO(mem, csz - SIZE_SZ);
    return mem;
  }
}

/*

  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons.

*/

#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
#if __STD_C
void cfree(Void_t *mem)
#else
void cfree(mem) Void_t *mem;
#endif
{
  fREe(mem);
}
#endif



/*

  Malloc_trim gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.

*/
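
/*
  Typical call site (a sketch only): after releasing a large
  temporary buffer, give the unused tail of the heap back, keeping a
  made-up 4 KiB of slack for upcoming allocations.
*/
#if 0
static void example_trim_after_big_free(void *big_buffer)
{
	free(big_buffer);
	(void)malloc_trim(4096);   /* returns 1 if memory was released */
}
#endif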

#if __STD_C
int malloc_trim(size_t pad)
#else
int malloc_trim(pad) size_t pad;
#endif
{
  long  top_size;        /* Amount of top-most memory */
  long  extra;           /* Amount to release */
  char* current_brk;     /* address returned by pre-check sbrk call */
  char* new_brk;         /* address returned by negative sbrk call */

  unsigned long pagesz = malloc_getpagesize;

  top_size = chunksize(top);
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra < (long)pagesz)  /* Not enough memory to release */
    return 0;

  else
  {
    /* Test to make sure no one else called sbrk */
    current_brk = (char*)(MORECORE (0));
    if (current_brk != (char*)(top) + top_size)
      return 0;     /* Apparently we don't own memory; must fail */

    else
    {
      new_brk = (char*)(MORECORE (-extra));

      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
      {
	/* Try to figure out what we have */
	current_brk = (char*)(MORECORE (0));
	top_size = current_brk - (char*)top;
	if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
	{
	  sbrked_mem = current_brk - sbrk_base;
	  set_head(top, top_size | PREV_INUSE);
	}
	check_chunk(top);
	return 0;
      }

      else
      {
	/* Success. Adjust top accordingly. */
	set_head(top, (top_size - extra) | PREV_INUSE);
	sbrked_mem -= extra;
	check_chunk(top);
	return 1;
      }
    }
  }
}



/*
  malloc_usable_size:

  This routine tells you how many bytes you can actually use in an
  allocated chunk, which may be more than you requested (although
  often not). You can use this many bytes without worrying about
  overwriting other allocated objects. Not a particularly great
  programming practice, but still sometimes useful.

*/
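
/*
  Example use (a sketch only): a caller may exploit the slack that
  alignment and the minimum chunk size leave in an allocation.
*/
#if 0
static void example_usable_size(void)
{
	void *p = malloc(10);

	if (p) {
		/* At least 10, often more (e.g. 12 with a 4-byte SIZE_SZ) */
		size_t real = malloc_usable_size(p);

		memset(p, 0, real);   /* safe up to the reported size */
		free(p);
	}
}
#endif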

#if __STD_C
size_t malloc_usable_size(Void_t* mem)
#else
size_t malloc_usable_size(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem == NULL)
    return 0;
  else
  {
    p = mem2chunk(mem);
    if(!chunk_is_mmapped(p))
    {
      if (!inuse(p)) return 0;
      check_inuse_chunk(p);
      return chunksize(p) - SIZE_SZ;
    }
    return chunksize(p) - 2*SIZE_SZ;
  }
}




/* Utility to update current_mallinfo for malloc_stats and mallinfo() */

#ifdef DEBUG
static void malloc_update_mallinfo()
{
  int i;
  mbinptr b;
  mchunkptr p;
#ifdef DEBUG
  mchunkptr q;
#endif

  INTERNAL_SIZE_T avail = chunksize(top);
  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;

  for (i = 1; i < NAV; ++i)
  {
    b = bin_at(i);
    for (p = last(b); p != b; p = p->bk)
    {
#ifdef DEBUG
      check_free_chunk(p);
      for (q = next_chunk(p);
	   q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
	   q = next_chunk(q))
	check_inuse_chunk(q);
#endif
      avail += chunksize(p);
      navail++;
    }
  }

  current_mallinfo.ordblks = navail;
  current_mallinfo.uordblks = sbrked_mem - avail;
  current_mallinfo.fordblks = avail;
  current_mallinfo.hblks = n_mmaps;
  current_mallinfo.hblkhd = mmapped_mem;
  current_mallinfo.keepcost = chunksize(top);

}
#endif /* DEBUG */



/*

  malloc_stats:

  Prints the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), the maximum
  number of simultaneous mmap regions used, and the current number
  of bytes allocated via malloc (or realloc, etc) but not yet
  freed. (Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead.)

*/

#ifdef DEBUG
void malloc_stats()
{
  malloc_update_mallinfo();
  printf("max system bytes = %10u\n",
	  (unsigned int)(max_total_mem));
  printf("system bytes     = %10u\n",
	  (unsigned int)(sbrked_mem + mmapped_mem));
  printf("in use bytes     = %10u\n",
	  (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
#if HAVE_MMAP
  printf("max mmap regions = %10u\n",
	  (unsigned int)max_n_mmaps);
#endif
}
#endif /* DEBUG */

/*
  mallinfo returns a copy of updated current mallinfo.
*/

#ifdef DEBUG
struct mallinfo mALLINFo()
{
  malloc_update_mallinfo();
  return current_mallinfo;
}
#endif /* DEBUG */




/*
  mallopt:

  mallopt is the general SVID/XPG interface to tunable parameters.
  The format is to provide a (parameter-number, parameter-value) pair.
  mallopt then sets the corresponding parameter to the argument
  value if it can (i.e., so long as the value is meaningful),
  and returns 1 if successful else 0.

  See descriptions of tunable parameters above.

*/
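
/*
  Example (a sketch only; the threshold value is illustrative): raise
  the trim threshold so that free() only returns memory to the system
  after very large releases.
*/
#if 0
static void example_tune_allocator(void)
{
	/* Returns 1 on success, 0 if the parameter is unsupported */
	if (!mallopt(M_TRIM_THRESHOLD, 256 * 1024))
		puts("mallopt: unsupported parameter\n");
}
#endif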

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0; else n_mmaps_max = value; return 1;
#endif

    default:
      return 0;
  }
}

/*

History:

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
	* Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
	  (e.g. WIN32 platforms)
	* Cleanup up header file inclusion for WIN32 platforms
	* Cleanup code to avoid Microsoft Visual C++ compiler complaints
	* Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
	  memory allocation routines
	* Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
	* Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
	  usage of 'assert' in non-WIN32 code
	* Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
	  avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
	foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
	Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
	avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
	(raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
	courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
	H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
	the work required to undo bad preallocations exceeds
	the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
	no scheme using them consistently outperforms those that don't
	given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
	Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
	(wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
	(eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
	from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
	with gcc & native cc (hp, dec only) allowing
	Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
wdenk8bde7f72003-06-27 21:31:46 +00003365 structure of old version, but most details differ.)
wdenk217c9da2002-10-25 20:35:49 +00003366
3367*/