path: root/src/lib/libc/stdlib/malloc.c
Commit message (author, date, lines -/+); each commit touches only this file.
* remove unneeded semicolons; checked by millert@ (jsg, 2024-09-20, -2/+2)
* In _malloc_init(), round up the region being mprotected RW to the malloc
  page size, rather than relying upon mprotect to round up to the actual mmu
  page size. This repairs malloc operation on systems where the malloc page
  size (1 << _MAX_PAGE_SHIFT) is larger than the mmu page size. ok otto@
  (miod, 2024-03-30, -11/+11)
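  A minimal sketch of the rounding described in the entry above, assuming a
  MALLOC_PAGESIZE-style constant for 1 << _MAX_PAGE_SHIFT; the macro and helper
  names are illustrative, not the actual malloc.c identifiers:

    #include <sys/mman.h>
    #include <stddef.h>

    #define MALLOC_PAGESIZE (1UL << 12)             /* assumed value */
    #define MALLOC_PAGEMASK (MALLOC_PAGESIZE - 1)

    static int
    protect_rw(void *p, size_t len)
    {
        /* round up to the malloc page size ourselves instead of relying
         * on mprotect() to round to the (possibly smaller) mmu page size */
        size_t rounded = (len + MALLOC_PAGEMASK) & ~MALLOC_PAGEMASK;

        return mprotect(p, rounded, PROT_READ | PROT_WRITE);
    }
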
* A small cleanup of malloc_bytes(), getting rid of a goto and a tiny
  bit of optimization; ok tb@ asou@ (otto, 2023-12-19, -29/+27)
* Save backtraces to show in leak dump. Depth of backtrace set by
  malloc option D (aka 1), 2, 3 or 4. No performance impact if not used.
  ok asou@ (otto, 2023-12-04, -79/+174)
* KNF plus fixed a few signed vs unsigned compares (that were actually
  not real problems) (otto, 2023-11-04, -22/+33)
* A few micro-optimizations; ok asou@ (otto, 2023-10-26, -20/+15)
* When option D is active, store callers for all chunks; this avoids
  the 0x0 call sites for leak reports. Also display more info on detected
  write of free chunks: print the info about where the chunk was allocated,
  and for the preceding chunk as well. ok asou@ (otto, 2023-10-22, -75/+151)
* Print a warning message when memory cannot be allocated in putleakinfo().
  ok otto. (asou, 2023-09-09, -2/+20)
* Recommit "Allow to ask for deeper callers for leak reports using
  malloc options". Now only enabled for platforms where it's known to work,
  and written as inline functions instead of a macro.
  (otto, 2023-06-30, -10/+47)
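  A hedged sketch of capturing deeper callers via an inline function, as the
  entry above describes; the argument to __builtin_return_address() must be a
  compile-time constant (hence the switch), and as the reverted attempt below
  notes, levels above 0 only work on some platforms. Name and depth are
  assumptions:

    static inline void *
    caller_at(int level)
    {
        switch (level) {
        case 0:
            return __builtin_return_address(0);
        case 1:
            return __builtin_return_address(1);
        case 2:
            return __builtin_return_address(2);
        default:
            return NULL;
        }
    }
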
* Revert previous, not all platforms allow compiling
  __builtin_return_address(a) with a != 0. (otto, 2023-06-23, -13/+2)
* Allow to ask for deeper callers for leak reports using malloc options.
  ok deraadt@ (otto, 2023-06-22, -2/+13)
* Add a portable version and an m88k-specific version of the lb() function,
  because unfortunately gcc3 does not have __builtin_clz(). ok miod@ otto@
  (aoyama, 2023-06-07, -1/+21)
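  A sketch of a portable lb() (integer log2) helper along the lines of the
  entry above; the fallback loop stands in for compilers such as gcc3 that
  lack __builtin_clz(), and the exact bodies are illustrative:

    static unsigned int
    lb(unsigned int x)
    {
    #if defined(__GNUC__) && __GNUC__ >= 4
        /* 31 - clz(x) is floor(log2(x)) for x != 0 */
        return 31 - __builtin_clz(x);
    #else
        /* portable fallback for compilers without __builtin_clz() */
        unsigned int r = 0;

        while (x >>= 1)
            r++;
        return r;
    #endif
    }
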
* More thorough write-after-free checks.
  On free, chunks (the pieces of a page used for smaller allocations) are
  junked and then validated after they leave the delayed free list. So after
  free, a chunk always contains junk bytes. This means that if we start with
  the right contents for a new page of chunks, we can *validate* instead of
  *write* junk bytes when (re)-using a chunk. With this, we can detect
  write-after-free when a chunk is recycled, not just when a chunk is in the
  delayed free list. We do a little bit more work on initial allocation of a
  page of chunks and when re-using (as we validate now even on junk level 1).
  Also: some extra consistency checks for recallocarray(3) and fixes in error
  messages to make them more consistent, with man page bits. Plus regress
  additions. (otto, 2023-06-04, -10/+22)
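  An illustrative sketch of the validate-instead-of-write idea from the entry
  above; the junk value and function name are assumptions rather than the
  exact malloc.c internals:

    #include <stdlib.h>
    #include <stddef.h>

    #define SOME_FREEJUNK 0xdf      /* assumed junk byte for freed memory */

    static void
    validate_junk(const unsigned char *p, size_t sz)
    {
        size_t i;

        /* a freed chunk must still be full of junk bytes; anything else
         * means something wrote to it after free() */
        for (i = 0; i < sz; i++)
            if (p[i] != SOME_FREEJUNK)
                abort();
    }
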
* Remove malloc interposition, a workaround that was once needed for emacs.
  ok guenther@ (otto, 2023-05-27, -7/+7)
* As mmap(2) is no longer a LOCK syscall, do away with the extra
  unlock-lock dance; it serves no real purpose any more. Confirmed by a
  small performance increase in tests. ok tb@ (otto, 2023-05-10, -23/+1)
* remove duplicate include. ok otto@ (jsg, 2023-04-21, -2/+1)
* Dump (leak) info using utrace(2) and compile the code always in,
  except for bootblocks. This way we have built-in leak detection always
  (if enabled by malloc flags). See man pages for details.
  (otto, 2023-04-16, -141/+182)
* Introduce variation in location of junked bytes; ok tb@
  (otto, 2023-04-05, -3/+8)
* Check all chunks in the delayed free list for write-after-free.
  Should catch more of them and closer (in time) to the WAF. ok tb@
  (otto, 2023-04-01, -5/+21)
* Change malloc chunk sizes to be fine grained.
  The basic idea is simple: one of the reasons the recent sshd bug is
  potentially exploitable is that an (erroneously) freed malloc chunk gets
  re-used in a different role. malloc has power of two chunk sizes and so one
  page of chunks holds many different types of allocations. Userland malloc
  has no knowledge of types, we only know about sizes. So I changed that to
  use finer-grained chunk sizes. This has some performance impact as we need
  to allocate chunk pages in more cases. Gain it back by allocating chunk_info
  pages in a bundle, and using fewer buckets if malloc option S is not set.
  The chunk sizes used are 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224,
  256, 320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048 (and a
  few more for sparc64 with its 8k sized pages and loongson with its 16k
  pages). If malloc option S (or rather cache size 0) is used we use strict
  multiples of 16 sized chunks, to get as many buckets as possible. ssh(d)
  enabled malloc option S; in general, security sensitive programs should.
  See the find_bucket() and bin_of() functions. Thanks to Tony Finch for
  pointing me to code to compute nice bucket sizes. ok tb@
  (otto, 2023-03-25, -102/+142)
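  A rough sketch of mapping a request size to one of the chunk sizes listed
  above; the real find_bucket()/bin_of() compute the bucket rather than
  scanning a table, so the linear search here is purely illustrative:

    #include <stddef.h>

    static const size_t bucket_sizes[] = {
        16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256,
        320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048
    };

    static int
    bucket_for(size_t sz)
    {
        size_t i;

        for (i = 0; i < sizeof(bucket_sizes) / sizeof(bucket_sizes[0]); i++)
            if (sz <= bucket_sizes[i])
                return (int)i;
        return -1;      /* larger requests become page-sized allocations */
    }
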
* There is no reason to-be-cleared chunks cannot participate in delayed
  freeing; ok tb@ (otto, 2023-02-27, -27/+23)
* Change the way malloc_init() works so that the main data structures
  can be made immutable to provide extra protection. Also init pools
  on-demand: only pools that are actually used are initialized.
  Tested by many (otto, 2022-12-27, -65/+66)
* put the malloc_readonly struct into the "openbsd.mutable" section, so
  that the kernel and ld.so will know not to mark it immutable. malloc
  handles the read/write transitions by itself. (deraadt, 2022-10-14, -2/+3)
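  A hedged sketch of placing data in a mutable section as the entry above
  describes; the struct members are invented and the exact section name
  spelling should be checked against the real source:

    /* keep the otherwise read-only malloc state in a section the kernel
     * and ld.so will not mark immutable */
    static struct malloc_readonly {
        int malloc_mutexes;     /* illustrative members only */
        int malloc_freeunmap;
    } mopts __attribute__((section(".openbsd.mutable")));
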
* To figure out whether a large allocation can be grown into the
  following page(s) we've been first mquery()ing for it, mmap()ing w/o
  MAP_FIXED if available, and then munmap()ing if there was a race. Instead,
  just try it directly with mmap(MAP_FIXED | __MAP_NOREPLACE).
  Tested in snaps for weeks. ok deraadt@ (guenther, 2022-06-30, -12/+2)
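  A minimal sketch of the single-call approach from the entry above;
  __MAP_NOREPLACE is the OpenBSD-specific flag that makes MAP_FIXED fail
  rather than replace an existing mapping, and the helper name is an
  assumption:

    #include <sys/mman.h>
    #include <stddef.h>

    static void *
    grow_in_place(void *hint, size_t extra)
    {
        void *p;

        /* try to map directly after the existing region; fails (rather
         * than clobbering another mapping) if the space is already taken */
        p = mmap(hint, extra, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREPLACE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }
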
* Currently malloc caches a number of freed regions up to 128k
  in size. This cache is indexed by size (in # of pages), so it is very
  quick to check. Some programs allocate and deallocate larger allocations
  in a frantic way. Accommodate those programs by also keeping a cache of
  regions between 128k and 2M, in a cache of variable sized regions.
  Tested by many in snaps; ok deraadt@ (otto, 2022-02-26, -33/+160)
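  A simplified sketch of the size-in-pages indexed cache described above; the
  structures and limits are assumptions, and the variable-sized cache for
  128k-2M regions is only hinted at in the comments:

    #include <stddef.h>

    #define MALLOC_PAGESIZE     4096                            /* assumed */
    #define MAX_SMALLCACHEABLE  (128 * 1024 / MALLOC_PAGESIZE)

    struct smallcache {
        void *pages[4];     /* a few cached regions per size class */
        int   used;
    };

    static struct smallcache smallcache[MAX_SMALLCACHEABLE];

    /* indexed by size in pages, so a hit is a single array lookup */
    static void *
    cache_take(size_t psz)
    {
        struct smallcache *c;

        if (psz == 0 || psz > MAX_SMALLCACHEABLE)
            return NULL;    /* 128k..2M regions go to the other cache */
        c = &smallcache[psz - 1];
        return c->used > 0 ? c->pages[--c->used] : NULL;
    }
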
* Switch two calls from memset() to explicit_bzero().
  This matches the documented behavior more obviously and ensures that these
  aren't optimized away, although this is unlikely. Discussed with deraadt
  and otto (tb, 2021-09-19, -3/+3)
* Make MALLOC_STATS compile again; noted by Omar Polo and Joe Nelson
  (otto, 2021-07-23, -2/+2)
* An extra internal consistency check and a missing stats adjustment. ok tb@
  (otto, 2021-04-09, -1/+4)
* Change the implementation of the malloc cache to keep lists of
  regions of a given size. In snaps for a while, committing since no issues
  were reported and a wider audience is good. ok deraadt@
  (otto, 2021-03-09, -152/+118)
* - Make use of the fact that we know how the chunks are aligned, and
    write 8 bytes at a time by using a uint64_t pointer. For an allocation
    a max of 4 such uint64_t's are written spread over the allocation. For
    page sized and larger, the first page is junked in such a way.
  - Delayed free of a small chunk checks the corresponding way.
  - Pages ending up in the cache are validated upon unmapping or re-use.
  In snaps for a while (otto, 2021-02-25, -46/+80)
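  An illustrative sketch of junking 8 bytes at a time through a uint64_t
  pointer as described above; the pattern value and the spread-over-the-
  allocation logic are assumptions, not the committed code:

    #include <stdint.h>
    #include <stddef.h>

    #define SOME_FREEJUNK_ULL 0xdfdfdfdfdfdfdfdfULL     /* assumed pattern */

    static void
    junk_free(void *p, size_t sz)
    {
        uint64_t *q = p;                /* chunks are suitably aligned */
        size_t    n = sz / sizeof(uint64_t);
        size_t    i, step = 1;

        /* spread at most 4 such writes over a larger allocation */
        if (n > 4)
            step = n / 4;
        for (i = 0; i < n; i += step)
            q[i] = SOME_FREEJUNK_ULL;
    }
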
* mapalign() only handles allocations >= a page; problem found by and ok
  semarie@ (otto, 2020-11-23, -1/+3)
* make fixed-sized fixed-value mib[] arrays be const.
  ok guenther tb millert (deraadt, 2020-10-12, -4/+3)
* As noted by tb@, the previous commit only removed an unused function.
  So redo previous commit properly: use random value for canary bytes;
  ok tb@. (otto, 2020-10-09, -4/+9)
* Use random value for canary bytes; ok tb@ (otto, 2020-10-06, -23/+1)
* For page-sized and larger allocations do not put the pages we're
  shaving off into the cache but unmap them. Pages in the cache get re-used
  and then a future grow of the first allocation will be hampered. Also make
  realloc a no-op for small shrinkage. ok deraadt@ (otto, 2020-09-06, -21/+18)
* When system calls indicate an error they return -1, not some arbitrary
  value < 0. errno is only updated in this case. Change all (most?) callers
  of syscalls to follow this better, and let's see if this strictness helps
  us in the future. (deraadt, 2019-06-28, -2/+2)
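  A small example of the stricter convention: compare the syscall return
  value against exactly -1, since errno is only meaningful in that case; the
  wrapper is illustrative:

    #include <sys/mman.h>
    #include <stddef.h>

    static int
    release_pages(void *p, size_t len)
    {
        /* exactly -1 signals an error; any other return is success */
        if (munmap(p, len) == -1)
            return -1;
        return 0;
    }
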
* Only override size of chunk if we're not given the actual length.
  Fixes malloc_conceal...freezero with malloc options C and/or G.
  (otto, 2019-05-23, -2/+3)
* Introduce malloc_conceal() and calloc_conceal(). Similar to their
  counterparts but return memory in pages marked MAP_CONCEAL, and on free()
  freezero() is actually called. (otto, 2019-05-10, -196/+193)
* Move default number of pools in the multi-threaded case to 8. Various
  tests by me and others indicate that it is the optimum.
  (otto, 2019-01-10, -2/+2)
* Make the "not my pool" searching loop a tiny bit smarter, while
  making the number of pools variable. Do not document the malloc conf
  settings atm, don't know yet if they will stay. Thanks to all the testers.
  ok deraadt@ (otto, 2019-01-10, -20/+37)
* Improve speed for the multi-threaded case by reducing lock contention.
  tested by many; ok florian@ (otto, 2018-12-10, -30/+21)
* style; OK otto (florian, 2018-12-09, -3/+3)
* Refactor "find the right pool" code into a function. ok djm@ tb@
  (otto, 2018-11-27, -65/+34)
* Introducing malloc_usable_size() was a mistake. While some other
  libs have it, it is a function that is considered harmful, so: delete
  malloc_usable_size(). It is a function that blurs the line between malloc
  managed memory and application managed memory and exposes some of the
  internal workings of malloc. If an application relies on that, it is likely
  to break using another implementation of malloc. If you want usable size x,
  just allocate x bytes. ok deraadt@ and other devs
  (otto, 2018-11-21, -78/+1)
* Fix compilation on alpha, where DEF_WEAK() really must be paired with
  PROTO_NORMAL(). Problem noted by deraadt@ (guenther, 2018-11-19, -2/+1)
* Implement malloc_usable_size(); ok millert@ deraadt@ and jmc@ for the
  man page (otto, 2018-11-18, -1/+79)
* Use the new vm.malloc_conf sysctl; ok millert@ deraadt@
  (otto, 2018-11-06, -6/+11)
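  A hedged sketch of reading the vm.malloc_conf sysctl; the CTL_VM and
  VM_MALLOC_CONF mib names follow the usual sysctl naming but should be
  treated as assumptions, not a verified excerpt of malloc.c:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stddef.h>

    static size_t
    read_malloc_conf(char *buf, size_t buflen)
    {
        int mib[2] = { CTL_VM, VM_MALLOC_CONF };
        size_t len = buflen;

        if (sysctl(mib, 2, buf, &len, NULL, 0) == -1)
            return 0;
        return len;
    }
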
* Implement C11's aligned_alloc(3). ok guenther@ (otto, 2018-11-05, -1/+43)
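  A short usage example for C11 aligned_alloc(3); per the standard, the
  requested size should be a multiple of the alignment:

    #include <stdlib.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* 64-byte alignment; 4096 is a multiple of 64 as C11 requires */
        void *p = aligned_alloc(64, 4096);

        if (p == NULL)
            return 1;
        printf("aligned allocation at %p\n", p);
        free(p);
        return 0;
    }
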
* sys/uio.h is not used anymore (otto, 2018-04-07, -3/+2)
* fix MALLOC_STATS; spotted by and ok semarie@ (otto, 2018-03-30, -1/+5)