path: root/src/lib/libc/stdlib/malloc.c
Commit log (message, author, date, files changed, -deleted/+added lines):
* Dump (leak) info using utrace(2) and compile the code always in,
  except for bootblocks. This way we have built-in leak detection always
  (if enabled by malloc flags). See man pages for details.
  (otto, 2023-04-16, 1 file, -141/+182)
* Introduce variation in location of junked bytes; ok tb@
  (otto, 2023-04-05, 1 file, -3/+8)
* Check all chunks in the delayed free list for write-after-free.
  Should catch more of them, and closer (in time) to the WAF. ok tb@
  (otto, 2023-04-01, 1 file, -5/+21)
* Change malloc chunk sizes to be fine-grained.
  The basic idea is simple: one of the reasons the recent sshd bug is
  potentially exploitable is that an (erroneously) freed malloc chunk gets
  re-used in a different role. malloc has power-of-two chunk sizes, so one
  page of chunks holds many different types of allocations. Userland malloc
  has no knowledge of types; we only know about sizes. So I changed that to
  use finer-grained chunk sizes. This has some performance impact, as we
  need to allocate chunk pages in more cases. Gain it back by allocating
  chunk_info pages in a bundle, and by using fewer buckets if malloc option
  S is not set. The chunk sizes used are 16, 32, 48, 64, 80, 96, 112, 128,
  160, 192, 224, 256, 320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536,
  1792, 2048 (and a few more for sparc64 with its 8k pages and loongson
  with its 16k pages). If malloc option S (or rather cache size 0) is used,
  we use strict multiples of 16 as chunk sizes, to get as many buckets as
  possible. ssh(d) enables malloc option S; in general, security-sensitive
  programs should. See the find_bucket() and bin_of() functions. Thanks to
  Tony Finch for pointing me to code to compute nice bucket sizes. ok tb@
  (otto, 2023-03-25, 1 file, -102/+142)
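  As a rough illustration of the bucketing described above (a sketch, not
  the actual find_bucket()/bin_of() implementation), a request size can be
  mapped to the smallest listed chunk size that fits it:

  #include <stddef.h>

  static const size_t buckets[] = {
          16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256,
          320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048
  };

  /* Smallest bucket that fits sz, or 0 if sz needs a page-sized allocation. */
  static size_t
  chunk_bucket(size_t sz)
  {
          size_t i;

          for (i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++)
                  if (sz <= buckets[i])
                          return buckets[i];
          return 0;
  }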
* There is no reason to-be-cleared chunks cannot participate in delayed
  freeing; ok tb@
  (otto, 2023-02-27, 1 file, -27/+23)
* Change the way malloc_init() works so that the main data structures
  can be made immutable to provide extra protection. Also init pools
  on-demand: only pools that are actually used are initialized.
  Tested by many
  (otto, 2022-12-27, 1 file, -65/+66)
* put the malloc_readonly struct into the "openbsd.mutable" section, so
  that the kernel and ld.so will know not to mark it immutable. malloc
  handles the read/write transitions by itself.
  (deraadt, 2022-10-14, 1 file, -2/+3)
* To figure out whether a large allocation can be grown into the
  following page(s) we've been first mquery()ing for it, mmap()ing w/o
  MAP_FIXED if available, and then munmap()ing if there was a race.
  Instead, just try it directly with mmap(MAP_FIXED | __MAP_NOREPLACE).
  Tested in snaps for weeks. ok deraadt@
  (guenther, 2022-06-30, 1 file, -12/+2)
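  A minimal sketch of the new approach (assuming page-aligned sizes; not
  the actual malloc.c code): ask for the pages directly behind the existing
  region, and let OpenBSD's __MAP_NOREPLACE make mmap() fail instead of
  clobbering whatever may already be mapped there:

  #include <sys/mman.h>
  #include <stddef.h>

  /*
   * Try to grow an anonymous region in place; 0 on success, -1 if the
   * address range behind it is already taken.
   */
  static int
  grow_in_place(void *region, size_t oldsz, size_t newsz)
  {
          void *hint = (char *)region + oldsz;
          void *p;

          p = mmap(hint, newsz - oldsz, PROT_READ | PROT_WRITE,
              MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREPLACE, -1, 0);
          return p == hint ? 0 : -1;
  }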
* Currently malloc caches a number of freed regions up to 128k
  in size. This cache is indexed by size (in # of pages), so it is very
  quick to check. Some programs allocate and deallocate larger allocations
  in a frantic way. Accommodate those programs by also keeping a cache of
  regions between 128k and 2M, in a cache of variable-sized regions.
  Tested by many in snaps; ok deraadt@
  (otto, 2022-02-26, 1 file, -33/+160)
* Switch two calls from memset() to explicit_bzero().
  This matches the documented behavior more obviously and ensures that
  these aren't optimized away, although this is unlikely.
  Discussed with deraadt and otto
  (tb, 2021-09-19, 1 file, -3/+3)
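  The difference in a nutshell (illustration only, not the changed call
  sites): a dead-store memset() may be optimized away, explicit_bzero(3)
  may not:

  #include <string.h>

  static void
  wipe(char *secret, size_t len)
  {
          explicit_bzero(secret, len);    /* memset(secret, 0, len) could be elided here */
  }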
* Make MALLOC_STATS compile again; noted by Omar Polo and Joe Nelson
  (otto, 2021-07-23, 1 file, -2/+2)
* An extra internal consistency check and a missing stats adjustment. ok tb@
  (otto, 2021-04-09, 1 file, -1/+4)
* Change the implementation of the malloc cache to keep lists of
  regions of a given size. In snaps for a while, committing since no
  issues were reported and a wider audience is good. ok deraadt@
  (otto, 2021-03-09, 1 file, -152/+118)
* - Make use of the fact that we know how the chunks are aligned, and
    write 8 bytes at a time using a uint64_t pointer. For an allocation,
    a max of 4 such uint64_t's are written, spread over the allocation.
    For page-sized and larger allocations, the first page is junked in
    such a way.
  - Delayed free of a small chunk checks in the corresponding way.
  - Pages ending up in the cache are validated upon unmapping or re-use.
  In snaps for a while
  (otto, 2021-02-25, 1 file, -46/+80)
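  A rough sketch of the junking scheme described above (the helper names
  and the junk pattern are illustrative, not the malloc.c ones): write a
  handful of aligned 8-byte words spread over the chunk, and verify them
  later to catch writes after free:

  #include <stdint.h>
  #include <stdlib.h>

  #define JUNK64  0xdfdfdfdfdfdfdfdfULL   /* assumed junk pattern */

  static void
  junk_chunk(void *p, size_t sz)
  {
          uint64_t *q = p;                /* chunks are suitably aligned */
          size_t n = sz / sizeof(*q), step = n > 4 ? n / 4 : 1, i;

          for (i = 0; i < n; i += step)
                  q[i] = JUNK64;
  }

  static void
  validate_chunk(void *p, size_t sz)
  {
          uint64_t *q = p;
          size_t n = sz / sizeof(*q), step = n > 4 ? n / 4 : 1, i;

          for (i = 0; i < n; i += step)
                  if (q[i] != JUNK64)
                          abort();        /* write-after-free detected */
  }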
* mapalign() only handles allocations >= a page; problem found by and ok semarie@
  (otto, 2020-11-23, 1 file, -1/+3)
* make fixed-sized fixed-value mib[] arrays be const
  ok guenther tb millert
  (deraadt, 2020-10-12, 1 file, -4/+3)
* As noted by tb@, the previous commit only removed an unused function.
  So redo the previous commit properly: Use random value for canary bytes;
  ok tb@.
  (otto, 2020-10-09, 1 file, -4/+9)
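  A sketch of the idea (names are illustrative, not the malloc.c ones):
  pick a random canary byte once with arc4random_buf(3), fill the slack
  behind each allocation with it, and check it on free:

  #include <stdlib.h>
  #include <string.h>

  static unsigned char canary_byte;

  static void
  canary_init(void)
  {
          arc4random_buf(&canary_byte, sizeof(canary_byte));
  }

  static void
  canary_fill(unsigned char *end, size_t len)
  {
          memset(end, canary_byte, len);
  }

  static int
  canary_ok(const unsigned char *end, size_t len)
  {
          size_t i;

          for (i = 0; i < len; i++)
                  if (end[i] != canary_byte)
                          return 0;       /* overflow past the requested size */
          return 1;
  }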
* Use random value for canary bytes; ok tb@
  (otto, 2020-10-06, 1 file, -23/+1)
* For page-sized and larger allocations do not put the pages we're
  shaving off into the cache but unmap them. Pages in the cache get
  re-used and then a future grow of the first allocation will be hampered.
  Also make realloc a no-op for small shrinkage. ok deraadt@
  (otto, 2020-09-06, 1 file, -21/+18)
* When system calls indicate an error they return -1, not some arbitrary
  value < 0. errno is only updated in this case. Change all (most?)
  callers of syscalls to follow this better, and let's see if this
  strictness helps us in the future.
  (deraadt, 2019-06-28, 1 file, -2/+2)
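  In practice the rule looks like this (illustration, not a diff from
  malloc.c): compare syscall results against -1 exactly, and consult errno
  only in that case:

  #include <unistd.h>
  #include <errno.h>

  static int
  write_all(int fd, const char *buf, size_t len)
  {
          size_t off = 0;
          ssize_t n;

          while (off < len) {
                  n = write(fd, buf + off, len - off);
                  if (n == -1) {          /* not "n < 0" */
                          if (errno == EINTR)
                                  continue;
                          return -1;
                  }
                  off += n;
          }
          return 0;
  }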
* Only override size of chunk if we're not given the actual length.
  Fixes malloc_conceal...freezero with malloc options C and/or G.
  (otto, 2019-05-23, 1 file, -2/+3)
* Introduce malloc_conceal() and calloc_conceal(). Similar to their
  counterparts, but return memory in pages marked MAP_CONCEAL, and on
  free() freezero() is actually called.
  (otto, 2019-05-10, 1 file, -196/+193)
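  Usage sketch for the new interfaces (malloc_conceal(3) and freezero(3)
  are OpenBSD extensions; the function and variable names here are
  illustrative):

  #include <stdlib.h>
  #include <string.h>

  int
  handle_secret(const char *passphrase)
  {
          size_t len = strlen(passphrase) + 1;
          char *copy = malloc_conceal(len);       /* pages marked MAP_CONCEAL */

          if (copy == NULL)
                  return -1;
          memcpy(copy, passphrase, len);
          /* ... use copy ... */
          freezero(copy, len);                    /* zeroes before freeing */
          return 0;
  }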
* Move default number of pools in the multi-threaded case to 8. Various
  tests by me and others indicate that it is the optimum.
  (otto, 2019-01-10, 1 file, -2/+2)
* Make the "not my pool" searching loop a tiny bit smarter, whileotto2019-01-101-20/+37
| | | | | | making the number of pools variable. Do not document the malloc conf settings atm, don't know yet if they will stay. Thanks to all the testers. ok deraadt@
* Improve speed for the multi-threaded case by reducing lock contention.
  tested by many; ok florian@
  (otto, 2018-12-10, 1 file, -30/+21)
* style; OK otto
  (florian, 2018-12-09, 1 file, -3/+3)
* Refactor "find the right pool" code into a function. ok djm@ tb@otto2018-11-271-65/+34
|
* Introducing malloc_usable_size() was a mistake. While some other
  libs have it, it is a function that is considered harmful, so: Delete
  malloc_usable_size(). It is a function that blurs the line between
  malloc-managed memory and application-managed memory and exposes some
  of the internal workings of malloc. If an application relies on that,
  it is likely to break using another implementation of malloc. If you
  want usable size x, just allocate x bytes. ok deraadt@ and other devs
  (otto, 2018-11-21, 1 file, -78/+1)
* Fix compilation on alpha, where DEF_WEAK() really must be paired with
  PROTO_NORMAL(). Problem noted by deraadt@
  (guenther, 2018-11-19, 1 file, -2/+1)
* Implement malloc_usable_size(); ok millert@ deraadt@ and jmc@ for the man page
  (otto, 2018-11-18, 1 file, -1/+79)
* Use the new vm.malloc_conf sysctl; ok millert@ deraadt@
  (otto, 2018-11-06, 1 file, -6/+11)
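  A sketch of how the global options string can be read through that
  sysctl (assuming the VM_MALLOC_CONF mib name; not the malloc.c init
  code):

  #include <sys/types.h>
  #include <sys/sysctl.h>
  #include <stdio.h>

  int
  main(void)
  {
          const int mib[2] = { CTL_VM, VM_MALLOC_CONF };
          char buf[16];
          size_t len = sizeof(buf);

          if (sysctl(mib, 2, buf, &len, NULL, 0) == -1)
                  return 1;
          printf("vm.malloc_conf=%s\n", buf);
          return 0;
  }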
* Implement C11's aligned_alloc(3). ok guenther@
  (otto, 2018-11-05, 1 file, -1/+43)
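  Usage sketch for aligned_alloc(3); in C11 the size must be a multiple of
  the alignment:

  #include <stdlib.h>

  int
  main(void)
  {
          void *p = aligned_alloc(64, 4096);      /* 64-byte aligned, 4096 is a multiple of 64 */

          if (p == NULL)
                  return 1;
          free(p);
          return 0;
  }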
* sys/uio.h is not used anymore
  (otto, 2018-04-07, 1 file, -3/+2)
* fix MALLOC_STATS; spotted by and ok semarie@
  (otto, 2018-03-30, 1 file, -1/+5)
* use _ALIGN(), which is uhm a bit OpenBSD-specific, but it means we
  don't need to use sys/param.h at all; guess which one I believe is
  greater namespace pollution. ok otto
  (deraadt, 2018-03-06, 1 file, -3/+2)
* Use _MAX_PAGE_SHIFT, rather than #ifdef mips64
  ok guenther kettenis
  (deraadt, 2018-03-05, 1 file, -6/+2)
* use consistent style for the for loop in unmap(), no functional change
  (otto, 2018-02-07, 1 file, -4/+2)
* keep in sync with ld.so malloc.c
  (otto, 2018-01-30, 1 file, -2/+3)
* - An error in the multithreaded case could print the wrong function name
  - Start with a full page of struct region_info's
  - Save an mprotect() in the init code: allocate 3 pages with PROT_NONE
    and make the middle page r/w, instead of an r/w allocation and two
    calls to make the guard pages PROT_NONE
  (otto, 2018-01-28, 1 file, -12/+23)
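  A sketch of the single-mprotect() layout from the last item above (not
  the actual init code): map three pages with no access and open up only
  the middle one, leaving a guard page on either side:

  #include <sys/mman.h>
  #include <unistd.h>
  #include <stddef.h>

  static void *
  alloc_guarded_page(void)
  {
          size_t pgsz = (size_t)getpagesize();
          char *p = mmap(NULL, 3 * pgsz, PROT_NONE,
              MAP_ANON | MAP_PRIVATE, -1, 0);

          if (p == MAP_FAILED)
                  return NULL;
          if (mprotect(p + pgsz, pgsz, PROT_READ | PROT_WRITE) == -1) {
                  munmap(p, 3 * pgsz);
                  return NULL;
          }
          return p + pgsz;        /* usable page, guarded on both sides */
  }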
* - do not junk pages returned by free_bytes(); all freed chunks are
    already junked
  - freezero(): only clear the requested size
  (otto, 2018-01-26, 1 file, -19/+19)
* Zap the rotor, it was a wrong idea. Cluebat applied by kshe, who
  also came up with this diff. Simple, no bias, and benchmarks show the
  extra random calls disappear in the measurement noise.
  (otto, 2018-01-18, 1 file, -6/+3)
* Move to ffs(3) for bitmask scanning. I played with this earlier,
  but at that time ffs function calls were generated instead of the
  compiler inlining the code. Now that ffs is marked protected in libc,
  this is handled better. Thanks to kshe who prompted me to look at this
  again.
  (otto, 2018-01-18, 1 file, -21/+11)
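  The bitmask scan in miniature (illustration, not the chunk bitmap code):
  ffs(3) returns the 1-based position of the lowest set bit, or 0 if none
  is set:

  #include <strings.h>

  /* Index of the first set bit (e.g. the first free slot), or -1 if none. */
  static int
  first_set(unsigned int bits)
  {
          int bit = ffs((int)bits);

          return bit == 0 ? -1 : bit - 1;
  }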
* optimization and some cleanup; mostly from kshe (except the unmap() part)
  (otto, 2018-01-08, 1 file, -67/+51)
* Only init chunk_info once, plus some moving of code to group related
  functions.
  (otto, 2018-01-01, 1 file, -273/+267)
* step one in avoiding unnecessary init of chunk_info;
  some cleanup; tested by sthen@ on a ports build
  (otto, 2017-12-27, 1 file, -65/+81)
* 's' should include 'f'; from Jacqueline Jolicoeur
  (otto, 2017-11-02, 1 file, -2/+2)
* Restore a return that was inadvertently removed from freezero() in
  r1.234, which results in an internal double free when internal
  functions are not in use. ok otto@
  (jsing, 2017-10-19, 1 file, -1/+2)
* do not return f() where f is a void function; loop var type fix
  (otto, 2017-10-05, 1 file, -4/+5)
* Use dprintf instead of snprintf/write
  (otto, 2017-10-05, 1 file, -82/+36)
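  The simplification in miniature (illustration only): dprintf(3) formats
  and writes to a file descriptor in one call, replacing an snprintf()
  into a buffer followed by write():

  #include <stdio.h>

  static void
  report(int fd, const char *name, unsigned long count)
  {
          /* was: snprintf(buf, sizeof(buf), ...); write(fd, buf, strlen(buf)); */
          dprintf(fd, "%s: %lu\n", name, count);
  }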
* Make delayed free non-optional and make F do an extensive double-free
  check.
  ok tb@ tedu@
  (otto, 2017-09-23, 1 file, -21/+26)