| Commit message | Author | Age | Files | Lines |
| |
page size, rather than relying upon mprotect to round up to the actual mmu
page size.
This repairs malloc operation on systems where the malloc page size
(1 << _MAX_PAGE_SHIFT) is larger than the mmu page size.
ok otto@
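
The fix amounts to rounding region sizes up to malloc's own page size explicitly. A minimal sketch of such rounding; the constants are hypothetical stand-ins, not the libc ones:

    #include <stdio.h>
    #include <stddef.h>

    /*
     * Hypothetical stand-ins: malloc's compile-time page size
     * (1 << _MAX_PAGE_SHIFT in the message) can be larger than the
     * mmu page size, so malloc must round sizes itself instead of
     * relying on mprotect's kernel-side rounding.
     */
    #define MALLOC_PAGESHIFT 14
    #define MALLOC_PAGESIZE  ((size_t)1 << MALLOC_PAGESHIFT)
    #define MALLOC_PAGEMASK  (MALLOC_PAGESIZE - 1)

    /* Round a length up to a whole number of malloc pages. */
    static size_t
    pageround(size_t sz)
    {
        return (sz + MALLOC_PAGEMASK) & ~MALLOC_PAGEMASK;
    }

    int
    main(void)
    {
        printf("%zu -> %zu\n", (size_t)5000, pageround(5000));
        return 0;
    }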
| |
bit of optimization; ok tb@ asou@
| |
malloc option D (aka 1), 2, 3 or 4. No performance impact if not
used. ok asou@
| |
not real problems)
| |
the 0x0 call sites for leak reports. Also display more info on
detected writes to free chunks: print the info about where the
chunk was allocated, and do the same for the preceding chunk.
ok asou@
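
A hedged sketch of the call-site bookkeeping described above; the struct and function names are invented for illustration, only __builtin_return_address(0) is the real mechanism:

    #include <stdio.h>

    /*
     * Invented bookkeeping: remember where an allocation was made so
     * the leak report can print the call site, including the 0x0
     * (unknown) ones instead of suppressing them.
     */
    struct chunk_meta {
        void *alloc_site;
    };

    static void
    record_site(struct chunk_meta *m, void *caller)
    {
        m->alloc_site = caller;    /* may legitimately be 0x0 */
    }

    static void
    leak_report(const struct chunk_meta *m)
    {
        printf("leaked chunk, allocated at %p\n", m->alloc_site);
    }

    int
    main(void)
    {
        struct chunk_meta m;

        record_site(&m, __builtin_return_address(0));
        leak_report(&m);
        return 0;
    }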
| |
ok otto.
| |
malloc options"
Now only enabled for platforms where it's known to work, and
written as inline functions instead of macros.
| |
__builtin_return_address(a) with a != 0.
| |
ok deraadt@
| |
unfortunately gcc3 does not have __builtin_clz().
ok miod@ otto@
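
Where __builtin_clz() is missing, a portable shift loop gives the same answer for nonzero values. A minimal sketch, assuming a 32-bit unsigned int:

    #include <stdio.h>

    /*
     * Portable count-leading-zeros for a nonzero 32-bit value, usable
     * where __builtin_clz() does not exist (e.g. gcc3).
     */
    static int
    clz32(unsigned int x)
    {
        int n = 0;

        while (!(x & 0x80000000U)) {
            x <<= 1;
            n++;
        }
        return n;
    }

    int
    main(void)
    {
        printf("clz(1) = %d\n", clz32(1));                    /* 31 */
        printf("clz(0x80000000) = %d\n", clz32(0x80000000U)); /* 0 */
        return 0;
    }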
| |
On free, chunks (the pieces of a page used for smaller allocations)
are junked and then validated after they leave the delayed free
list. So after free, a chunk always contains junk bytes. This means
that if we start with the right contents for a new page of chunks,
we can *validate* instead of *write* junk bytes when (re)-using a
chunk.
With this, we can detect write-after-free when a chunk is recycled,
not just when a chunk is in the delayed free list. We do a little
bit more work on initial allocation of a page of chunks and when
re-using (as we validate now even on junk level 1).
Also: some extra consistency checks for recallocaray(3) and fixes
in error messages to make them more consistent, with man page bits.
Plus regress additions.
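
A minimal sketch of the validate-instead-of-write idea; the fill byte and function names are assumptions, not the libc internals:

    #include <err.h>
    #include <stddef.h>
    #include <string.h>

    #define SOME_JUNK 0xdf    /* assumed fill byte */

    /* Junk a chunk on free (or when a page of chunks is created). */
    static void
    junk_fill(unsigned char *p, size_t sz)
    {
        memset(p, SOME_JUNK, sz);
    }

    /*
     * On (re)use, validate instead of rewriting: any junk byte that
     * changed while the chunk was free is a write-after-free, caught
     * even after the chunk has left the delayed free list.
     */
    static void
    junk_validate(const unsigned char *p, size_t sz)
    {
        size_t i;

        for (i = 0; i < sz; i++)
            if (p[i] != SOME_JUNK)
                errx(1, "chunk modified after free (offset %zu)", i);
    }

    int
    main(void)
    {
        unsigned char chunk[64];

        junk_fill(chunk, sizeof(chunk));
        /* a stray chunk[13] = 0 here would be detected: */
        junk_validate(chunk, sizeof(chunk));
        return 0;
    }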
| |
ok guenther@
| |
unlock-lock dance it serves no real purpose any more. Confirmed
by a small performance increase in tests. ok tb@
| |
ok otto@
| |
except for bootblocks. This way we have built-in leak detection
always (if enabled by malloc flags). See man pages for details.
| |
Should catch more of them and closer (in time) to the WAF. ok tb@
| |
The basic idea is simple: one of the reasons the recent sshd bug
is potentially exploitable is that an (erroneously) freed malloc
chunk gets re-used in a different role. malloc has power of two
chunk sizes and so one page of chunks holds many different types
of allocations. Userland malloc has no knowledge of types, we only
know about sizes. So I changed that to use finer-grained chunk
sizes.
This has some performance impact as we need to allocate chunk pages
in more cases. Gain it back by allocating chunk_info pages in a
bundle, and by using fewer buckets when malloc option S is not
active. The chunk sizes
used are 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320,
384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048 (and a
few more for sparc64 with its 8k sized pages and loongson with its
16k pages).
If malloc option S (or rather cache size 0) is used we use strict
multiple of 16 sized chunks, to get as many buckets as possible.
ssh(d) already enables malloc option S; in general, security
sensitive programs should too.
See the find_bucket() and bin_of() functions. Thanks to Tony Finch
for pointing me to code to compute nice bucket sizes.
ok tb@
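
An illustrative reconstruction of the bucket lookup using the 4k-page size table above; the real find_bucket()/bin_of() avoid the linear scan, this sketch only shows the mapping:

    #include <stdio.h>
    #include <stddef.h>

    /* The 4k-page bucket sizes listed in the message. */
    static const size_t bucket_size[] = {
        16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256,
        320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536,
        1792, 2048,
    };
    #define NBUCKETS (sizeof(bucket_size) / sizeof(bucket_size[0]))

    /*
     * Illustrative stand-in for find_bucket(): smallest bucket that
     * fits the request; -1 means too big for a chunk, so the request
     * is served with whole pages instead.
     */
    static int
    find_bucket(size_t sz)
    {
        size_t i;

        for (i = 0; i < NBUCKETS; i++)
            if (sz <= bucket_size[i])
                return (int)i;
        return -1;
    }

    int
    main(void)
    {
        int b = find_bucket(100);

        printf("100 bytes -> bucket %d (size %zu)\n", b, bucket_size[b]);
        return 0;
    }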
| |
freeing; ok tb@
| |
can be made immutable to provide extra protection. Also init pools
on-demand: only pools that are actually used are initialized.
Tested by many
| |
that the kernel and ld.so will know not to mark it immutable. malloc
handles the read/write transitions by itself.
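
A sketch of the read/write transition technique referred to here: the data stays read-only while idle and is made writable around updates, which is exactly why the region must not be marked immutable. Names and the page size are assumptions:

    #include <sys/mman.h>
    #include <stdlib.h>

    static void *region;
    static size_t region_len = 4096;    /* assumed page size */

    /* Make malloc's bookkeeping briefly writable around an update... */
    static void
    region_rw(void)
    {
        if (mprotect(region, region_len, PROT_READ | PROT_WRITE) == -1)
            abort();
    }

    /*
     * ...and read-only again afterwards. Because the protection keeps
     * changing, the kernel and ld.so must not mark it immutable.
     */
    static void
    region_ro(void)
    {
        if (mprotect(region, region_len, PROT_READ) == -1)
            abort();
    }

    int
    main(void)
    {
        region = mmap(NULL, region_len, PROT_READ,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (region == MAP_FAILED)
            return 1;
        region_rw();
        *(int *)region = 42;    /* update while writable */
        region_ro();
        return 0;
    }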
| |
following page(s) we've been first mquery()ing for it, mmap()ing
w/o MAP_FIXED if available, and then munmap()ing if there was a
race. Instead, just try it directly with
mmap(MAP_FIXED | __MAP_NOREPLACE)
tested in snaps for weeks
ok deraadt@
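
A minimal OpenBSD-only sketch of the new approach; the helper name is invented, __MAP_NOREPLACE is the flag named in the message, and the 4096-byte page size is assumed:

    #include <sys/mman.h>
    #include <stddef.h>

    /*
     * Try to extend a mapping in place: MAP_FIXED places the pages at
     * exactly 'end', and __MAP_NOREPLACE makes the call fail instead
     * of clobbering anything already mapped there (the old race).
     */
    static void *
    map_after(void *end, size_t len)
    {
        void *p;

        p = mmap(end, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREPLACE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    int
    main(void)
    {
        char *base;

        base = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (base == MAP_FAILED)
            return 1;
        /* 4096 assumed as the page size for this sketch. */
        return map_after(base + 4096, 4096) != NULL ? 0 : 1;
    }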
| |
in size. This cache is indexed by size (in # of pages), so it is
very quick to check. Some programs allocate and deallocate larger
allocations in a frantic way. Accommodate those programs by also
keeping a cache of variable sized regions between 128k and 2M.
Tested by many in snaps; ok deraadt@
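
A minimal sketch of a size-indexed page cache like the one described; sizes, depths, and names are assumptions:

    #include <stddef.h>

    #define MALLOC_PAGESIZE 4096    /* assumed */
    #define MAX_CACHE_PAGES 32      /* fixed cache covers <= 32-page regions */
    #define CACHE_DEPTH     4       /* slots kept per size */

    /*
     * Invented layout: cache[n] holds recently freed regions of
     * exactly n+1 pages, so checking for a fit is one array index
     * plus a scan of a handful of slots.
     */
    static void *cache[MAX_CACHE_PAGES][CACHE_DEPTH];

    static void *
    cache_take(size_t npages)
    {
        void *r;
        int i;

        if (npages == 0 || npages > MAX_CACHE_PAGES)
            return NULL;
        for (i = 0; i < CACHE_DEPTH; i++)
            if ((r = cache[npages - 1][i]) != NULL) {
                cache[npages - 1][i] = NULL;
                return r;
            }
        return NULL;
    }

    static int
    cache_put(void *r, size_t npages)
    {
        int i;

        if (npages == 0 || npages > MAX_CACHE_PAGES)
            return 0;    /* caller munmaps instead */
        for (i = 0; i < CACHE_DEPTH; i++)
            if (cache[npages - 1][i] == NULL) {
                cache[npages - 1][i] = r;
                return 1;
            }
        return 0;
    }

    int
    main(void)
    {
        static char region[2 * MALLOC_PAGESIZE];

        cache_put(region, 2);
        return cache_take(2) == region ? 0 : 1;
    }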
| |
This matches the documented behavior more obviously and ensures that
these aren't optimized away, although this is unlikely.
Discussed with deraadt and otto
| |
regions of a given size. In snaps for a while, committing since
no issues were reported and a wider audience is good. ok deraadt@
| |
write 8 bytes at a time by using a uint64_t pointer. For an
allocation a max of 4 such uint64_t's are written, spread over the
allocation. For page sized and larger allocations, the first page
is junked in such a way.
- Delayed free of a small chunk checks in the corresponding way.
- Pages ending up in the cache are validated upon unmapping or re-use.
In snaps for a while
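
A sketch of the spread-junking and its mirror check; the fill pattern and the exact spreading policy are assumptions, the point is that writer and checker agree:

    #include <stdint.h>
    #include <stddef.h>

    #define SOME_JUNK 0xdfdfdfdfdfdfdfdfULL    /* assumed fill pattern */

    /*
     * Write at most four 64-bit junk words, spread over the region.
     * Sketch only: the real code also deals with alignment and with
     * regions smaller than one word.
     */
    static void
    junk_spread(uint64_t *q, size_t sz)
    {
        size_t nwords = sz / sizeof(uint64_t);
        size_t step = (nwords + 3) / 4;    /* caps writes at 4 */
        size_t i;

        for (i = 0; i < nwords; i += step)
            q[i] = SOME_JUNK;
    }

    /* The delayed-free / cache check mirrors the writer exactly. */
    static int
    junk_check(const uint64_t *q, size_t sz)
    {
        size_t nwords = sz / sizeof(uint64_t);
        size_t step = (nwords + 3) / 4;
        size_t i;

        for (i = 0; i < nwords; i += step)
            if (q[i] != SOME_JUNK)
                return 0;    /* write-after-free */
        return 1;
    }

    int
    main(void)
    {
        uint64_t buf[16];

        junk_spread(buf, sizeof(buf));
        return junk_check(buf, sizeof(buf)) ? 0 : 1;
    }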
| |
ok guenther tb millert
| |
So redo previous commit properly:
Use a random value for the canary bytes; ok tb@.
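
A minimal sketch, using arc4random() (present on OpenBSD) to pick the canary fill byte; the surrounding bookkeeping is invented:

    #include <stdlib.h>    /* arc4random() on OpenBSD */
    #include <stddef.h>

    static unsigned char canary_byte;

    /* Pick the canary fill byte randomly once, instead of using a
     * fixed, predictable value. */
    static void
    canary_init(void)
    {
        canary_byte = arc4random() & 0xff;
    }

    /* Canary bytes fill the slack between requested and chunk size;
     * any mismatch found on free is an out-of-bounds write. */
    static int
    canary_ok(const unsigned char *p, size_t len)
    {
        size_t i;

        for (i = 0; i < len; i++)
            if (p[i] != canary_byte)
                return 0;
        return 1;
    }

    int
    main(void)
    {
        unsigned char pad[8];
        size_t i;

        canary_init();
        for (i = 0; i < sizeof(pad); i++)
            pad[i] = canary_byte;
        return canary_ok(pad, sizeof(pad)) ? 0 : 1;
    }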
| |
shaving off into the cache but unmap them. Pages in the cache get
re-used, and then a future grow of the first allocation would be
hampered. Also make realloc a no-op for small shrinkage.
ok deraadt@
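
A sketch of that shrink policy with invented names and an assumed 4096-byte page size: shrinking within the same page count is a no-op, otherwise the tail is munmap()ed rather than cached:

    #include <sys/mman.h>
    #include <stddef.h>

    #define MALLOC_PAGESIZE 4096    /* assumed */

    /*
     * Invented shrink path for a page-sized region: the shaved-off
     * tail pages are unmapped outright, not cached, so cached pages
     * cannot later block regrowth of this region.
     */
    static size_t
    shrink_region(void *base, size_t oldsz, size_t newsz)
    {
        size_t oldpg = (oldsz + MALLOC_PAGESIZE - 1) / MALLOC_PAGESIZE;
        size_t newpg = (newsz + MALLOC_PAGESIZE - 1) / MALLOC_PAGESIZE;

        if (oldpg == newpg)
            return oldsz;    /* small shrinkage: no-op */
        munmap((char *)base + newpg * MALLOC_PAGESIZE,
            (oldpg - newpg) * MALLOC_PAGESIZE);
        return newpg * MALLOC_PAGESIZE;
    }

    int
    main(void)
    {
        void *p = mmap(NULL, 4 * MALLOC_PAGESIZE, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED)
            return 1;
        shrink_region(p, 4 * MALLOC_PAGESIZE, MALLOC_PAGESIZE);
        return 0;
    }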
| |
value < 0. errno is only updated in this case. Change all (most?)
callers of syscalls to follow this better, and let's see if this strictness
helps us in the future.
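
The convention being enforced, in example form: a syscall failed if and only if it returned -1, and errno is meaningful only then:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[64];
        ssize_t n;

        n = read(-1, buf, sizeof(buf));    /* deliberately bad fd */

        /* Correct: the call failed if and only if it returned -1,
         * and errno is meaningful only in that case. */
        if (n == -1)
            fprintf(stderr, "read: %s\n", strerror(errno));

        /* The style being removed: "if (n < 0)", or consulting
         * errno after a successful return, both overclaim. */
        return 0;
    }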
| |
Fixes malloc_conceal...freezero with malloc options C and/or G.
| |
counterparts, but return memory in pages marked MAP_CONCEAL, and
on free(), freezero() is actually called.
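
Usage is the same as malloc()/calloc(); an OpenBSD-only example:

    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        /* Secrets allocated with malloc_conceal() live in MAP_CONCEAL
         * pages (left out of core dumps) and are zeroed on free(). */
        char *key = malloc_conceal(32);

        if (key == NULL)
            return 1;
        memset(key, 0xa5, 32);    /* stand-in for key material */
        free(key);                /* behaves like freezero(key, 32) */
        return 0;
    }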
| |
by me and others indicate that it is the optimum.
| |
making the number of pools variable. Do not document the malloc
conf settings atm, don't know yet if they will stay. Thanks to all
the testers. ok deraadt@
| |
tested by many; ok florian@
| |
libs have it, it is a function that is considered harmful, so:
Delete malloc_usable_size(). It is a function that blurs the line
between malloc managed memory and application managed memory and
exposes some of the internal workings of malloc. If an application
relies on that, it is likely to break using another implementation
of malloc. If you want usable size x, just allocate x bytes. ok
deraadt@ and other devs
| |
PROTO_NORMAL(). Problem noted by deraadt@