malloc options"
Now only enabled for platforms where it's known to work, and written
as inline functions instead of a macro.

__builtin_return_address(a) with a != 0.

ok deraadt@

unfortunately gcc3 does not have __builtin_clz().
ok miod@ otto@
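
A portable fallback has the same semantics as __builtin_clz()
(undefined for 0); a sketch of one way to write it, not necessarily
the committed code:

    #include <stdint.h>

    /* count leading zeros of a nonzero 32-bit value */
    static unsigned int
    clz32(uint32_t x)
    {
        unsigned int n = 0;

        while ((x & 0x80000000U) == 0) {
            n++;
            x <<= 1;
        }
        return n;
    }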

On free, chunks (the pieces of a page used for smaller allocations)
are junked and then validated after they leave the delayed free
list. So after free, a chunk always contains junk bytes. This means
that if we start with the right contents for a new page of chunks,
we can *validate* instead of *write* junk bytes when (re)-using a
chunk.
With this, we can detect write-after-free when a chunk is recycled,
not just when a chunk is in the delayed free list. We do a little
bit more work on initial allocation of a page of chunks and when
re-using (as we validate now even on junk level 1).
Also: some extra consistency checks for recallocaray(3) and fixes
in error messages to make them more consistent, with man page bits.
Plus regress additions.
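
A minimal sketch of the check described above, with an assumed junk
pattern and invented names (the real malloc junking and error
reporting are more involved):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define JUNK 0xdf        /* assumed freed-chunk fill pattern */

    /* on free: fill the chunk with junk bytes */
    static void
    junk_fill(void *p, size_t sz)
    {
        memset(p, JUNK, sz);
    }

    /* on reuse: a freed chunk must still be all junk bytes; any
     * other byte means something wrote to it after free */
    static void
    junk_validate(const void *p, size_t sz)
    {
        const unsigned char *b = p;
        size_t i;

        for (i = 0; i < sz; i++) {
            if (b[i] != JUNK) {
                fprintf(stderr, "chunk write after free\n");
                abort();
            }
        }
    }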

ok guenther@

future, inadvertent PLT entries. Move the __getcwd and __realpath
declarations to hidden/{stdlib,unistd}.h to consolidate and remove
duplication.
ok tb@ otto@ deraadt@

unlock-lock dance it serves no real purpose any more. Confirmed
by a small performance increase in tests. ok tb@

ok otto@

(sorry, otto, for not spotting in the updated diff)

except for bootblocks. This way we have built-in leak detection
always (if enabled by malloc flags). See man pages for details.

Should catch more of them and closer (in time) to the write-after-free
(WAF). ok tb@

The basic idea is simple: one of the reasons the recent sshd bug
is potentially exploitable is that an (erroneously) freed malloc
chunk gets re-used in a different role. malloc has power of two
chunk sizes and so one page of chunks holds many different types
of allocations. Userland malloc has no knowledge of types, we only
know about sizes. So I changed that to use finer-grained chunk
sizes.
This has some performance impact as we need to allocate chunk pages
in more cases. Gain it back by allocating chunk_info pages in a
bundle, and using fewer buckets if !malloc option S. The chunk sizes
used are 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320,
384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048 (and a
few more for sparc64 with its 8k sized pages and loongson with its
16k pages).
If malloc option S (or rather cache size 0) is used, we use strict
multiples of 16 as chunk sizes, to get as many buckets as possible.
ssh(d) enables malloc option S; in general, security sensitive
programs should.
See the find_bucket() and bin_of() functions. Thanks to Tony Finch
for pointing me to code to compute nice bucket sizes.
ok tb@
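
Illustrative only: the sizes above form a table, and find_bucket()
returns the smallest bucket that fits. The real find_bucket()/bin_of()
compute this without a linear scan:

    #include <stddef.h>

    /* chunk sizes from the commit message, for 4k pages */
    static const size_t bucket_size[] = {
        16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256,
        320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536,
        1792, 2048
    };

    /* smallest bucket that fits, or -1 if not a chunk allocation */
    static int
    find_bucket_slow(size_t size)
    {
        size_t i;

        for (i = 0; i < sizeof(bucket_size) / sizeof(bucket_size[0]); i++)
            if (size <= bucket_size[i])
                return (int)i;
        return -1;
    }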

Originally from djm@. OK deraadt@ florian@ bluhm@

freeing; ok tb@

can be made immutable to provide extra protection. Also init pools
on-demand: only pools that are actually used are initialized.
Tested by many

any changes not taken are noted on tech, but chiefly here I did not take the
cancelation -> cancellation changes;

uppercase.
While here use the correct idiom of casting to unsigned char.
OK millert, farewell to ultrix deraadt
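
The idiom in question: the ctype(3) functions take an int that must
be the value of an unsigned char (or EOF), so a plain char, which may
be signed, is cast first. For example:

    #include <ctype.h>

    static int
    lower(char c)
    {
        /* passing a negative char to tolower(3) is undefined */
        return tolower((unsigned char)c);
    }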

the lock, when it is correctly initialized after the lock
ok otto millert

that the kernel and ld.so will know not to mark it immutable. malloc
handles the read/write transitions by itself.

from josiah frentsos, tweaked by schwarze
ok schwarze

inline use was removed in 1998

Both FreeBSD and NetBSD have this behavior. OK deraadt@

ok schwarze@

https://minnie.tuhs.org/pipermail/tuhs/2017-August/011807.html
ok schwarze@

ok schwarze@

instance would be rekeyed every 1.6MB. This makes it happen at a
random point somewhere in the 1-2MB range.
Feedback deraadt@ visa@, ok tb@ visa@
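
A sketch of the idea; REKEY_BASE and the function name are
assumptions here, and the real arc4random code differs in detail:

    #include <stdint.h>
    #include <stdlib.h>

    #define REKEY_BASE (1024 * 1024)    /* 1MB */

    /* bytes to output before the next rekey: a random point in
     * the 1-2MB range instead of a fixed threshold */
    static uint64_t
    next_rekey_limit(void)
    {
        return REKEY_BASE + arc4random_uniform(REKEY_BASE);
    }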

UNIX System V mention it. Only do so in manual pages with a
pre-existing HISTORY section.
Prompted by the comparison of System V and BSD commands and interfaces
in Sun's "System V Enhancements Overview" document.
checked against manuals on bitsavers, TUHS archive and CSRG archive CDs
ok jmc@ schwarze@

following page(s) we've been first mquery()ing for it, mmap()ing
w/o MAP_FIXED if available, and then munmap()ing if there was a
race. Instead, just try it directly with
mmap(MAP_FIXED | __MAP_NOREPLACE)
tested in snaps for weeks
ok deraadt@
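
A sketch of the new approach, assuming OpenBSD's __MAP_NOREPLACE:
the kernel fails the mapping instead of replacing an existing one,
so losing a race needs no munmap() cleanup:

    #include <sys/mman.h>
    #include <stddef.h>

    /* try to grow a region in place by mapping the pages directly
     * after it; NULL if something already lives there */
    static void *
    map_after(void *end, size_t len)
    {
        void *p;

        p = mmap(end, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREPLACE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }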

This got broken when system.c was converted from signal(3) to sigaction(2).
Also add SIGINT and SIGQUIT to the set of blocked signals and unblock
them in the parent after the signal handlers are installed.
Based on a diff from Leon Fischer. OK deraadt@
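
A simplified sketch of the ordering; the real system(3) forks and
waits in between, and error handling is omitted:

    #include <signal.h>
    #include <string.h>

    static void
    block_install_unblock(void)
    {
        struct sigaction sa, oint, oquit;
        sigset_t blocked, omask;

        /* block first, so nothing can be delivered before the
         * handlers are installed */
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGCHLD);
        sigaddset(&blocked, SIGINT);
        sigaddset(&blocked, SIGQUIT);
        sigprocmask(SIG_BLOCK, &blocked, &omask);

        memset(&sa, 0, sizeof(sa));
        sigemptyset(&sa.sa_mask);
        sa.sa_handler = SIG_IGN;
        sigaction(SIGINT, &sa, &oint);
        sigaction(SIGQUIT, &sa, &oquit);

        /* ... fork(), run the command, wait ... */

        /* handlers are in place: safe to restore the mask */
        sigprocmask(SIG_SETMASK, &omask, NULL);
    }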

Use a temporary variable to store the number of bytes to be copied
(size_t) and also use it as the memcpy(3) length. Previously we
copied "size" bytes instead of just the necessary number.
OK claudio@ tb@
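
The pattern, with illustrative names:

    #include <string.h>

    /* compute the byte count once, in a size_t, and use that same
     * variable as the memcpy(3) length */
    static size_t
    copy_out(void *dst, size_t dstlen, const void *src, size_t srclen)
    {
        size_t n = srclen < dstlen ? srclen : dstlen;

        memcpy(dst, src, n);
        return n;
    }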

jmc@ dislikes a comma before "then" in a conditional, so leave those
untouched.
ok jmc@

ok jmc@ schwarze@

instances in the tree. ok deraadt@

in size. This cache is indexed by size (in # of pages), so it is
very quick to check. Some programs allocate and deallocate larger
allocations in a frantic way. Accommodate those programs by also
keeping a cache of variable sized regions between 128k and 2M.
Tested by many in snaps; ok deraadt@
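
Illustrative shape of the size-indexed part; names and layout are
invented, and the real code keeps several regions per size plus the
separate variable sized cache for the 128k-2M regions:

    #include <stddef.h>

    #define MAX_CACHED_PAGES 32    /* assumed: 32 4k pages = 128k */

    static void *cached[MAX_CACHED_PAGES];    /* slot per size in pages */

    /* the check is a single array access, indexed by size in pages */
    static void *
    cache_take(size_t psize)
    {
        void *r;

        if (psize >= MAX_CACHED_PAGES || cached[psize] == NULL)
            return NULL;
        r = cached[psize];
        cached[psize] = NULL;
        return r;
    }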

ok jmc@ sthen@ millert@

from uwe@netbsd -r1.22
ok millert

lsearch(3) is really just lfind(3) with an additional branch to append
the key if lfind(3) fails. If we get rid of the underlying
linear_base() function and move the search portion into lfind(3) and
the key-copying portion into lsearch(3) we get smaller and simpler
code.
Misc. notes:
- We do not need to keep the historical comment about errno. lsearch(3)
is pure computation and does not set errno. That's really all you
need to know. The specification reserves no errors, either.
- We are using lfind(3) internally now, so it switches from
PROTO_DEPRECATED to PROTO_NORMAL in hidden/search.h and needs
DEF_WEAK in stdlib/lsearch.c.
With advice from guenther@ on symbol housekeeping in libc.
Thread: https://marc.info/?l=openbsd-tech&m=163885187632449&w=2
ok millert@
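
The shape of the refactor, simplified from the committed code:

    #include <search.h>
    #include <string.h>

    /* lsearch(3) is lfind(3) plus appending the key on failure */
    static void *
    lsearch_sketch(const void *key, void *base, size_t *nelp,
        size_t width, int (*compar)(const void *, const void *))
    {
        void *elem = lfind(key, base, nelp, width, compar);

        if (elem != NULL)
            return elem;
        elem = (char *)base + *nelp * width;
        memmove(elem, key, width);    /* see the memmove(3) commit below */
        (*nelp)++;
        return elem;
    }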

If the key overlaps the end of the array, memcpy(3) mutates the key
and copies a corrupted value into the end of the array.
If we use memmove(3) instead we at least end up with a clean copy of
the key at the end of the array. This is closer to the intended
behavior.
With input from millert@ and deraadt@.
Thread: https://marc.info/?l=openbsd-tech&m=163880307403606&w=2
ok millert@
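
A self-contained demonstration of the overlap case, with made-up
values:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char buf[8] = "abcdef";
        char *end = buf + 4;    /* pretend the array ends here */
        char *key = buf + 2;    /* key overlaps [end, end + 4) */

        /* with memcpy(3) this overlap is undefined and may corrupt
         * the key mid-copy; memmove(3) appends a clean copy */
        memmove(end, key, 4);
        printf("%.8s\n", buf);    /* abcdcdef */
        return 0;
    }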

The "lim" variable needs to be a size_t to match nmemb, otherwise we
get undefined behavior when nmemb exceeds INT_MAX.
Prompted by a blog post by Joshua Bloch:
https://ai.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html
Fixed by Chris Torek a long time ago:
https://svnweb.freebsd.org/csrg/lib/libc/stdlib/bsearch.c?revision=51742&view=markup
ok millert@
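
The shape of the fix, simplified from the Torek code linked above;
"lim" is a size_t like nmemb:

    #include <stddef.h>

    static void *
    bsearch_sketch(const void *key, const void *base, size_t nmemb,
        size_t size, int (*compar)(const void *, const void *))
    {
        const char *b = base;
        size_t lim;    /* an int here breaks once nmemb > INT_MAX */
        int cmp;
        const void *p;

        for (lim = nmemb; lim != 0; lim >>= 1) {
            p = b + (lim >> 1) * size;
            cmp = compar(key, p);
            if (cmp == 0)
                return (void *)p;
            if (cmp > 0) {    /* key > p: move right */
                b = (const char *)p + size;
                lim--;
            }    /* else move left */
        }
        return NULL;
    }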

to 3-term BSD license.

ok florian@