feature to terminate the program when out of memory. Application code
should always handle failure of library functions properly. So if you
want your program to terminate, write something like
	p = malloc(...);
	if (p == NULL)
		err(1, NULL);
and don't abuse malloc_options.
Direction suggested by otto@ after anton@ pointed out that this very old
text still used an outdated data type for malloc_options and potentially
failed to define its value at compile time.
OK otto@
These formerly public functions have only ever been called from
EVP_CIPHER_asn1_to_param() and EVP_CIPHER_param_to_asn1(), either
directly if the EVP_CIPH_FLAG_DEFAULT_ASN1 flag is set, or indirectly
when set as the .[gs]et_asn1_parameters() method of the EVP_CIPHER.
This commit removes their use in .[gs]et_asn1_parameters() dating back
to long before the EVP_CIPH_FLAG_DEFAULT_ASN1 flag was introduced in 2010.
This way the only remaining consumer of .[gs]et_asn1_parameters() is RC2.
ok jsing
(comment tweak, no code change)
Remove the UNALIGNED_MEMOPS_ARE_FAST from AES-IGE, which can result in
implementation defined behaviour on i386/amd64. While we could keep this
purely for aligned inputs and outputs, it's probably not that important
and can be redone in a simpler form later if we want to do so.
ok tb@
Discussed with tb@
This makes use of EC_FIELD_ELEMENT to perform fixed width constant
time operations.
Addition and doubling of points makes use of the formulas from
"Complete addition formulas for prime order elliptic curves"
(https://eprint.iacr.org/2015/1060). These are complete and
operate in constant time.
Further work will continue in tree.
ok tb@
Provide EC_FIELD_ELEMENT and EC_FIELD_MODULUS, which allow for operations
on fixed width fields in constant time. These can in turn be used to
implement Elliptic Curve cryptography for prime fields, without needing
to use BN. This will improve the code, reduce timing leaks and enable
further optimisation.
ok beck@ tb@
These implement constant time modular addition, subtraction and
multiplication in the Montgomery domain.
ok tb@
Move bn_add_words() and bn_sub_words() from bn_add.c to bn_add_sub.c.
These have effectively been replaced in the previous rewrites. Remove
the asserts - if bad lengths are passed the results will be incorrect
and things will fail (these should use size_t instead of int, but that
is a problem for another day).
Provide bn_sub_words_borrow(), which computes a subtraction but only
returns the resulting borrow. Provide bn_add_words_masked() and
bn_sub_words_masked(), which perform a masked addition or subtraction.
These can also be used to implement constant time addition and subtraction,
especially for reduction.
ok beck@ tb@
In the diff_len < 0 case, it incorrectly uses 0 - b[0], which mishandles
the borrow - fix this by using bn_subw_subw(). Do the same in the
diff_len > 0 case for consistency. Note that this is never currently
reached since BN_usub() requires a >= b.
ok beck@ tb@
This is a different way of avoiding the pointer arithmetic on NULL and
avoids test breakage in pyca/cryptography. This is also a gross hack
that penalizes existing callers of BIO_s_mem(), but this is rarely
called in a hot loop and if so that will most likely be a test.
ok kenjiro joshua jsing
This causes a test failure in pyca/cryptography.
OK deraadt@
Provide method specific functions for EC_POINT_set_to_infinity() and
EC_POINT_is_at_infinity(). These are not always the same thing and
will depend on the coordinate system in use.
ok beck@ tb@
There are a very large number of entry points to libcrypto, which means it
is easy to run code prior to OPENSSL_init_crypto() being invoked. This
means that CPU capability detection will not have been run, leading to
poor choices with regards to the use of accelerated implementations.
Now that our CPU capability detection code has been cleaned up and is safe,
provide an openssl_init_crypto_constructor() that runs CPU capability
detection and invoke it as a library constructor. This should only be used
to invoke code that does not do memory allocation or trigger signals.
ok tb@
This is no longer used.
The arm CPU capability detection uses SIGILL and is unsafe to call from
some contexts. Furthermore, this is only useful to detect NEON support,
which is then unused on OpenBSD due to __STRICT_ALIGNMENT. Requiring a
minimum of ARMv7+VFP+NEON is also not unreasonable.
The SHA-1, SHA-256 and SHA-512 (non-NEON) C code performs within ~5% of
the assembly, as does RSA when using the C based Montgomery multiplication.
The C versions of AES and GHASH code are around ~40-50% of the assembly,
however if you care about performance you really want to use
Chacha20Poly1305 on this platform.
This will enable further clean up to proceed.
ok joshua@ kenjiro@ tb@
FIPS is currently revising their PBKDF2 recommendations and apparently
they want to require 16 octets.
https://github.com/pyca/cryptography/issues/12949
https://github.com/libressl/portable/issues/1168
ok kenjiro joshua jsing
Using hmacWithSHA1 isn't outrageously bad, but newly generated encrypted
password files ought to be using something better. Make it so.
https://github.com/pyca/cryptography/issues/12949
https://github.com/libressl/portable/issues/1168
ok joshua
binaries had become unlinkable. Change the libc definition to weak to solve
that, and to "const char * const" so that no one will try to set it late.
It must be stable before the first malloc() call, which could be before
main()...
discussion with otto, kettenis, tedu
Rework some logic, add explicit numerical checks, move assignment out of
variable declaration and use post-increment/post-decrement unless there is
a specific reason to do pre-increment.
ok kenjiro@ tb@
When checking the GCM tag, use timingsafe_memcmp() instead of memcmp().
ok tb@
SSL_alert_desc_string() is only used by our good old friends M2Crypto
and Net::SSLeay. While some of the two-letter combinations can be made
sense of without looking at the switch, I guess, this is just a
completely useless interface. The same level of uselessness can be
achieved in a single line matching BoringSSL.
ok joshua kenjiro
This adds significant complexity to the code. On amd64 and aarch64 it
results in a minimal slowdown for aligned inputs and a performance
improvement for unaligned inputs.
ok beck@ joshua@ tb@
Discussed with tb@
Check if ctx->data is NULL before calling freezero(). Also add
HKDF and TLS1-PRF to the EVP_PKEY cleanup regression test, as
they no longer crash with this change.
ok tb@
Initialize the output buffer with MLKEM1024_PUBLIC_KEY_BYTES
instead of MLKEM768_PUBLIC_KEY_BYTES.
ok tb@
The last #else branch in CRYPTO_gcm128_init() doesn't initialize the
function pointers for gmult/ghash, which results in a segfault when
using GCM on architectures taking this branch, notably sparc64.
found by and fix from jca
This is currently done in a rather silly way. Shift the index by 1
and avoid weird pointer dances. Rather than relying on static
initialization, use code to obviate a comment.
ok beck joshua jsing
ok tb@, joshua@
ok tb@, joshua@
This is a precursor to adding new group ids for post quantum
stuff which are up in the 4000 range, so using the array index
as the group id will be silly. Instead we just add the group
id to the structure and we walk the list to find it.
This should never be a very large list for us, so no need
to do anything cuter than linear search for now.
ok jsing@, joshua@
Even though this should remain internal, make it the same
as the public key marshal function, and make the needed
fallout changes in regress.
ok kenjiro@, tb@
ok tb@
Even though this should remain internal, make it the same
as the public key marshal function, and make the needed
fallout changes in regress.
This does not yet do the bikeshed of renaming the structure
field in the regress ctx, that will wait until a follow on
to convert 1024 in a similar manner.
ok tb@
ok jsing@, joshua@
- Get rid of CBB/CBS usage in public api
- Make void functions return int, so they can fail if malloc fails.
Along with some fallout and resulting bikeshedding in the regress tests.
ok jsing@, tb@
AES_ecb_encrypt() does not really do ECB - provide an
aes_ecb_encrypt_internal that actually does multiple blocks and call this
from aes_ecb_cipher(). Provide ECB with its own key initialisation
function, which allows aes_init_key() to be simplified considerably.
The block function pointer is now unused, so mop this up.
ok joshua@ tb@
Provide aes_{en,de}crypt_block128() which have correct function signatures
and use these when calling the various mode functions.
ok joshua@ tb@
Provide AES-NI with its own aesni_ofb_cipher() and switch aes_ofb_cipher()
to call AES_ofb128_encrypt() directly.
ok joshua@ tb@
Provide AES-NI with its own aesni_cfb*_cipher() functions, which then
allows us to change the existing aes_cfb*_cipher() functions to call
AES_cfb*_encrypt() directly.
ok beck@ tb@