path: root/src/lib/libcrypto
Commit log (each entry shows author, date, files changed, lines -/+):
* Create bm->buf from the start to avoid arithmetic on NULL (tb, 2025-05-24, 1 file, -1/+7)
  This is a different way of avoiding the pointer arithmetic on NULL and avoids test breakage in pyca/cryptography. This is also a gross hack that penalizes existing callers of BIO_s_mem(), but this is rarely called in a hot loop and if so that will most likely be a test. ok kenjiro joshua jsing
* Revert "bio_mem: avoid pointer arithmetic on NULL" (tb, 2025-05-24, 1 file, -4/+2)
  This causes a test failure in pyca/cryptography.
* Provide method specific functions for EC POINT infinity. (jsing, 2025-05-24, 3 files, -10/+27)
  Provide method specific functions for EC_POINT_set_to_infinity() and EC_POINT_is_at_infinity(). These are not always the same thing and will depend on the coordinate system in use. ok beck@ tb@
* Mop up ghash arm assembly remnants. (jsing, 2025-05-24, 1 file, -18/+1)
* Provide openssl_init_crypto_constructor() and invoke via a constructor. (jsing, 2025-05-24, 1 file, -3/+14)
  There are a very large number of entry points to libcrypto, which means it is easy to run code prior to OPENSSL_init_crypto() being invoked. This means that CPU capability detection will not have been run, leading to poor choices with regards to the use of accelerated implementations. Now that our CPU capability detection code has been cleaned up and is safe, provide an openssl_init_crypto_constructor() that runs CPU capability detection and invoke it as a library constructor. This should only be used to invoke code that does not do memory allocation or trigger signals. ok tb@
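The library-constructor mechanism described above can be sketched with the GCC/Clang `constructor` attribute, which runs a function before `main()` (or at `dlopen` time). The names below are illustrative stand-ins, not the actual libcrypto symbols:

```c
/*
 * Sketch of the library-constructor pattern, assuming a
 * GCC/Clang toolchain. A function marked with the constructor
 * attribute runs before main(), so CPU feature detection is
 * guaranteed to have happened before any library entry point
 * can be reached. Names here are hypothetical.
 */
static int cpu_caps_sketch;

__attribute__((constructor))
static void
crypto_init_ctor_sketch(void)
{
	/* Pretend: probe CPU features here. Must not allocate
	 * memory or rely on signal delivery. */
	cpu_caps_sketch = 1;
}
```

By the time any other function in the same library runs, `cpu_caps_sketch` has already been set.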
* Remove remnants of OPENSSL_cpuid_setup(). (jsing, 2025-05-24, 3 files, -20/+10)
  This is no longer used.
* Disable libcrypto assembly on arm. (jsing, 2025-05-24, 5 files, -257/+2)
  The arm CPU capability detection uses SIGILL and is unsafe to call from some contexts. Furthermore, this is only useful to detect NEON support, which is then unused on OpenBSD due to __STRICT_ALIGNMENT. Requiring a minimum of ARMv7+VFP+NEON is also not unreasonable. The SHA-1, SHA-256 and SHA-512 (non-NEON) C code performs within ~5% of the assembly, as does RSA when using the C based Montgomery multiplication. The C versions of the AES and GHASH code are around ~40-50% of the assembly, however if you care about performance you really want to use Chacha20Poly1305 on this platform. This will enable further clean up to proceed. ok joshua@ kenjiro@ tb@
* Crank default salt length of PBE2 to 16 octets (tb, 2025-05-24, 2 files, -4/+13)
  FIPS is currently revising their PBKDF2 recommendations and apparently they want to require 16 octets.
  https://github.com/pyca/cryptography/issues/12949
  https://github.com/libressl/portable/issues/1168
  ok kenjiro joshua jsing
* Switch the default PBMAC to hmacWithSHA256 (tb, 2025-05-24, 1 file, -2/+2)
  Using hmacWithSHA1 isn't outrageously bad, but newly generated encrypted password files ought to be using something better. Make it so.
  https://github.com/pyca/cryptography/issues/12949
  https://github.com/libressl/portable/issues/1168
  ok joshua
* Do a clean up pass over the GCM code. (jsing, 2025-05-22, 1 file, -92/+86)
  Rework some logic, add explicit numerical checks, move assignment out of variable declaration and use post-increment/post-decrement unless there is a specific reason to do pre-increment. ok kenjiro@ tb@
* Use timingsafe_memcmp() in CRYPTO_gcm128_finish(). (jsing, 2025-05-22, 1 file, -2/+2)
  When checking the GCM tag, use timingsafe_memcmp() instead of memcmp(). ok tb@
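The point of timingsafe_memcmp() here is that the comparison time must not depend on where the first mismatching tag byte occurs. A minimal sketch of such a constant-time comparison (a toy stand-in, not the libc implementation) looks like this:

```c
#include <stddef.h>

/*
 * Sketch of a constant-time comparison in the spirit of
 * timingsafe_memcmp(): the loop touches every byte regardless
 * of where a mismatch occurs, so timing does not leak the
 * position of the first differing byte. Hypothetical helper
 * name; not the actual libc function.
 */
static int
ct_memcmp_sketch(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];	/* accumulate, never branch */

	return diff != 0;
}
```

An early-exit memcmp() would instead let an attacker guess a MAC tag byte by byte by measuring response times.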
* Reorder some functions. (jsing, 2025-05-21, 1 file, -20/+20)
* Remove GHASH_CHUNK and size_t related code from GCM encrypt/decrypt. (jsing, 2025-05-21, 1 file, -220/+1)
  This adds significant complexity to the code. On amd64 and aarch64 it results in a minimal slowdown for aligned inputs and a performance improvement for unaligned inputs. ok beck@ joshua@ tb@
* Fix wrapping. (jsing, 2025-05-21, 1 file, -13/+9)
* Remove now unused AES assembly generation scripts. (jsing, 2025-05-21, 3 files, -5256/+0)
* Remove more unused code. (jsing, 2025-05-21, 1 file, -95/+1)
  Discussed with tb@
* Add NULL checks to HKDF and TLS1-PRF EVP_PKEY cleanup functions (kenjiro, 2025-05-21, 2 files, -2/+8)
  Check if ctx->data is NULL before calling freezero(). Also add HKDF and TLS1-PRF to the EVP_PKEY cleanup regression test, as they no longer crash with this change. ok tb@
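The cleanup pattern being fixed can be sketched as follows: a context's `data` pointer may still be NULL if the context was never fully initialized, so the cleanup function must guard before wiping and freeing through it. All names below (the structs, `freezero_sketch`, `cleanup_sketch`) are illustrative, not the actual EVP internals:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the HKDF/TLS1-PRF context layout. */
struct kdf_data_sketch {
	unsigned char *key;
	size_t key_len;
};

struct pkey_ctx_sketch {
	struct kdf_data_sketch *data;
};

/* Toy stand-in for freezero(3): wipe, then free. */
static void
freezero_sketch(void *ptr, size_t len)
{
	if (ptr == NULL)
		return;
	memset(ptr, 0, len);
	free(ptr);
}

static void
cleanup_sketch(struct pkey_ctx_sketch *ctx)
{
	if (ctx->data == NULL)
		return;		/* never initialized: cleanup is a no-op */
	freezero_sketch(ctx->data->key, ctx->data->key_len);
	free(ctx->data);
	ctx->data = NULL;
}
```

Without the NULL check, cleaning up a never-initialized context would dereference `ctx->data` and crash.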
* Fix buffer size in MLKEM1024_marshal_public_key() (kenjiro, 2025-05-21, 1 file, -2/+2)
  Initialize the output buffer with MLKEM1024_PUBLIC_KEY_BYTES instead of MLKEM768_PUBLIC_KEY_BYTES. ok tb@
* Unbreak GHASH on some architectures setting GHASH_ASM (tb, 2025-05-20, 1 file, -1/+3)
  The last #else branch in CRYPTO_gcm128_init() doesn't initialize the function pointers for gmult/ghash, which results in a segfault when using GCM on architectures taking this branch, notably sparc64. found by and fix from jca
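The bug class here is worth a sketch: a context holds function pointers, and every branch of the initialization must set them, or the first use crashes. The struct and function names below are illustrative, not the real gcm128 internals:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the gcm128 function-pointer setup. */
typedef void (*gmult_fn_sketch)(uint64_t Xi[2]);

struct gcm_sketch {
	gmult_fn_sketch gmult;
};

/* Toy generic multiply standing in for gcm_gmult_4bit(). */
static void
gmult_4bit_sketch(uint64_t Xi[2])
{
	Xi[0] ^= 1;
}

static void
gcm_init_sketch(struct gcm_sketch *ctx, int have_asm)
{
	if (have_asm) {
		/* asm-accelerated variant would be assigned here */
		ctx->gmult = gmult_4bit_sketch;
	} else {
		/* The broken #else branch left ctx->gmult unset,
		 * crashing on first use; every branch must assign
		 * a fallback. */
		ctx->gmult = gmult_4bit_sketch;
	}
}
```

The fix is simply ensuring the fallback assignment exists on the path that previously left the pointers uninitialized.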
* Simplify err_build_SYS_str_reasons (tb, 2025-05-20, 1 file, -19/+13)
  This is currently done in a rather silly way. Shift the index by 1 and avoid weird pointer dances. Rather than relying on static initialization, use code to obviate a comment. ok beck joshua jsing
* Fix previous - names use underscores and not hyphens. (jsing, 2025-05-20, 1 file, -3/+3)
* Add ML-KEM768 hybrid KEMs to obj_mac.num (beck, 2025-05-20, 1 file, -0/+3)
  ok tb@, joshua@
* Add ML-KEM768 hybrid KEMs to objects.txt (beck, 2025-05-20, 1 file, -0/+6)
  ok tb@, joshua@
* Make MLKEM1024_marshal_private_key consistent with the public_key functions (beck, 2025-05-20, 2 files, -27/+44)
  Even though this should remain internal, make it the same as the public key marshal function, and make the needed fallout changes in regress. ok kenjiro@, tb@
* Whitespace nits from tb (beck, 2025-05-20, 1 file, -1/+4)
  ok tb@
* Fix up MLKEM768_marshal_private_key to not use a passed in CBB (beck, 2025-05-19, 2 files, -27/+43)
  Even though this should remain internal, make it the same as the public key marshal function, and make the needed fallout changes in regress. This does not yet do the bikeshed of renaming the structure field in the regress ctx, that will wait until a follow on to convert 1024 in a similar manner. ok tb@
* Remove the boringssl if || idiom from mlkem (beck, 2025-05-19, 2 files, -34/+46)
  ok jsing@, joshua@
* API changes for ML-KEM (beck, 2025-05-19, 4 files, -78/+126)
  - Get rid of CBB/CBS usage in public api
  - Make void functions return int that can fail if malloc fails.
  Along with some fallout and resulting bikeshedding in the regress tests. ok jsing@, tb@
* Simplify EVP AES code for ECB. (jsing, 2025-05-19, 2 files, -33/+46)
  AES_ecb_encrypt() does not really do ECB - provide an aes_ecb_encrypt_internal that actually does multiple blocks and call this from aes_ecb_cipher(). Provide ECB with its own key initialisation function, which allows aes_init_key() to be simplified considerably. The block function pointer is now unused, so mop this up. ok joshua@ tb@
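The "actually does multiple blocks" idea can be sketched as a loop that applies a single-block primitive across a whole buffer, 16 bytes at a time. The toy cipher below is a hypothetical stand-in for the per-block AES primitive, not the libcrypto implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy single-block "cipher" standing in for one AES block op. */
static void
toy_block_encrypt(const uint8_t in[16], uint8_t out[16], uint8_t key)
{
	for (int i = 0; i < 16; i++)
		out[i] = in[i] ^ key;
}

/*
 * Sketch of an ecb_encrypt_internal-style helper: real ECB
 * walks the input in 16-byte blocks, encrypting each one
 * independently. len is assumed to be a multiple of 16.
 */
static void
toy_ecb_encrypt(const uint8_t *in, uint8_t *out, size_t len, uint8_t key)
{
	for (size_t i = 0; i + 16 <= len; i += 16)
		toy_block_encrypt(in + i, out + i, key);
}
```

A cipher entry point built this way processes the full buffer itself, so callers no longer need a separate block function pointer.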
* Remove block128_f function casts. (jsing, 2025-05-19, 1 file, -8/+20)
  Provide aes_{en,de}crypt_block128() which have correct function signatures and use these when calling the various mode functions. ok joshua@ tb@
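Calling a function through a pointer of an incompatible type is undefined behavior in C, which is why casts to a mode-function type get replaced with thin wrappers that have the exact expected signature. The sketch below models that pattern; `block128_f` is shaped like the mode-function pointer type, while the toy cipher and key struct are hypothetical:

```c
#include <stdint.h>

/* Mode-function pointer type, modeled on block128_f. */
typedef void (*block128_f)(const uint8_t in[16], uint8_t out[16],
    const void *key);

/* Hypothetical key schedule stand-in. */
struct toy_key {
	uint8_t pad;
};

/* The "real" cipher takes a concrete key type, so its signature
 * does not match block128_f. */
static void
toy_encrypt(const uint8_t *in, uint8_t *out, const struct toy_key *key)
{
	for (int i = 0; i < 16; i++)
		out[i] = in[i] ^ key->pad;
}

/*
 * Wrapper with precisely the block128_f signature: calling
 * through the pointer is now well-defined, with no function
 * cast anywhere.
 */
static void
toy_encrypt_block128(const uint8_t in[16], uint8_t out[16], const void *key)
{
	toy_encrypt(in, out, key);
}
```

The cost is one trivial forwarding call; the benefit is that the compiler checks the signature instead of a cast silencing it.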
* Simplify EVP AES code for OFB. (jsing, 2025-05-19, 1 file, -7/+19)
  Provide AES-NI with its own aesni_ofb_cipher() and switch aes_ofb_cipher() to call AES_ofb128_encrypt() directly. ok joshua@ tb@
* Simplify EVP AES code for CFB. (jsing, 2025-05-19, 1 file, -25/+79)
  Provide AES-NI with its own aesni_cfb*_cipher() functions, which then allows us to change the existing aes_cfb*_cipher() functions to call AES_cfb*_encrypt() directly. ok beck@ tb@
* EC_POINT_new: wording tweaks in the BUGS section (tb, 2025-05-18, 1 file, -6/+6)
* Simplify EVP AES code for CTR. (jsing, 2025-05-18, 1 file, -22/+23)
  Provide AES-NI with its own aesni_ctr_cipher(), which then allows us to change aes_ctr_cipher() to call AES_ctr128_encrypt() directly. The stream.ctr function pointer is now unused and can be mopped up. ok beck@ tb@
* Unifdef AES_CTR_ASM. (jsing, 2025-05-18, 1 file, -14/+1)
  This is a remnant from s390x assembly.
* Simplify EVP code for AES CBC. (jsing, 2025-05-18, 1 file, -26/+33)
  Change aes_cbc_cipher() to call AES_cbc_encrypt() directly, rather than via the stream.cbc function pointer. Remove stream.cbc since it is no longer used. Also provide a separate aes_cbc_init_key() function which makes this standalone and does not require checking mode flags. ok joshua@ tb@
* add missing u64/uint64_t conversion (bcook, 2025-05-18, 1 file, -3/+3)
  ok jsing@
* Use stdint types instead of u64/u32/u8. (jsing, 2025-05-18, 5 files, -134/+127)
  No change in generated assembly.
* Remove contortions with the rem_4bit table. (jsing, 2025-05-18, 1 file, -28/+9)
  Instead of using size_t and a PACK macro, store the entries as uint16_t and then unconditionally left shift by 48 bits. This gives a small performance gain on some architectures and has the advantage of reducing the size of the table from 1024 bits to 256 bits. ok beck@ joshua@ tb@
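The table-shrinking idea is simple enough to sketch: when every 64-bit entry's significant bits live in the top 16 bits, store 16-bit entries and widen with a shift at the point of use (16 entries × 16 bits = 256 bits instead of 16 × 64 = 1024). The constants below are illustrative, not the real rem_4bit data:

```c
#include <stdint.h>

/*
 * Sketch of the rem_4bit compaction: hypothetical 16-bit table
 * entries whose 64-bit form is recovered by an unconditional
 * left shift of 48 bits at lookup time.
 */
static const uint16_t rem_sketch[4] = {
	0x0000, 0x1c20, 0x3840, 0x2460,	/* illustrative values */
};

static uint64_t
rem_lookup_sketch(int i)
{
	/* widen a 16-bit entry into the top bits of a uint64_t */
	return (uint64_t)rem_sketch[i] << 48;
}
```

The shift is cheap, and a 32-byte table is far friendlier to caches (and harder to turn into a timing side channel) than a 128-byte one.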
* Inline REDUCE1BIT macro. (jsing, 2025-05-18, 1 file, -15/+6)
  The REDUCE1BIT macro is now only used in one place, so just inline it. Additionally we do not need separate 32 bit and 64 bit versions - just use the 64 bit version and let the compiler deal with it (we effectively get the same code on i386). ok beck@ joshua@
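A one-bit GF(2^128) reduction step of the kind REDUCE1BIT performs can be sketched as: shift the 128-bit value right by one and, if a bit fell off the low end, fold in the GHASH reduction constant 0xe1 in the top byte. The struct layout here is an assumption for illustration:

```c
#include <stdint.h>

/* Hypothetical 128-bit value as two 64-bit halves. */
struct u128_sketch {
	uint64_t hi, lo;
};

/*
 * Sketch of a branch-free one-bit reduction: `carry` is all-ones
 * when the low bit is set and zero otherwise, so the reduction
 * constant is folded in without a conditional.
 */
static void
reduce1bit_sketch(struct u128_sketch *v)
{
	uint64_t carry = 0 - (v->lo & 1);

	v->lo = (v->hi << 63) | (v->lo >> 1);
	v->hi = (v->hi >> 1) ^ (carry & 0xe100000000000000ULL);
}
```

With only one call site left, inlining this removes a layer of macro indirection without changing the generated code.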
* bio_mem: avoid pointer arithmetic on NULL (tb, 2025-05-18, 1 file, -2/+4)
  Prompted by a diff by Kenjiro Nakayama. ok jsing
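The undefined behavior being avoided is subtle: in C, merely computing `base + off` when `base` is NULL is undefined, even if the result is never dereferenced, and UBSan flags it. A hedged sketch of the guard pattern (names are illustrative, not the BIO internals):

```c
#include <stddef.h>

/*
 * Sketch: never form `NULL + off`. Guard the arithmetic so the
 * NULL case short-circuits before any pointer math happens.
 */
static const char *
buf_pos_sketch(const char *base, size_t off)
{
	if (base == NULL)
		return NULL;	/* arithmetic on NULL is UB, skip it */
	return base + off;
}
```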
* rc2: two files escaped the lure of the attic, set these poor souls free (tb, 2025-05-18, 2 files, -241/+0)
* Remove TABLE_BITS from gcm128. (jsing, 2025-05-17, 2 files, -248/+3)
  TABLE_BITS is always currently defined as 4 - 8 is considered to be insecure due to timing leaks and 1 is considerably slower. Remove code that is not regularly tested, does not serve a lot of purpose and is making clean up harder than it needs to be. ok tb@
* Replace GCM_MUL/GHASH defines with static inline functions. (jsing, 2025-05-16, 1 file, -121/+99)
  Rather than having defines for GCM_MUL/GHASH (along with the wonder that is GCM_FUNCREF_4BIT) then conditioning on their availability, provide and call gcm_mul()/gcm_ghash() unconditionally. This simplifies all of the call sites. ok tb@
* Increase default PKCS12_SALT_LEN from 8 to 16 bytes (tb, 2025-05-10, 1 file, -2/+2)
  Currently the PKCS12_setup_mac() function uses a salt length of 8 bytes / 64 bits when no salt length is specified. Increase this fallback default to 16 bytes / 128 bits, as recommended by NIST SP 800-132. Note this is for interoperability purposes. Some FIPS implementations enforce a minimum salt length of 16 bytes. Examples of such FIPS implementations are the Bouncycastle FIPS Java API and the Chainguard FIPS Provider for OpenSSL. The future v3.6 release of OpenSSL will also increase the default salt length to 16 bytes. From Dimitri John Ledkov, thanks
* asn_moid: move inclusion of err_local.h to the proper place (tb, 2025-05-10, 1 file, -2/+2)
* Sort FOOerror() in ASCII order (tb, 2025-05-10, 1 file, -18/+18)
* Simplify the remaining FOOerror() (tb, 2025-05-10, 1 file, -26/+28)
  Redirect through an additional macro that adds the repeated function, file and line macros. Reduces the eyesore and makes the whole thing much more readable. similar to a suggestion by jsing a while back
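The "redirect through one macro" idea can be sketched like this: each per-library error macro forwards to a single helper macro that supplies the repeated `__func__`/`__FILE__`/`__LINE__` arguments once. All names below are hypothetical, not the actual err_local.h macros:

```c
/*
 * Sketch of the FOOerror() simplification, with invented names.
 * One shared macro appends the boilerplate arguments, so each
 * per-library macro shrinks to a one-liner.
 */
static int last_lib_sketch, last_reason_sketch;

/* Toy error sink standing in for the real error-record call. */
static void
err_put_sketch(int lib, int reason, const char *func, const char *file,
    int line)
{
	(void)func; (void)file; (void)line;	/* unused in the toy */
	last_lib_sketch = lib;
	last_reason_sketch = reason;
}

/* The shared macro adds the repeated arguments exactly once... */
#define ERR_SKETCH(lib, reason) \
	err_put_sketch((lib), (reason), __func__, __FILE__, __LINE__)

/* ...so a per-library macro becomes trivial (42 is a made-up
 * library code). */
#define FOOerror_sketch(reason) ERR_SKETCH(42, (reason))
```

Each call site then reads `FOOerror_sketch(reason)` instead of repeating the function/file/line triple everywhere.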
* Remove unused internal FOOerror() (tb, 2025-05-10, 1 file, -11/+1)
  pointed out by djm a while back
* Remove error macros except PEMerr(), RSAerr() and SSLerr() (tb, 2025-05-10, 1 file, -37/+4)
  These three are still used in about half a dozen ports. All the others are unused. ok jsing