Commit log for src/lib/libcrypto/modes/gcm128.c
* Rework gcm128 implementation selection for amd64/i386. (jsing, 2025-06-28, 1 file, -57/+13)
  Provide gcm128_amd64.c and gcm128_i386.c, which contain the appropriate
  gcm128 initialisation and CPU feature tests for the respective platform.
  This allows all of the #define spaghetti to be removed from gcm128.c and
  removes one of the two remaining consumers of crypto_cpu_caps_ia32().
  ok tb@
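  As an illustration, such a platform file might look roughly like the
  sketch below. The identifiers here (gcm128_init_platform,
  crypto_cpu_caps_amd64, CRYPTO_CPU_CAPS_AMD64_CLMUL and the header name)
  are assumptions made for the sketch, not necessarily the actual
  LibreSSL symbols:

    /* gcm128_amd64.c - minimal sketch of per-platform selection */
    #include "modes_local.h"        /* hypothetical internal header */

    void
    gcm128_init_platform(GCM128_CONTEXT *ctx)
    {
            /*
             * Prefer a carry-less multiply (CLMUL) based GHASH when the
             * CPU supports it; otherwise fall back to the generic 4-bit
             * table implementation.
             */
            if ((crypto_cpu_caps_amd64 & CRYPTO_CPU_CAPS_AMD64_CLMUL) != 0) {
                    ctx->gmult = gcm_gmult_clmul;
                    ctx->ghash = gcm_ghash_clmul;
            } else {
                    ctx->gmult = gcm_gmult_4bit;
                    ctx->ghash = gcm_ghash_4bit;
            }
    }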
* Use a single implementation of gcm_mul()/gcm_ghash(). (jsing, 2025-06-28, 1 file, -19/+8)
  Since we always initialise the gmult/ghash function pointers, use the
  same implementation of gcm_mul() and gcm_ghash(), regardless of the
  actual underlying implementation. ok tb@
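  With the function pointers always set, the unified wrappers can be as
  simple as the following sketch (field and parameter names are
  approximate, not the verbatim LibreSSL code):

    static inline void
    gcm_mul(GCM128_CONTEXT *ctx)
    {
            /* Multiply the accumulator Xi by H via the selected backend. */
            ctx->gmult(ctx->Xi.u, ctx->Htable);
    }

    static inline void
    gcm_ghash(GCM128_CONTEXT *ctx, const uint8_t *in, size_t len)
    {
            /* Hash len bytes of input into Xi via the selected backend. */
            ctx->ghash(ctx->Xi.u, ctx->Htable, in, len);
    }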
* Remove less than useful comment. (jsing, 2025-06-28, 1 file, -8/+1)
* Make OPENSSL_IA32_SSE2 the default for i386 and remove the flag. (jsing, 2025-06-09, 1 file, -7/+1)
  The OPENSSL_IA32_SSE2 flag controls whether a number of the perlasm
  scripts generate additional implementations that use SSE2 functionality.
  In all cases except ghash, the code checks OPENSSL_ia32cap_P for SSE2
  support before trying to run SSE2 code. For ghash it generates a CLMUL
  based implementation in addition to two different MMX versions (one
  hides behind OPENSSL_IA32_SSE2, the other does not); however, this does
  not appear to actually use SSE2. We also disable AES-NI on i386 if
  OPENSSL_IA32_SSE2 is not defined. On OpenBSD we've always defined
  OPENSSL_IA32_SSE2, so this is effectively a no-op. The only change is
  that we now check MMX rather than SSE2 for the ghash MMX implementation.
  ok bcook@ beck@
* More code clean up. (jsing, 2025-06-08, 1 file, -10/+9)
  Fix some things that were missed in the last pass; the majority is the
  use of post-increment rather than unnecessary pre-increment.
* Remove more mess related to arm assembly. (jsing, 2025-06-08, 1 file, -23/+1)
* Mop up ghash arm assembly remnants. (jsing, 2025-05-24, 1 file, -18/+1)
* Do a clean up pass over the GCM code. (jsing, 2025-05-22, 1 file, -92/+86)
  Rework some logic, add explicit numerical checks, move assignment out of
  variable declaration and use post-increment/post-decrement unless there
  is a specific reason to do pre-increment. ok kenjiro@ tb@
* Use timingsafe_memcmp() in CRYPTO_gcm128_finish(). (jsing, 2025-05-22, 1 file, -2/+2)
  When checking the GCM tag, use timingsafe_memcmp() instead of memcmp().
  ok tb@
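  The tag check then has roughly the following shape (a sketch, not the
  verbatim code); timingsafe_memcmp(3) is provided by OpenBSD's libc and
  compares the full length without a data-dependent early exit:

    /*
     * Compare the computed tag (ctx->Xi) against the caller-supplied
     * tag in constant time; any mismatch is reported identically.
     */
    if (tag == NULL || len > sizeof(ctx->Xi.c) ||
        timingsafe_memcmp(ctx->Xi.c, tag, len) != 0)
            return -1;
    return 0;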
* Reorder some functions. (jsing, 2025-05-21, 1 file, -20/+20)
* Remove GHASH_CHUNK and size_t related code from GCM encrypt/decrypt. (jsing, 2025-05-21, 1 file, -220/+1)
  This code added significant complexity. On amd64 and aarch64, removing
  it results in a minimal slowdown for aligned inputs and a performance
  improvement for unaligned inputs. ok beck@ joshua@ tb@
* Fix wrapping. (jsing, 2025-05-21, 1 file, -13/+9)
* Remove more unused code. (jsing, 2025-05-21, 1 file, -95/+1)
  Discussed with tb@
* Unbreak GHASH on some architectures that set GHASH_ASM. (tb, 2025-05-20, 1 file, -1/+3)
  The last #else branch in CRYPTO_gcm128_init() doesn't initialize the
  function pointers for gmult/ghash, which results in a segfault when
  using GCM on architectures taking this branch, notably sparc64.
  Found by and fix from jca.
* Use stdint types instead of u64/u32/u8. (jsing, 2025-05-18, 1 file, -46/+46)
  No change in generated assembly.
* Remove contortions with the rem_4bit table. (jsing, 2025-05-18, 1 file, -28/+9)
  Instead of using size_t and a PACK macro, store the entries as uint16_t
  and then unconditionally left-shift by 48 bits. This gives a small
  performance gain on some architectures and has the advantage of reducing
  the size of the table from 1024 bits to 256 bits. ok beck@ joshua@ tb@
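  The reworked table and its use look roughly like the sketch below. The
  constants are the standard GHASH rem_4bit values; the surrounding code
  and the exact variable names are assumptions for the sketch:

    /*
     * Reduction table for 4-bit GHASH: 16 entries of 16 bits each
     * (256 bits total), instead of 16 size_t entries (1024 bits on
     * 64-bit platforms).
     */
    static const uint16_t rem_4bit[16] = {
            0x0000, 0x1c20, 0x3840, 0x2460, 0x7080, 0x6ca0, 0x48c0, 0x54e0,
            0xe100, 0xfd20, 0xd940, 0xc560, 0x9180, 0x8da0, 0xa9c0, 0xb5e0,
    };

    /*
     * In the multiply loop, each entry is shifted into the top 16 bits
     * of the high word, replacing the old PACK() macro.
     */
    Z.hi ^= (uint64_t)rem_4bit[rem] << 48;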
* Inline the REDUCE1BIT macro. (jsing, 2025-05-18, 1 file, -15/+6)
  The REDUCE1BIT macro is now only used in one place, so just inline it.
  Additionally, we do not need separate 32 bit and 64 bit versions - just
  use the 64 bit version and let the compiler deal with it (we effectively
  get the same code on i386). ok beck@ joshua@
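  Inlined, the 64-bit reduction step looks roughly like this (a sketch of
  the technique; V is a 128-bit value split into V.hi/V.lo, and the shift
  corresponds to multiplying by x in GCM's bit-reflected representation
  of GF(2^128)):

    /*
     * Shift V right by one bit; if the bit that falls off is set, fold
     * the reduction constant 0xe1 (placed at the top of the high word)
     * back in. The 0 - (V.lo & 1) trick builds an all-ones or all-zeros
     * mask without branching.
     */
    uint64_t T = 0xe100000000000000ULL & (0 - (V.lo & 1));
    V.lo = (V.hi << 63) | (V.lo >> 1);
    V.hi = (V.hi >> 1) ^ T;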
* Remove TABLE_BITS from gcm128. (jsing, 2025-05-17, 1 file, -234/+2)
  TABLE_BITS is currently always defined as 4: 8 is considered insecure
  due to timing leaks and 1 is considerably slower. Remove code that is
  not regularly tested, does not serve a lot of purpose and is making
  clean up harder than it needs to be. ok tb@
* Replace the GCM_MUL/GHASH defines with static inline functions. (jsing, 2025-05-16, 1 file, -121/+99)
  Rather than having defines for GCM_MUL/GHASH (along with the wonder that
  is GCM_FUNCREF_4BIT) and then conditioning on their availability,
  provide and call gcm_mul()/gcm_ghash() unconditionally. This simplifies
  all of the call sites. ok tb@
* Restore two #if defined(GHASH) that were incorrectly removed. (jsing, 2025-04-25, 1 file, -5/+5)
  Also condition on defined(GHASH_CHUNK), since this is used within these
  blocks. This makes the conditionals consistent with other usage. Fixes
  the build with TABLE_BITS == 1.
* Unifdef OPENSSL_SMALL_FOOTPRINT. (jsing, 2025-04-25, 1 file, -13/+5)
  ok tb@
* Use the OPENSSL_SMALL_FOOTPRINT code in gcm_init_4bit(). (jsing, 2025-04-25, 1 file, -32/+2)
  A modern compiler will unroll these loops - LLVM produces identical code
  (at least on arm64). Drop the manually unrolled version and have code
  that is more readable and maintainable. ok tb@
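  A sketch of the loop-based table setup that remains (types and the
  reduce1bit() helper name are approximate; u128 is a struct holding
  hi/lo uint64_t halves, and Htable[i] ends up holding i·H in GF(2^128)):

    u128 V;
    int i, j;

    /* Powers of two first: halve H repeatedly via the one-bit reduce. */
    Htable[0].hi = 0;
    Htable[0].lo = 0;
    V = H;
    for (Htable[8] = V, i = 4; i > 0; i >>= 1) {
            reduce1bit(&V);         /* as in the reduction sketched above */
            Htable[i] = V;
    }

    /* Remaining entries are XOR combinations of the entries so far. */
    for (i = 2; i < 16; i <<= 1) {
            for (j = 1; j < i; j++) {
                    Htable[i + j].hi = Htable[i].hi ^ Htable[j].hi;
                    Htable[i + j].lo = Htable[i].lo ^ Htable[j].lo;
            }
    }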
* Mop up all of the GETU32/BSWAP4/BSWAP8 macros since they're now unused. (jsing, 2025-04-23, 1 file, -7/+1)
  ok beck@ tb@
* Rewrite gcm_gmult_1bit() to avoid sizeof(long) hacks. (jsing, 2025-04-23, 1 file, -22/+8)
  We're already using 64 bit variables, so just continue to do so and let
  the compiler deal with code generation. While here, use unsigned right
  shifts instead of relying on signed right shifts and
  implementation-defined behaviour (which the original code did).
  Feedback from lucas@ ok beck@ tb@
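  The general shape of the change, sketched (building an all-ones or
  all-zeros mask from a single bit; variable names illustrative):

    /*
     * Old: implementation-defined - relies on an arithmetic right shift
     * sign-extending the top bit across the word.
     */
    T = (uint64_t)((int64_t)(Z.lo << 63) >> 63);

    /*
     * New: portable - unsigned subtraction yields all-ones when the low
     * bit is set and all-zeros otherwise.
     */
    T = 0 - (Z.lo & 1);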
* Fix CRYPTO_gcm128_decrypt() when compiled with TABLE_BITS == 1. (jsing, 2025-04-23, 1 file, -3/+3)
  This appears to have been broken since 2013, when OpenSSL commit
  3b4be0018b5 landed. This added in_t and out_t variables, but continued
  to use in and out instead. Yet another reason why untested conditional
  code is a bad thing. ok beck@ tb@
* Mop up the OPENSSL_FIPSAPI define. (jsing, 2025-04-22, 1 file, -3/+1)
* Mop up unused MODES_DEBUG. (jsing, 2025-04-21, 1 file, -7/+1)
* Reenable AES-NI in libcrypto. (tb, 2024-09-06, 1 file, -5/+8)
  The OPENSSL_cpu_caps() change after the last bump missed a crucial bit:
  there is more MD mess in the MI code than anticipated, with the result
  that AES is now used without AES-NI on amd64 and i386, hurting machines
  that previously greatly benefitted from it. Temporarily add an internal
  crypto_cpu_caps_ia32() API that returns OPENSSL_ia32cap_P or 0, like
  OPENSSL_cpu_caps() previously did. This can be improved after the
  release. Regression reported and fix tested by Mark Patruck. No impact
  on public ABI or API. with/ok jsing
  PS: Next time my pkg_add feels very slow, I should perhaps not
  mechanically blame IEEE 802.11...
* Improve byte order handling in gcm128. (jsing, 2023-08-10, 1 file, -329/+44)
  Replace a pile of byte order handling mess with htobe*() and be*toh().
  ok tb@
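  With these functions, loading and storing the big-endian GHASH and
  counter words reduces to something like this sketch (memcpy() avoids
  alignment assumptions; be64toh()/htobe64() come from <endian.h> on
  OpenBSD):

    #include <endian.h>
    #include <string.h>

    uint64_t hi, lo;

    /* Load a 128-bit big-endian value into two host-order words. */
    memcpy(&hi, p, sizeof(hi));
    memcpy(&lo, p + 8, sizeof(lo));
    hi = be64toh(hi);
    lo = be64toh(lo);

    /* ... operate on hi/lo in host byte order ... */

    /* Store the result back in big-endian order. */
    hi = htobe64(hi);
    memcpy(out, &hi, sizeof(hi));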
* Hide symbols in modes.h. (beck, 2023-07-08, 1 file, -1/+12)
  ok tb@
* Hit modes with the loving mallet of knfmt. (beck, 2023-07-08, 1 file, -560/+627)
  ok tb@
* Make internal header file names consistent. (tb, 2022-11-26, 1 file, -2/+2)
  Libcrypto currently has a mess of *_lcl.h, *_locl.h, and *_local.h names
  used for internal headers. Move all these headers we inherited from
  OpenSSL to *_local.h, reserving the name *_internal.h for our own code.
  Similarly, move dtls_locl.h and ssl_locl.h to dtls_local.h and
  ssl_local.h. constant_time_locl.h is moved to constant_time.h since it's
  special. Adjust all .c files in libcrypto, libssl and regress. The diff
  is mechanical with the exception of tls13_quic.c, where
  #include <ssl_locl.h> was fixed manually. discussed with jsing;
  no objection bcook
* Make the NEON codepaths conditional on __STRICT_ALIGNMENT not being defined. (kettenis, 2018-01-24, 1 file, -2/+2)
  They rely on unaligned access. ok joel@
* Wrap a complicated #ifdef block in CRYPTO_gcm128_finish() in a subblock. (deraadt, 2017-12-09, 1 file, -6/+8)
  In the middle of CRYPTO_gcm128_finish() there is a complicated #ifdef
  block which defines a variable late, after code. Place this chunk into
  a { subblock } to satisfy old compilers and old eyes.
* Check sizeof(size_t) using SIZE_MAX instead of _LP64. (inoguchi, 2017-09-03, 1 file, -7/+7)
  ok bcook@
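  The general shape of such a compile-time check (a sketch; SIZE_MAX and
  UINT32_MAX are standard <stdint.h> constants, unlike the
  compiler-specific _LP64):

    #include <stdint.h>

    #if SIZE_MAX > UINT32_MAX
    /* size_t is wider than 32 bits: take the 64-bit code path */
    #else
    /* 32-bit size_t */
    #endif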
* Fix ifdef to if in gcm128.c. (inoguchi, 2017-08-30, 1 file, -2/+2)
  ok deraadt@ bcook@
* fix missing bracket on ARM (bcook, 2017-08-14, 1 file, -15/+15)
  ok beck@
* move endian/word size checks from runtime to compile time (bcook, 2017-08-13, 1 file, -246/+264)
  ok guenther@
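  A sketch of such a compile-time selection (on OpenBSD the BYTE_ORDER
  macros are available via <machine/endian.h>):

    #include <machine/endian.h>

    #if BYTE_ORDER == LITTLE_ENDIAN
    /* little-endian path, chosen by the preprocessor at compile time */
    #elif BYTE_ORDER == BIG_ENDIAN
    /* big-endian path */
    #endif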
* use freezero() instead of memset/explicit_bzero + free. (deraadt, 2017-05-02, 1 file, -5/+2)
  Substantially reduces conditional logic (-218, +82). The
  MOD_EXP_CTIME_MIN_CACHE_LINE_WIDTH cache alignment calculation in
  bn/bn_exp.c wasn't quite right. Two other tricky bits with
  ASN1_STRING_FLAG_NDEF and BN_FLG_STATIC_DATA where the condition cannot
  be collapsed completely. Passes regress. ok beck
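  The transformation itself is mechanical; freezero(3) is part of
  OpenBSD's libc:

    /* Before: wipe the secret, then free it. */
    explicit_bzero(ctx, sizeof(*ctx));
    free(ctx);

    /* After: a single call wipes and frees. */
    freezero(ctx, sizeof(*ctx));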
* Replace all uses of magic numbers when operating on OPENSSL_ia32cap_P[]. (miod, 2016-11-04, 1 file, -6/+13)
  Use meaningful constants in a private header file instead, so that
  reviewers can actually get a chance to figure out what the code is
  attempting to do without knowing all cpuid bits. While there, turn it
  from an array of two 32-bit ints into a properly aligned 64-bit int.
  Use of OPENSSL_ia32cap_P is now restricted to the assembler parts.
  C code will now always use OPENSSL_cpu_caps() and check for the proper
  bits in the whole 64-bit word it returns. i386 tests and ok jsing@
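  An illustration of the change (use_aesni() and the CPUCAP_MASK_AESNI
  constant name are made up for this sketch; bit 25 of the cpuid ECX word
  is the AES-NI feature bit):

    /* Before: magic bit numbers against the raw cpuid words. */
    if (OPENSSL_ia32cap_P[1] & (1 << 25))
            use_aesni();

    /* After: a named constant against the 64-bit capability word. */
    if (OPENSSL_cpu_caps() & CPUCAP_MASK_AESNI)
            use_aesni();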
* Remove the I386_ONLY define. (miod, 2016-11-04, 1 file, -3/+2)
  It was only used to prefer a
  faster-on-genuine-80386-but-slower-on-80486-onwards instruction
  sequence in the SHA512 code, and had not been enabled in years, if at
  all. ok tom@ bcook@
* Correct spelling of OPENSSL_cleanse. (jsing, 2015-09-10, 1 file, -2/+2)
  ok miod@
* Remove assert() or OPENSSL_assert() of pointers being non-NULL. (miod, 2015-02-10, 1 file, -2/+1)
  The policy for libraries in OpenBSD is to deliberately let NULL pointers
  cause a SIGSEGV. ok doug@ jsing@
* Delete a lot of #if 0 code in libressl. (doug, 2015-02-07, 1 file, -4/+1)
  There are a few instances where #if 1 is removed but the code remains.
  Based on the following OpenSSL commits. Some of the commits weren't
  strictly deletions, so they are going to be split up into separate
  commits.
    6f91b017bbb7140f816721141ac156d1b828a6b3
    3d47c1d331fdc7574d2275cda1a630ccdb624b08
    dfb56425b68314b2b57e17c82c1df42e7a015132
    c8fa2356a00cbaada8963f739e5570298311a060
    f16a64d11f55c01f56baa62ebf1dec7f8fe718cb
    9ccc00ef6ea65567622e40c49aca43f2c6d79cdb
    02a938c953b3e1ced71d9a832de1618f907eb96d
    75d0ebef2aef7a2c77b27575b8da898e22f3ccd5
    d6fbb194095312f4722c81c9362dbd0de66cb656
    6f1a93ad111c7dfe36a09a976c4c009079b19ea1
    1a5adcfb5edfe23908b350f8757df405b0f5f71f
    8de24b792743d11e1d5a0dcd336a49368750c577
    a2b18e657ea1a932d125154f4e13ab2258796d90
    8e964419603d2478dfb391c66e7ccb2dcc9776b4
    32dfde107636ac9bc62a5b3233fe2a54dbc27008
  input + ok jsing@, miod@, tedu@
* Remove leading underscore from _BYTE_ORDER and _{LITTLE,BIG}_ENDIAN. (miod, 2014-07-09, 1 file, -32/+32)
  This is more friendly to systems where the underscore flavours may be
  defined as empty. Found the hard way by bcook@; joint brainstorm with
  bcook, beck and guenther.
* hand-KNF the macro do { } while loops (deraadt, 2014-06-27, 1 file, -13/+13)
* Add tags, as requested by miod and tedu. (deraadt, 2014-06-12, 1 file, -0/+1)
* malloc() result does not need a cast. (deraadt, 2014-06-07, 1 file, -1/+1)
  ok miod
* Move the cts128 and gcm128 tests to regress. (jsing, 2014-05-31, 1 file, -307/+0)
* Get __STRICT_ALIGNMENT from <machine/endian.h> and decide upon it. (miod, 2014-05-07, 1 file, -3/+3)
  Rather than defining it only for the architectures that are not i386 or
  amd64 (and sometimes s390). Compile-time tests remain compile-time
  tests, and runtime tests remain runtime tests instead of being converted
  to compile-time tests, per matthew@'s explicit demand (rationale: this
  makes sure the compiler checks your code even if you won't run it). No
  functional change except on s390 (which we don't run on) and vax (which
  we run on, but no one cares about). ok matthew@