Commit messages
In bf_local.h r1.2, openssl/opensslconf.h was pulled out of the
HEADER_BF_LOCL_H header guard, so BF_PTR was never defined from
opensslfeatures.h. As a result, alpha, mips64 and sparc64 have not been
using the path that is supposedly optimized for them. On the M3k the
speed gain of bf-cbc with BF_PTR is roughly 5%, so not really great.
This is blowfish, so I don't think we want to carry complications
for alpha and mips64 only.
ok jsing kenjiro
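
For illustration, a minimal sketch of the pitfall. The opensslfeatures.h excerpt is an assumption, following the old opensslconf.h convention of keying per-algorithm defines off the including header's guard macro:

    /* opensslfeatures.h, hypothetical simplified excerpt: BF_PTR is only
     * defined while bf_local.h is being processed, which is signalled by
     * its guard macro already being defined. */
    #if defined(HEADER_BF_LOCL_H) && !defined(CONFIG_HEADER_BF_LOCL_H)
    #define CONFIG_HEADER_BF_LOCL_H
    #if defined(__alpha__) || defined(__mips64__) || defined(__sparc64__)
    #define BF_PTR
    #endif
    #endif

    /* bf_local.h after r1.2: the include now runs before the guard macro
     * is defined, so the block above never fires and BF_PTR stays unset. */
    #include <openssl/opensslconf.h>
    #ifndef HEADER_BF_LOCL_H
    #define HEADER_BF_LOCL_H
    /* ... BF_PTR-dependent round macros ... */
    #endif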
Most of the constants here are only defined if a specific header is in
scope, so move the machine-independent macros to those headers and lose
the header guards. Most of these should actually be typedefs, but let's
change that when we bump the major version, since this technically has
ABI impact.
IDEA_INT, RC2_INT and RC4_INT are always unsigned int;
DES_LONG is always unsigned int, except on i386.
This preserves the existing situation on OpenBSD. If you're using
portable on i386 with a compiler that does not define __i386__,
there's an ABI break.
ok jsing
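
A sketch of the resulting definitions, as stated above (the unsigned long on i386 is an assumption matching the historical libdes choice; it has the same size there):

    #define IDEA_INT unsigned int
    #define RC2_INT  unsigned int
    #define RC4_INT  unsigned int

    #ifdef __i386__
    #define DES_LONG unsigned long  /* assumed: historical libdes type */
    #else
    #define DES_LONG unsigned int
    #endif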
The OPENSSL_IA32_SSE2 flag controls whether a number of the perlasm
scripts generate additional implementations that use SSE2 functionality.
In all cases except ghash, the code checks OPENSSL_ia32cap_P for SSE2
support before trying to run SSE2 code. For ghash it generates a CLMUL
based implementation in addition to two different MMX versions (one MMX
version hides behind OPENSSL_IA32_SSE2, the other does not); however,
the gated one does not appear to actually use SSE2. We also disable
AES-NI on i386 if OPENSSL_IA32_SSE2 is not defined.
On OpenBSD, we've always defined OPENSSL_IA32_SSE2, so this is effectively
a no-op. The only change is that we now check MMX rather than SSE2 for the
ghash MMX implementation.
ok bcook@ beck@
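
The gating pattern referred to above looks roughly like this (bit positions are the standard CPUID.1:EDX flags mirrored in word 0 of OPENSSL_ia32cap_P; the helper and macro names are illustrative):

    extern unsigned int OPENSSL_ia32cap_P[4];

    #define IA32CAP_MMX  (1U << 23)    /* CPUID.1:EDX bit 23 */
    #define IA32CAP_SSE2 (1U << 26)    /* CPUID.1:EDX bit 26 */

    static int
    can_use_mmx(void)
    {
        return (OPENSSL_ia32cap_P[0] & IA32CAP_MMX) != 0;
    }

    static int
    can_use_sse2(void)
    {
        return (OPENSSL_ia32cap_P[0] & IA32CAP_SSE2) != 0;
    }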
This no longer does anything on this architecture.
ok bcook@ beck@
These are unused internally and very few things look at them, none of
which should really matter to us, except possibly Free Pascal on Windows.
sizeof has been available since forever...
ok jsing
pointed out by/ok jsing
codesearch.debian.net only shows some legacy openssl patches plus binkd
(a FidoNet mailer) as the sole potential users. net-snmp and a strongswan
DES plugin bundle some opt-in libdes/openssl legacy things. If this
breaks any of that, I don't think we need to care. If you're really going
to use DES, you can also use a non-bleeding-edge libressl.
We can remove the big 'default values' block because one of
DES_RISC1, DES_RISC2 and DES_UNROLL is always defined (you can ignore
DES_PTR for this), so this is dead support code for mostly dead
platforms.
ok kenjiro
libdes is dead, Jim. Only its successors continue to haunt us.
discussed with jsing
This was the header guard for des_old.h introduced in 2002 and removed
in 2014. The header guard for des.h is HEADER_NEW_DES_H for the sake of
inconsistency (ostensibly due to backward compat concerns with libdes).
ok jsing
md2.h left the tree on Apr 15, 2014, along with jpake and seed. In
particular, HEADER_MD2_H is never defined. These bits have been dead
ever since.
ok jsing
The arm CPU capability detection uses SIGILL and is unsafe to call from
some contexts. Furthermore, it is only useful to detect NEON support,
which is then unused on OpenBSD due to __STRICT_ALIGNMENT. Requiring a
minimum of ARMv7+VFP+NEON is also not unreasonable.
The SHA-1, SHA-256 and SHA-512 (non-NEON) C code performs within ~5% of
the assembly, as does RSA when using the C-based Montgomery multiplication.
The C versions of the AES and GHASH code run at around 40-50% of the
assembly; however, if you care about performance you really want to use
ChaCha20-Poly1305 on this platform.
This will enable further cleanup to proceed.
ok joshua@ kenjiro@ tb@
The bitsliced and vector permutation AES implementations were created
around 2009, in attempts to speed up AES on Intel hardware. Both require
SSSE3, which has existed since around 2006. Intel introduced AES-NI in
2008, and a large percentage of Intel/AMD CPUs made in the last 15 years
include it. AES-NI is significantly faster and requires less code.
Furthermore, the BS-AES and VP-AES implementations are wired directly into
EVP (as is AES-NI currently), which means that any consumers of the AES_*
API are not able to benefit from acceleration. Removing these greatly
simplifies the EVP AES code - if you just happen to have a CPU that
supports SSSE3 but not AES-NI, then you'll now use the regular AES assembly
implementations instead.
ok kettenis@ tb@
This provides a SHA-512 assembly implementation that makes use of the ARM
Cryptographic Extension (CE), which is found on many arm64 CPUs. This gives
a performance gain of up to 2.5x on an Apple M2 (dependent on block size).
If an aarch64 machine does not have SHA512 support, then we'll fall back to
using the existing C implementation.
ok kettenis@ tb@
Some people are concerned that leaking a user name is a privacy issue.
Allow disabling the __FILE__ and __LINE__ arguments in the error stack
to avoid this. This can be improved a bit in tree.
From Viktor Szakats in https://github.com/libressl/portable/issues/761
ok bcook jsing
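
A hypothetical sketch of such a knob; the macro name OPENSSL_NO_FILENAMES is an assumption, but ERR_put_error() does take file and line as its final two arguments:

    /* With location reporting disabled, record no file/line at all. */
    #ifdef OPENSSL_NO_FILENAMES /* assumed opt-out name */
    #define ERR_PUT_error(lib, func, reason) \
        ERR_put_error((lib), (func), (reason), NULL, 0)
    #else
    #define ERR_PUT_error(lib, func, reason) \
        ERR_put_error((lib), (func), (reason), __FILE__, __LINE__)
    #endif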
This provides a SHA-256 assembly implementation that makes use of the ARM
Cryptographic Extension (CE), which is found on many arm64 CPUs. This gives
a performance gain of up to 7.5x on an Apple M2 (dependent on block size).
If an aarch64 machine does not have SHA2 support, then we'll fall back to
using the existing C implementation.
ok kettenis@ tb@
Currently, SHA{1,256,512}_ASM defines are used to remove the C
implementation of sha{1,256,512}_block_data_order() when it is provided
by assembly. However, this prevents the C implementation from being used
as a fallback.
Rename the C sha*_block_data_order() to sha*_block_generic() and provide
a sha*_block_data_order() that calls sha*_block_generic(). Replace the
Makefile-based SHA*_ASM defines with two HAVE_SHA_* defines that allow
these functions to be compiled in or removed, such that machine-specific
versions can be provided. This should effectively be a no-op on any
platform that defined SHA{1,256,512}_ASM.
ok tb@
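
A minimal sketch of the arrangement for SHA-256; the function names are from the message, while the exact spelling of the HAVE_* define is an assumption:

    #include <stddef.h>
    #include <openssl/sha.h>

    void sha256_block_generic(SHA256_CTX *ctx, const void *in, size_t num);

    /* Omitted when a machine-specific sha256_block_data_order() exists. */
    #ifndef HAVE_SHA256_BLOCK_DATA_ORDER
    void
    sha256_block_data_order(SHA256_CTX *ctx, const void *in, size_t num)
    {
        sha256_block_generic(ctx, in, num);
    }
    #endif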
The RC4_INDEX define switches between base pointer indexing and per-byte
pointer increment. This supposedly made a huge difference to performance
on x86 at some point; however, compilers have improved somewhat since then.
There is no change (or effectively no change) in generated assembly on
the majority of LLVM platforms, and even when there is some change
(e.g. aarch64), there is no noticeable performance difference.
Simplify the (still messy) macros/code and mop up RC4_INDEX.
ok tb@
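
Reduced to its essence, the choice was between these two loop styles (the helpers are illustrative; modern compilers typically emit near-identical code for both):

    #include <stddef.h>

    /* RC4_INDEX: index off the base pointer. */
    static unsigned char
    xor_indexed(const unsigned char *d, size_t n)
    {
        unsigned char x = 0;
        size_t i;

        for (i = 0; i < n; i++)
            x ^= d[i];
        return x;
    }

    /* !RC4_INDEX: per-byte pointer increment. */
    static unsigned char
    xor_incremented(const unsigned char *d, size_t n)
    {
        unsigned char x = 0;

        while (n-- > 0)
            x ^= *d++;
        return x;
    }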
This appears to be about 5% faster than the current perlasm version on a
modern Intel CPU.
While here rename md5_block_asm_data_order to md5_block_data_order, for
consistency with other hashes.
ok tb@
This provides a SHA-1 assembly implementation for amd64, which uses
the Intel SHA Extensions (aka SHA New Instructions or SHA-NI). This
provides a 2-2.5x performance gain on some Intel CPUs and many AMD CPUs.
ok tb@
As already done for SHA-256 and SHA-512, replace the perlasm generated
SHA-1 assembly implementation with one that is actually readable. Call the
assembly implementation from a C wrapper that can, in the future, dispatch
to alternate implementations. On a modern CPU the performance is around
5% faster than the base implementation generated by sha1-x86_64.pl;
however, it is around 15% slower than the excessively complex SSSE3/AVX
version that is also generated by the same script (a SHA-NI version will
greatly outperform this and is much cleaner/simpler).
ok tb@
This provides a SHA-256 assembly implementation for amd64, which uses
the Intel SHA Extensions (aka SHA New Instructions or SHA-NI). This
provides a 3-5x performance gain on some Intel CPUs and many AMD CPUs.
ok tb@
Replace the perlasm generated SHA-512 assembly with a more readable
version and the same C wrapper introduced for SHA-256. As for SHA-256,
on a modern CPU the performance is largely the same.
ok tb@
This also provides a crypto_cpu_caps_amd64 variable that can be checked
for CRYPTO_CPU_CAPS_AMD64_SHA.
ok tb@
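
Checking the new bit would look something like this (the variable and flag names are from the message; the type and bit value are assumptions):

    #include <stdint.h>

    #define CRYPTO_CPU_CAPS_AMD64_SHA (1ULL << 0) /* value assumed */

    extern uint64_t crypto_cpu_caps_amd64; /* type assumed */

    static int
    have_sha_extensions(void)
    {
        return (crypto_cpu_caps_amd64 & CRYPTO_CPU_CAPS_AMD64_SHA) != 0;
    }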
Replace the perlasm generated SHA-256 assembly implementation with one that
is actually readable. Call the assembly implementation from a C wrapper
that can, in the future, dispatch to alternate implementations. Performance
is similar (or even better) on modern CPUs, while somewhat slower on older
CPUs (this is in part due to the wrapper, the impact of which is more
noticeable with small block sizes).
Thanks to gkoehler@ and tb@ for testing.
ok tb@
Replace the aarch64 CPU detection code with a version that parses ISAR0,
avoiding signal handling and SIGILL. This gets ISAR0 via sysctl(), but this
can be adapted to other mechanisms for other platforms (or alternatively
the same can be achieved via HWCAP).
This now follows the same naming/design as used by amd64 and i386, hence
define HAVE_CRYPTO_CPU_CAPS_INIT for aarch64.
ok kettenis@ tb@
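
A sketch of the OpenBSD flavour; CTL_MACHDEP/CPU_ID_AA64ISAR0 is the machdep sysctl for ID_AA64ISAR0_EL1, and the field layout follows the ARM ARM (SHA2 in bits 15:12):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <machine/cpu.h>

    #include <stdint.h>

    static uint64_t
    read_isar0(void)
    {
        int mib[] = { CTL_MACHDEP, CPU_ID_AA64ISAR0 };
        uint64_t isar0 = 0;
        size_t len = sizeof(isar0);

        if (sysctl(mib, 2, &isar0, &len, NULL, 0) == -1)
            return 0;
        return isar0;
    }

    static int
    cpu_has_sha256(void)
    {
        /* ID_AA64ISAR0_EL1.SHA2: 1 = SHA-256, 2 = SHA-256 and SHA-512. */
        return ((read_isar0() >> 12) & 0xf) >= 1;
    }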
It is gross that an internal detail leaked into a public header, but,
hey, it's openssl. No hack is too terrible to appear in this library.
opensslconf.h needs major pruning but the day that happens is not today.
ok jsing
ppc64-mont.pl (which produces bn_mul_mont_fpu64()) is unused on both
powerpc and powerpc64, so remove it. ppccap.c doesn't actually contain
anything to do with CPU capabilities - it just provides a bn_mul_mont()
that calls bn_mul_mont_int() (which ppc-mont.pl generates). Change
ppc-mont.pl to generate bn_mul_mont() directly and remove ppccap.c.
ok tb@
Move the IA32 specific code to arch/{amd64,i386}/crypto_cpu_caps.c, rather
than polluting cryptlib.c with machine dependent code. A stub version of
crypto_cpu_caps_ia32() still remains for now.
This has been unused for a long time - it can be found in the attic if
someone wants to clean it up and enable it in the future.
ok tb@
This is the same CPU capabilities code that is now used for amd64. Like
amd64 we now only populate OPENSSL_ia32cap_P with bits used by perlasm.
Discussed with tb@
This is a CPU capability detection implementation in C, with minimal
inline assembly (for cpuid and xgetbv). This replaces the assembly
mess generated by x86_64cpuid.pl. Rather than populating OPENSSL_ia32cap_P
directly with CPUID output, just set the bits that the remaining
perlasm checks (namely AESNI, AVX, FXSR, INTEL, HT, MMX, PCLMUL, SSE, SSE2
and SSSE3).
ok joshua@ tb@
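
The minimal inline assembly this approach needs looks roughly like the following sketch (helper names are illustrative; the CPUID/XCR0 bit positions are architectural):

    #include <stdint.h>

    static inline void
    cpuid(uint32_t leaf, uint32_t subleaf, uint32_t *eax, uint32_t *ebx,
        uint32_t *ecx, uint32_t *edx)
    {
        __asm__ volatile("cpuid"
            : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
            : "a" (leaf), "c" (subleaf));
    }

    static inline uint64_t
    xgetbv(uint32_t xcr)
    {
        uint32_t eax, edx;

        __asm__ volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (xcr));
        return (uint64_t)edx << 32 | eax;
    }

    /* Example: AES-NI is CPUID.1:ECX bit 25. For AVX one must also check
     * OSXSAVE (ECX bit 27) and that XCR0 has the XMM/YMM state bits set. */
    static int
    cpu_has_aesni(void)
    {
        uint32_t eax, ebx, ecx, edx;

        cpuid(1, 0, &eax, &ebx, &ecx, &edx);
        return (ecx >> 25) & 1;
    }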
This allows us in particular to get rid of the MD Symbols.list files,
which were needed on amd64 and i386 for llvm 16 a while back.
OPENSSL_ia32cap_P was never properly exported since the symbols were
marked .hidden in the asm.
ok beck jsing
Provide a per architecture crypto_arch.h - this will be used in a similar
manner to bn_arch.h and will allow for architecture specific #defines and
static inline functions. Move the HAVE_AES_* and HAVE_RC4_* defines here.
ok tb@
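
A plausible shape for one of these headers; the specific defines shown are assumptions modelled on the commits further down this log:

    /* arch/amd64/crypto_arch.h (hypothetical sketch) */
    #ifndef HEADER_CRYPTO_ARCH_H
    #define HEADER_CRYPTO_ARCH_H

    /* Assembly implementations available on this architecture. */
    #define HAVE_AES_CBC_ENCRYPT_INTERNAL
    #define HAVE_RC4_INTERNAL
    #define HAVE_RC4_SET_KEY_INTERNAL

    #endif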
ssh tools. The dynamic objects are entirely ret-clean, static binaries
will contain a blend of cleaning and non-cleaning callers.
Always provide AES_{encrypt,decrypt}() via C functions, which then either
use a C implementation or call the assembly implementation.
ok tb@
These files are now built on all platforms.
This is a legacy algorithm and the assembly is only marginally faster than
the C code.
Discussed with beck@ and tb@
This is now built on all platforms.
Always include aes_core.c and provide AES_set_{encrypt,decrypt}_key() via C
functions, which then either use a C implementation or call the assembly
implementation.
ok tb@
This is now built on all platforms.
This is a legacy algorithm and the assembly is only marginally faster than
the C code.
Discussed with beck@ and tb@
Rename the assembly generated functions from AES_cbc_encrypt() to
aes_cbc_encrypt_internal(). Always include aes_cbc.c and change it
to use defines that are similar to those used in BN.
ok tb@
This is now built on all platforms.
Rather than having the public API switch between C and assembly, always
use C functions as entry points, which then call an assembly
implementation (if available). This makes it significantly easier
to deal with symbol aliasing/namespaces, and it also means we
benefit from vulnerability prevention provided by the C compiler.
Rename the assembly generated functions from RC4() to rc4_internal()
and RC4_set_key() to rc4_set_key_internal(). Always include rc4.c
and change it to use defines that are similar to those used in BN.
ok beck@ joshua@ tb@
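
A condensed sketch of the pattern; rc4_internal() is named in the message, while the HAVE_RC4_INTERNAL define is an assumption following the BN-style convention mentioned:

    #include <stddef.h>

    #include <openssl/rc4.h>

    /* Provided by assembly where available... */
    void rc4_internal(RC4_KEY *key, size_t len, const unsigned char *in,
        unsigned char *out);

    #ifndef HAVE_RC4_INTERNAL
    /* ...otherwise the portable C implementation supplies it. */
    void
    rc4_internal(RC4_KEY *key, size_t len, const unsigned char *in,
        unsigned char *out)
    {
        /* C implementation body. */
        (void)key; (void)len; (void)in; (void)out;
    }
    #endif

    /* The public entry point is now always a C function. */
    void
    RC4(RC4_KEY *key, size_t len, const unsigned char *in, unsigned char *out)
    {
        rc4_internal(key, len, in, out);
    }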