path: root/src
2023-05-23  Add empty line for consistency  (tb)  1 file, -1/+2
2023-05-23  Add regress coverage for obj_dat.c r1.52  (tb)  1 file, -1/+44
2023-05-23  Always NUL terminate buf in OBJ_obj2txt()  (tb)  1 file, -1/+4
OBJ_obj2txt() is often called without error checking and is used for reporting unexpected or malformed objects. As such, we should ensure buf is a string even on failure. This had long been the case before it was lost in a recent rewrite. If obj and obj->data are both non-NULL this is already taken care of by i2t_ASN1_OBJECT_internal(), so many callers were still safe. ok miod
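A typical call site of the kind described above might look like this (illustrative sketch, not code from the tree):

    #include <stdio.h>

    #include <openssl/objects.h>

    /*
     * Error reporting often ignores the return value, so buf must be
     * a valid string even when OBJ_obj2txt() fails.
     */
    static void
    report_unexpected_object(const ASN1_OBJECT *obj)
    {
        char buf[80];

        OBJ_obj2txt(buf, sizeof(buf), obj, 0);
        fprintf(stderr, "unexpected object: %s\n", buf);
    }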
2023-05-23  cms_asn1.c: zap stray tabs  (tb)  1 file, -8/+1
2023-05-22  Remove misplaced semicolons in .Fa  (tb)  2 files, -6/+6
2023-05-20  ecdhtest: Fix indent  (tb)  1 file, -2/+2
2023-05-20  Remove a space that I thought I had already deleted.  (tb)  1 file, -2/+2
Makes mandoc -Tlint happier
2023-05-20  Add a slow regress target that runs openssl speed with proper alignment  (tb)  1 file, -2/+7
and with an unaligned offset. Let's see if all ciphers on our strict alignment arches can deal with this.
2023-05-20  openssl speed: add an '-unaligned n' option  (tb)  2 files, -7/+37
All hashes and ciphers covered by speed should be able to handle unaligned input and output. The buffers used in openssl speed are well aligned since they are large, so they will never exercise the more problematic unaligned case.

I have wished something like this was available on various occasions. It would have been useful to point more easily at OpenSSL's broken T4 assembly. Yesterday there were two independent reasons for wanting it, so I sat down and did it. It's trivial: make the allocations a bit larger and use buffers starting at an offset inside these allocations. Despite the triviality, I managed to have a stupid bug. Thanks miod.

discussed with jsing
ok miod
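The trick can be sketched as follows (hypothetical helper, not the actual speed.c code): over-allocate, then hand out a pointer at a configurable misalignment offset inside the allocation.

    #include <stdlib.h>

    #define MAX_UNALIGN 15

    /*
     * Returns a buffer of len usable bytes starting unaligned bytes
     * past the start of the (well-aligned) allocation; the allocation
     * itself is returned via base for later free().
     */
    static unsigned char *
    alloc_with_offset(size_t len, size_t unaligned, unsigned char **base)
    {
        if (unaligned > MAX_UNALIGN)
            return NULL;
        if ((*base = malloc(len + MAX_UNALIGN)) == NULL)
            return NULL;
        return *base + unaligned;    /* deliberately misaligned */
    }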
2023-05-20  openssl speed: minor style nits  (tb)  1 file, -8/+6
This drops a bunch of unnecessary parentheses, makes the strcmp() checks consistent and moves some "}\n\telse" to "} else". Makes an upcoming commit smaller
2023-05-20  openssl speed: remove binary curve remnants  (tb)  1 file, -88/+5
This wasn't properly hidden under OPENSSL_NO_EC2M, and all it does now is produce ugly errors and useless "statistics". While looking at this, I found that much of speed "has been pilfered from [Eric A. Young's] libdes speed.c program". Apparently this was a precursor and ingredient of SSLeay. Unfortunately, it seems that this piece of the history is lost. ok miod

PS: If anyone is bored, a rewrite from scratch of the speed 'app' would be a welcome contribution and may be an instructive rainy day project. The current code was written in about the most stupid way possible so as to maximize fragility and unmaintainability.
2023-05-19  Add missing rsa_security_bit() handler to the RSA-PSS ASN1_METHOD  (tb)  1 file, -1/+2
Prompted by a report by Steffen Ullrich on libressl@openbsd.org ok jsing
2023-05-19  backout alignment changes (breaking at least two architectures)  (deraadt)  4 files, -100/+89
2023-05-18  Add PROTO_NORMAL() declarations for the remaining syscalls, to avoid  (guenther)  1 file, -4/+1
future, inadvertent PLT entries. Move the __getcwd and __realpath declarations to hidden/{stdlib,unistd}.h to consolidate and remove duplication. ok tb@ otto@ deraadt@
2023-05-17  Use crypto_internal.h's CTASSERT()  (tb)  2 files, -8/+5
Now that this macro is available in a header, let's use that version rather than copies in several .c files. discussed with jsing
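A compile-time assertion macro of this kind typically reads as follows (sketch; the exact definition in crypto_internal.h may differ): the array gets a negative size, and compilation fails, whenever the condition is false.

    #define CTASSERT(x) \
        extern char _ctassert[(x) ? 1 : -1] __attribute__((__unused__))

    CTASSERT(sizeof(int) == 4);    /* compiles only if int is 4 bytes */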
2023-05-17  Clean up alignment handling for SHA-512.  (jsing)  2 files, -81/+95
All assembly implementations are required to perform their own alignment handling. In the case of the C implementation, on strict alignment platforms, unaligned data will be copied into an aligned buffer. However, most platforms then perform byte-by-byte reads (via the PULL64 macros).

Instead, remove SHA512_BLOCK_CAN_MANAGE_UNALIGNED_DATA and move alignment handling into sha512_block_data_order(): if the data is aligned, simply perform 64 bit loads and do endian conversion via be64toh(); if the data is unaligned, use memcpy() and be64toh() (in the form of crypto_load_be64toh()).

Overall this reduces complexity and can improve performance (on aarch64 we get a ~10% performance gain with aligned input and a ~1-2% gain on armv7), while the same movq/bswapq is generated for amd64 and movl/bswapl for i386. ok tb@
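A stand-in for crypto_load_be64toh() showing the approach (sketch, assuming be64toh() from the system headers): memcpy() keeps unaligned reads safe, and compilers fold the pair into a single load plus byte swap.

    #include <endian.h>    /* <sys/endian.h> on the BSDs */
    #include <stdint.h>
    #include <string.h>

    static inline uint64_t
    load_be64(const void *p)
    {
        uint64_t v;

        memcpy(&v, p, sizeof(v));    /* safe for unaligned input */
        return be64toh(v);           /* big-endian to host order */
    }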
2023-05-16  ecdhtest: check malloc() return values  (tb)  1 file, -4/+7
From Ilya Chipitsine
2023-05-16  add missing pointer invalidation  (jcs)  1 file, -1/+2
ok tb
2023-05-16  Clean up SHA-512 input handling and round macros.  (jsing)  1 file, -47/+49
Avoid reach around and initialisation outside of the macro, cleaning up the call sites to remove the initialisation. Use a T2 variable to more closely follow the documented algorithm and remove the gorgeous compound statement X = Y += A + B + C. There is no change to the clang generated assembly on aarch64. ok tb@
2023-05-14  Rename arguments of X509_STORE_CTX_init()  (tb)  1 file, -5/+5
It is highly confusing to call the list of untrusted certs 'chain' when you're later going to call X509_STORE_CTX_get0_chain() to get a completely unrelated chain built by the verifier. Other X509_STORE_CTX APIs call this list of certs 'untrusted', so go with that. At the same time, rename x509 to leaf, which is more explicit. suggested by/ok jsing
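After the rename, the prototype reads roughly as follows (sketch based on the message above; see x509_vfy.h for the authoritative declaration):

    #include <openssl/x509_vfy.h>

    int X509_STORE_CTX_init(X509_STORE_CTX *ctx, X509_STORE *store,
        X509 *leaf, STACK_OF(X509) *untrusted);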
2023-05-14  Fix X509error() and X509V3error()  (tb)  1 file, -6/+11
When v3err.c was merged into x509_err.c nearly three years ago, it was overlooked that the code needed two distinct pairs of ERR_FUNC/ERR_REASON, one for ERR_LIB_X509 and one for ERR_LIB_X509V3. The result is that the reason strings for the X509_R_* codes would be overwritten by the ones for X509V3_R_* with the same value, while the reason strings for all X509V3_R_* would be left undefined. Fix this with an #undef/#define dance for ERR_LIB_X509V3 once we no longer need the ERR_FUNC/ERR_REASON pair for ERR_LIB_X509. reported by job ok jsing
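The dance can be sketched like this (simplified illustration, not the actual x509_err.c):

    #include <openssl/err.h>

    /* First the X509 tables are built with ERR_LIB_X509 ... */
    #define ERR_FUNC(func)        ERR_PACK(ERR_LIB_X509, func, 0)
    #define ERR_REASON(reason)    ERR_PACK(ERR_LIB_X509, 0, reason)

    /* ... X509_str_reasons[] goes here ... */

    /* ... then both macros are switched over for the X509V3 tables. */
    #undef ERR_FUNC
    #undef ERR_REASON
    #define ERR_FUNC(func)        ERR_PACK(ERR_LIB_X509V3, func, 0)
    #define ERR_REASON(reason)    ERR_PACK(ERR_LIB_X509V3, 0, reason)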
2023-05-14  Send the linebuffer BIO to the attic  (tb)  1 file, -377/+0
*) On VMS, stdout may very well lead to a file that is written to in a record-oriented fashion. That means that every write() will write a separate record, which will be read separately by the programs trying to read from it. This can be very confusing. The solution is to put a BIO filter in the way that will buffer text until a linefeed is reached, and then write everything a line at a time, so every record written will be an actual line, not chunks of lines and not (usually doesn't happen, but I've seen it once) several lines in one record. BIO_f_linebuffer() is the answer. Currently, it's a VMS-only method, because that's where it has been tested well enough. [Richard Levitte]

Yeah, no, we don't care about any of this and haven't compiled this file since forever. Looks like tedu's chainsaw got blunt at some point...
2023-05-14  Fix another mandoc -Tlint warning  (tb)  1 file, -3/+5
With this the only -Tlint warnings are about Xr to undocumented functions: EVP_CIPHER_CTX_copy, EVP_CIPHER_CTX_get_cipher_data, X509V3_EXT_get_nid.
2023-05-14  Rephrase a sentence slightly to appease mandoc -Tlint  (tb)  1 file, -3/+5
2023-05-14  Fix Xr as BN_is_prime(3) is in the attic  (tb)  1 file, -3/+3
2023-05-14  Zap trailing comma  (tb)  1 file, -2/+2
2023-05-14  X509_policy_tree_level_count(3) is gone  (tb)  1 file, -3/+2
2023-05-14  add missing #include <string.h>; ok tb@  (op)  8 files, -8/+18
2023-05-13  Assert that test->want != NULL at this point  (tb)  1 file, -1/+3
Should make coverity happier
2023-05-12  Bob points out that one error should be an X509V3error()  (tb)  1 file, -2/+2
2023-05-12  x509_utl.c: fix some style nits.  (tb)  1 file, -4/+3
2023-05-12  Rewrite string_to_hex() and hex_to_string() using CBB/CBS  (tb)  1 file, -70/+124
These helpers used to contain messy pointer bashing, some of it with weird logic for NUL termination. This can be written more safely and cleanly using CBB/CBS, so do that. The result is nearly but not entirely identical to code used elsewhere due to some strange semantics. Apart from errors pushed on the stack due to out-of-memory conditions, care was taken to preserve error codes. ok jsing
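The CBB style makes the encoding direction almost mechanical; a hypothetical helper (not the actual x509_utl.c code) producing the usual "AB:CD:EF" format:

    #include <stddef.h>
    #include <stdint.h>

    #include "bytestring.h"    /* LibreSSL-internal CBB/CBS API */

    static int
    hex_encode(CBB *cbb, const uint8_t *data, size_t len)
    {
        const char hex[] = "0123456789ABCDEF";
        size_t i;

        for (i = 0; i < len; i++) {
            if (i > 0 && !CBB_add_u8(cbb, ':'))
                return 0;
            if (!CBB_add_u8(cbb, hex[data[i] >> 4]) ||
                !CBB_add_u8(cbb, hex[data[i] & 0x0f]))
                return 0;
        }
        /* NUL terminate so the result is usable as a C string. */
        return CBB_add_u8(cbb, '\0');
    }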
2023-05-12  asn1oct: add a couple more tests  (tb)  1 file, -1/+10
2023-05-12  Reduce the number of SHA-512 C implementations from three to one.  (jsing)  1 file, -134/+1
We currently have three C implementations for SHA-512: a version that is optimised for CPUs with minimal registers (specifically i386), a regular implementation and a semi-unrolled implementation.

Testing on a ~15 year old i386 CPU, the fastest version is actually the semi-unrolled version (not to mention that we still currently have an i586 assembly implementation that is used on i386 instead...). More decent architectures do not seem to care between the regular and semi-unrolled version, presumably since they are effectively doing the same thing in hardware during execution. Remove all except the semi-unrolled version. ok tb@
2023-05-12  asn1oct: minor tweak in error message  (tb)  1 file, -3/+3
2023-05-12  Add regress coverage for {s2i,i2s}_ASN1_OCTET_STRING  (tb)  2 files, -1/+271
2023-05-12  primility -> primality  (jsg)  1 file, -3/+3
ok tb@
2023-05-12  Be a bit more precise on how s2i_ASN1_OCTET_STRING handles colons  (tb)  1 file, -5/+6
2023-05-11  tls_verify.c: give up on variable alignment in this file  (tb)  1 file, -6/+6
The previous commit resulted in misalignment, which impacts my OCD worse than no alignment at all. Alignment wasn't consistently done in this file anyway. op tells me it won't affect current efforts in reducing the diff.
2023-05-11  Document recent changes in primality testing  (tb)  1 file, -8/+23
With input from beck and jsing
2023-05-10  Use is_pseudoprime instead of is_prime in bn_bpsw.c  (tb)  1 file, -30/+33
This is more accurate and improves readability a bit. Apart from a comment tweak this is sed + knfmt (which resulted in four wrapped lines). Discussed with beck and jsing
2023-05-10  switch two ASN1_STRING_data() to ASN1_STRING_get0_data()  (op)  1 file, -5/+5
and while here mark the data as const. This diff is actually from gilles@, in OpenSMTPD-portable's bundled libtls. ok tb@, jsing@
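For context, the two accessors differ in constness (public prototypes as in asn1.h):

    #include <openssl/asn1.h>

    /* Old accessor: mutable pointer, deprecated. */
    unsigned char *ASN1_STRING_data(ASN1_STRING *x);

    /* Replacement: const pointer, hence "mark as const data" above. */
    const unsigned char *ASN1_STRING_get0_data(const ASN1_STRING *x);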
2023-05-10  Add Miller-Rabin test for random bases to BPSW  (tb)  3 files, -33/+130
The behavior of the BPSW primality test for numbers > 2^64 is not very well understood. While there is no known composite that passes the test, there are heuristics that indicate that there are likely infinitely many. Therefore it seems appropriate to harden the test. Having a settable number of MR rounds before doing a version of BPSW is also the approach taken by Go's primality check in math/big.

This adds a new implementation of the old MR test that runs before running the strong Lucas test. I like to imagine that it's slightly cleaner code. We're effectively at about twice the cost of what we had a year ago. In addition, it adds some non-determinism in case there actually are false positives for the BPSW test.

The implementation is straightforward. It could easily be tweaked to use the additional gcds in the "enhanced" MR test of FIPS 186-5, but as long as we are only going to throw away the additional info, that's not worth much.

This is a first step towards incorporating some of the considerations in "A performant misuse-resistant API for Primality Testing" by Massimo and Paterson. Further work will happen in tree. In particular, there are plans to crank the number of Miller-Rabin tests considerably so as to have a guaranteed baseline. The manual will be updated shortly.

positive feedback beck
ok jsing
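One Miller-Rabin round with a random base can be sketched as follows (illustration only, assuming n odd and n > 3; the in-tree bn_bpsw.c code is structured differently):

    #include <openssl/bn.h>

    /* Returns 1 if n passes this round, 0 if composite, -1 on error. */
    static int
    mr_round(const BIGNUM *n, BN_CTX *ctx)
    {
        BIGNUM *a, *d, *n1, *x;
        int i, ret = -1, s = 0;

        BN_CTX_start(ctx);
        if ((a = BN_CTX_get(ctx)) == NULL ||
            (d = BN_CTX_get(ctx)) == NULL ||
            (n1 = BN_CTX_get(ctx)) == NULL ||
            (x = BN_CTX_get(ctx)) == NULL)
            goto done;

        /* Write n - 1 = 2^s * d with d odd. */
        if (!BN_sub(n1, n, BN_value_one()) || BN_copy(d, n1) == NULL)
            goto done;
        while (!BN_is_odd(d)) {
            if (!BN_rshift1(d, d))
                goto done;
            s++;
        }

        /* Pick a random base a in [2, n - 2]. */
        if (BN_copy(x, n1) == NULL || !BN_sub_word(x, 2))
            goto done;
        if (!BN_rand_range(a, x) || !BN_add_word(a, 2))
            goto done;

        /* Pass if a^d == 1 or a^(2^i d) == -1 (mod n), 0 <= i < s. */
        if (!BN_mod_exp(x, a, d, n, ctx))
            goto done;
        if (BN_is_one(x) || BN_cmp(x, n1) == 0) {
            ret = 1;
            goto done;
        }
        for (i = 1; i < s; i++) {
            if (!BN_mod_sqr(x, x, n, ctx))
                goto done;
            if (BN_cmp(x, n1) == 0) {
                ret = 1;
                goto done;
            }
        }
        ret = 0;    /* n is definitely composite */

    done:
        BN_CTX_end(ctx);
        return ret;
    }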
2023-05-10  As mmap(2) is no longer a LOCK syscall, do away with the extra  (otto)  1 file, -23/+1
unlock-lock dance; it serves no real purpose any more. Confirmed by a small performance increase in tests. ok tb@
2023-05-09  Make malloc tests that set flags more robust against the user also  (otto)  2 files, -15/+19
having flags set.
2023-05-09  Make failure mode of EVP_AEAD_CTX_new() more explicit  (tb)  1 file, -4/+9
Pointed out and ok by dlg
2023-05-09  Add regress coverage for -1 modulus as well.  (tb)  1 file, -25/+38
2023-05-09  bn_exp: also special case -1 modulus  (tb)  1 file, -6/+6
Anything taken to the power of 0 is 1, and then reduced mod 1 or mod -1 it will be 0. Whether "anything" includes 0 is a matter of convention, but it should not depend on the sign of the modulus... Reported by Guido Vranken ok jsing (who had the same diff)
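With this change, the behavior can be demonstrated as follows (standalone example program, not part of the commit):

    #include <stdio.h>

    #include <openssl/bn.h>
    #include <openssl/crypto.h>

    int
    main(void)
    {
        BN_CTX *ctx;
        BIGNUM *r, *a, *p, *m;
        char *s;

        if ((ctx = BN_CTX_new()) == NULL || (r = BN_new()) == NULL ||
            (a = BN_new()) == NULL || (p = BN_new()) == NULL ||
            (m = BN_new()) == NULL)
            return 1;

        BN_set_word(a, 42);       /* arbitrary base */
        BN_zero(p);               /* exponent 0 */
        BN_one(m);
        BN_set_negative(m, 1);    /* modulus -1 */

        if (!BN_mod_exp(r, a, p, m, ctx))
            return 1;
        if ((s = BN_bn2dec(r)) == NULL)
            return 1;
        printf("42^0 mod -1 = %s\n", s);    /* prints 0 */

        OPENSSL_free(s);
        BN_free(r);
        BN_free(a);
        BN_free(p);
        BN_free(m);
        BN_CTX_free(ctx);
        return 0;
    }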
2023-05-09  Rewrite BN_bn2hex() using CBB/CBS.  (jsing)  1 file, -25/+35
ok tb@
2023-05-09  Rewrite BN_bn2dec() using CBB/CBS.  (jsing)  1 file, -63/+61
ok tb@