
Secure or Compliant, Pick One

I’m on record as stating that FIPS 140-2 validated software is necessarily less secure than its equivalent unvalidated implementation, all other things being equal.  There are several factors conspiring to force this unfortunate outcome:

1) Exposure:  the culture of non-disclosure and non-transparency in the CMVP means that only a handful of people ever even have the opportunity to really assess the quality of the software.  Even when that software is derived more or less directly from OpenSSL or other open source software, as is often the case, outsiders generally cannot know what open source software is used in a given validated product.

2) Suspended animation:  It can easily take a year to obtain a validation, from the time the test lab is first engaged until the formal validation award.  During that time the submitted software is unchanged, whereas the equivalent unvalidated and accessible version has had significant real-world use and review that may well have resulted in the discovery of vulnerabilities.  Your freshly validated cryptography is going to deploy into an environment some 12 months further along in the perpetual arms race between good and evil.

3) Superficiality:  the actual validation analysis and testing is pretty superficial.  In multiple OpenSSL based validations I’ve personally participated in, the CMVP testing has never revealed any flaws in the previously existing algorithm implementations.  The one cryptographic flaw that was discovered in those validated products (not by the CMVP, incidentally) was in code that was written specifically for the validation (the PRNG).

4) Head-in-sand incentives:  this is the dollars and cents issue that really matters.  There are huge disincentives to fixing (or discovering) bugs and vulnerabilities in already validated software.  If a vulnerability is found it is for all practical purposes not fixable — been there done that with the (effective) revocation of validation #733[1].  That validation was for an open source derivative of OpenSSL publicly advertised and disclosed as such from the beginning.  When we were privately informed of the (very minor) vulnerability we started the process of trying to negotiate approval of the fix with the CMVP.  The patch was prepared the same day that we learned of the vulnerability.  Several weeks later we were still trying to figure out what hoops needed to be jumped through with the CMVP bureaucracy.  Since the vulnerability was in open source our options for suppressing its existence were limited.  When our internally agreed time limit expired, we announced.  The CMVP almost immediately revoked [2] the validation.  This occurred after at least several commercial vendors were well along with plans to ship products based on the validated module.

I know of a number of other proprietary validations based on the same software.  There were no other revocations that I am aware of.  Those vendors could have rapidly jumped the bureaucratic wickets and rushed updated validated software to the field.  Or they simply could have done nothing, as the CMVP is generally unaware of the pedigree of the software they validate.

Now imagine you’re a vendor wishing to leverage one of the existing open source based validations in your proprietary product, and you know about this “revocation” incident.  Hmmm … what to do?  Use the existing validation and run the risk of being abruptly cut off at the knees by a revocation?  Or shell out for your own validation of the same software but with no known obvious association to the highly visible open source validation?  It should be no surprise that in spite of the additional costs, in both time and money, many vendors are choosing the latter option.  I call those “private label” validations, where the software is only trivially modified or even precisely identical to that of the open source validation, but it is revalidated under another name.  I’ve been hired to conduct a number of such private label validations, enough to notice an interesting pattern — the very similar (or even identical!) software is generally validated in less time and with less hassle than the same software identified as open source.  Those multiple parallel validations of very similar code have also been an unintended controlled experiment that has demonstrated that the validation process is highly subjective.

We originally intended the OpenSSL FIPS Object Module validations to be directly utilized by software vendors.  Some do, but the biggest and unintended benefit turns out to be the ready-made example for private label validations.  Take the code and validation documentation, change the name from OpenSSL to <your_catchy_product_name_here>, submit it as a proprietary validation comfortable in the knowledge that any connection to OpenSSL will remain obscured in the shadows.  And if any vulnerabilities are disclosed in the open source world, you have a spectrum of options from the completely irresponsible all the way through to actually correcting the vulnerability, an action you can take without any time pressure.

Now imagine you’re an end user who has the option of using FIPS validated software or not (i.e., you’re not in an environment where FIPS validation is mandated).  Not much of a decision to make: the non-validated equivalent is clearly the more secure in any real-world sense of defense against compromise or attack (assuming all other things equal of course, such as the choice of strong crypto algorithms).  Just pick the current open source equivalent of whatever validated product you would have used (OpenSSL 0.9.8k instead of the FIPS Object Module v1.2, say).  It will have the same (or better, if bug fixes have been applied) crypto implementations.  Any vulnerabilities subsequently discovered will be fixed and announced in a responsible time frame.  The software will be more thoroughly reviewed and analyzed.

Update 2013-09-23: Recent events have shown, with a vengeance, that the situation is far more dire than the earlier essay above presumes. One of the random number generators (Dual EC DRBG) in a standard mandated for FIPS 140-2 (SP800-90A) is now known to be defective by design. For FIPS 140-2 purposes, exclusive use of the compromised points is mandated.

That point is worth emphasizing: SP800-90A itself allows implementers either to use a given set of compromised points or to generate their own. What almost all commentators have missed, hidden away in the small print (and subsequently confirmed by specific query), is that FIPS 140-2 requires use of the compromised points. Several official statements, including the NIST recommendation, fail to mention this, leaving the impression that alternative uncompromised points can be generated and used.
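Why fixed points amount to a backdoor can be sketched with a toy model. The following is purely illustrative: it uses a tiny textbook curve (y² = x³ + 2x + 2 over GF(17), group order 19) instead of NIST P-256, and it omits the output truncation of the real Dual EC DRBG, which forces a real attacker to brute-force a few missing bits. The point is the algebra: whoever chose Q knowing the scalar relating it to P can recover the generator's internal state from one output.

```python
# Toy Dual EC DRBG backdoor demonstration.  All parameters are
# illustrative (textbook curve, no truncation), NOT the real NIST ones.

p, a, n = 17, 2, 19          # field prime, curve coefficient a, group order

def ec_add(P1, P2):
    """Add two points on y^2 = x^3 + 2x + 2 over GF(17); None = infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, pt):
    """Scalar multiplication by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

P = (5, 1)                   # public base point
e = 7                        # the designer's secret: Q was chosen as e*P
Q = ec_mul(e, P)
d = pow(e, -1, n)            # trapdoor: d*Q = P

def dual_ec_step(s):
    """One DRBG step: next state is x(s*P); output derives from s*Q."""
    s_next = ec_mul(s, P)[0]
    out = ec_mul(s, Q)       # real Dual EC outputs a truncated x(s*Q)
    return s_next, out

state = 9                    # secret seed
state, out1 = dual_ec_step(state)
_, out2 = dual_ec_step(state)

# Attacker sees out1 and knows the trapdoor d.  Then
#   d * out1 = d * (s*Q) = s * (d*Q) = s*P,
# whose x-coordinate IS the next internal state:
recovered = ec_mul(d, out1)[0]
_, predicted = dual_ec_step(recovered)
assert predicted == out2     # all future output is now predictable
```

With self-generated points nobody knows the scalar e, so no such trapdoor d exists; mandating the fixed points forecloses exactly that defense.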

There are only two inferences to be drawn regarding NIST CAVP/CMVP complicity: either they (the bureaucracy responsible for regulating the use of cryptography for the U.S. government) were oblivious to the backdoor vulnerability, or they knowingly participated in enforcing its use. Neither possibility is comforting.

I was part of the team that implemented all four SP800-90 DRBGs in the OpenSSL FIPS Object Module 2.0. That implementation was requested and funded by a sponsor (as were other algorithm implementations and 70+ platforms). My colleagues were aware at the time of the dubious reputation of Dual EC DRBG. I was the one who argued for including it in OpenSSL anyway, reasoning that it was an open official standard and OpenSSL is a comprehensive cryptographic library that already implements some known weak algorithms. I thought we were just “checking the box” in implementing all of SP800-90; we didn’t make Dual EC DRBG a default anywhere and I didn’t think anyone would be stupid enough to actually use it in a real-world context (FIPS 140-2 has many elements not relevant in the real world). Well, RSA proved me wrong by implementing[3] it by default in most of their product lines. As with NIST, either incompetence or complicity is indicated.

The original conclusion of this essay is dramatically underscored by the Snowden revelations: if you care about actual security do not use FIPS 140-2 validated cryptography. Or proprietary commercial cryptography either; the restrictions of FIPS 140-2 make it much harder (or impossible) to do cryptography securely, but we now know that some non-validated commercial cryptography has been compromised. I suspect time will show that RSA wasn’t the only compromised vendor. OpenSSL could conceivably have subtle vulnerabilities in the source code (it has accidental bugs for sure), but backdoors are much harder to sneak into open source software. The OpenSSL libraries can be compiled from source rather easily on most Linux/Unix[4] platforms, and copied over the bundled binary libraries supplied by the OS distributor.
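That last suggestion can be sketched as follows. This is an illustrative recipe only: the version number, download URL, and install prefix are assumptions typical of the era, not a definitive procedure, and installing to a separate prefix (rather than literally overwriting the distributor's files) is the safer variant of the same idea.

```shell
# Illustrative sketch: build OpenSSL from audited source and use it in
# place of the OS-bundled binaries.  Version and paths are examples.
curl -O https://www.openssl.org/source/openssl-1.0.1e.tar.gz
tar xzf openssl-1.0.1e.tar.gz
cd openssl-1.0.1e

# Build shared libraries into a prefix separate from the distro copy.
./config shared --prefix=/usr/local/ssl
make
make test            # run the bundled self-tests before trusting it
sudo make install

# Point dynamically linked applications at the freshly built libraries
# instead of the distributor's bundled ones.
export LD_LIBRARY_PATH=/usr/local/ssl/lib:$LD_LIBRARY_PATH
/usr/local/ssl/bin/openssl version
```

Whether you overwrite the bundled libraries or merely shadow them via the loader path, the effect is the same: the cryptography in use is the code you compiled and can inspect, not an opaque vendor binary.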


[Updated 2013-11-07 to note use of compromised points is mandatory]

[Updated 2015-03-12 to reference a related blog entry]

1 (footnote added 2013-12-07) Mitigation of the Lucky 13 vulnerability is a telling example. An effective mitigation was developed for OpenSSL proper, but because we are not allowed to make even the most trivial of modifications to the FIPS module, that mitigation could not be effected for the “FIPS capable” OpenSSL when FIPS mode is enabled.

2 Technically speaking they only disallowed the use of the PRNG, but since most non-trivial applications need an RNG that amounted to an effective revocation.

3 While the RSA cryptography originates from and is closely related to OpenSSL, their Dual EC DRBG implementation was done prior to and separately from the OpenSSL one.

4 If you’re using Microsoft Windows, cryptography is not your biggest security worry.

Opinions expressed herein are not necessarily those of Veridical Systems, OpenSSL, DoD, the author's evil twin Skippy, or anyone else possibly including the author himself.