Speeds and Feeds: Personal Musings of Steve Marquess (http://veridicalsystems.com/blog)

Of Money, Responsibility, and Pride
Sat, 12 Apr 2014

Fate has made me the “money guy” for OpenSSL so I’m going to talk about that for a bit.

As has been well reported in the news of late, the OpenSSL Software Foundation (OSF) is a legal entity created to hustle money in support of OpenSSL. By “hustle” I mean exactly that: raising revenue by any and all means[1]. OSF typically receives about US$2000 a year in outright donations and sells commercial software support contracts[2] and does both hourly rate and fixed price “work-for-hire” consulting as shown on the OSF web site. The media have noted that in the five years since it was created OSF has never taken in over $1 million in gross revenues annually.

Thanks to that publicity there has been an outpouring of grassroots support from the OpenSSL user community, roughly two hundred donations this past week[3] along with many messages of support and encouragement[4]. Most were for $5 or $10 and, judging from the E-mail addresses and names, were from all around the world. I haven’t finished entering all of them to get an exact total, but all those donations together come to about US$9,000. Even if those donations continue to arrive at the same rate indefinitely (they won’t), and even though every penny of those funds goes directly to OpenSSL team members, it is nowhere near enough to properly sustain the manpower levels needed to support such a complex and critical software product. While OpenSSL does “belong to the people” it is neither realistic nor appropriate to expect that a few hundred, or even a few thousand, individuals provide all the financial support. The ones who should be contributing real resources are the commercial companies[5] and governments[6] who use OpenSSL extensively and take it for granted.

Lacking any other significant source of revenue, we get most of ours the hard way: we earn it via commercial “work-for-hire” contracts[7]. The customer wants something related to OpenSSL, realizes that the people who wrote it are highly qualified to do it, and hires one or more of us to make it happen. For the OpenSSL team members without any other employment or day job, such contract work is their only non-trivial source of income.

Which gets me to the main point I want to make in this essay, about responsibility and pride. You can see right on the OSF web site that our consulting rate is US$250 an hour. Two hundred fifty dollars an hour; not high for a lawyer or doctor or even many professional tech jobs, but a living wage for sure. “These guys must be sitting pretty flush, eh?” Uh, no. “Ah, overpriced then, no takers.” Wrong again; I could sell more hours at that rate if only there were more hours to sell. At the moment OSF has about a hundred grand in open contracts — these are executed contracts with purchase orders, not just contracts in discussion or negotiation — that aren’t being worked because no one in this very small “workforce” of qualified OpenSSL developers is available to work on them. Even though they could make good money moonlighting they tend to their other responsibilities first: day job, family, OpenSSL itself. I’ve had prospective clients call me and beg for Stephen Henson to look at their problem. I have standing instructions from one client to please let them know if Andy Polyakov ever has any free time. I’ve had clients ask “would more money help?” Some queries I just turn down right away with “sorry, we’re unable to help”.

Even when we can staff a commercial contract, it can’t be rushed or skimped; these guys are just too used to taking pride in their work no matter what it is. Having worked for decades in industry and government I know that “good enough” and “quick and dirty” are the norm, so for some of the contract work I’ve tried encouraging a pragmatic “get ‘er done” attitude. They won’t do it; nothing less than the very best work they are capable of will do.

The team member without conventional full-time outside employment is Dr. Stephen Henson. He’s a pretty private person[8] and he’ll probably be unhappy with me for what I’m writing here (sorry Steve). The creation of OSF was largely inspired by a revelation that was shocking to me at the time. I had been working with some of the OpenSSL team for several years when I learned how much income Steve was receiving (then as now he had no conventional employment). I was stunned to realize that my income, as one consultant of hundreds in one program of thousands in the U.S. military/industrial complex, was over five times his. Five. Times. 5X! This for a world class talent carrying an enormous burden, and when it comes to coding I’m not qualified to carry his keyboard. I had naively assumed that someone with his talent and experience would have a commensurate income, or at the very least be outearning run-of-the-mill hack programmers and consultants like me. Now that OSF is well established and has a growing roster of clients we have gone a long way toward redressing that situation, but he could pull in a lot more commercial revenue if he didn’t steadfastly refuse to neglect OpenSSL.

These guys don’t work on OpenSSL for money. They don’t do it for fame (who outside of geek circles ever heard of them or OpenSSL until “heartbleed” hit the news?). They do it out of pride in craftsmanship[9] and the responsibility for something they believe in.

I stand in awe of their talent and dedication, that of Stephen Henson in particular. It takes nerves of steel to work for many years on hundreds of thousands of lines of very complex code, with every line of code you touch visible to the world, knowing that code is used by banks, firewalls, weapons systems, web sites, smart phones, industry, government, everywhere. Knowing that you’ll be ignored and unappreciated until something goes wrong. The combination of the personality to handle that kind of pressure with the relevant technical skills and experience to effectively work on such software is a rare commodity, and those who have it are likely to already be a valued, well-rewarded, and jealously guarded resource of some company or worthy cause. For those reasons OpenSSL will always be undermanned, but the present situation can and should be improved.

There should be at least a half-dozen full-time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work. If you’re a corporate or government decision maker in a position to do something about it, give it some thought. Please. I’m getting old and weary and I’d like to retire someday.

_________

1 Any legal and moral means. Geeze, give me a break…

2 I said legal and moral; shameless still goes so here’s a plug for one of the most effective ways your corporation can not only support OpenSSL but also receive something of tangible value in return: a software support contract. We have a formal contract with the fine print that lawyers love, and your accounts payable people won’t be all flummoxed at the bizarre notion of giving money away as they’re used to paying for expensive commercial support contracts for proprietary software. Someday you may even encounter an issue with your mission critical use of OpenSSL that could benefit from direct and prompt attention from the people who wrote that code.

3 The accounting software into which each and every donation is manually entered doesn’t have an easy way of counting the number of transactions of a particular type.

4 One message in particular cheered me (and hopefully my colleagues) and I can’t resist quoting it here. It begins [edited for NSFW filters]: “Thank you … For doing something really f**king hard and making it free.”

5 I’m looking at you, Fortune 1000 companies. The ones who include OpenSSL in your firewall/appliance/cloud/financial/security products that you sell for profit, and/or who use it to secure your internal infrastructure and communications. The ones who don’t have to fund an in-house team of programmers to wrangle crypto code, and who then nag us for free consulting services when you can’t figure out how to use it. The ones who have never lifted a finger to contribute to the open source community that gave you this gift. You know who you are.

6 Multiple agencies of the U.S. Department of Defense (DoD) have provided substantial financial support over a decade for the OpenSSL FIPS Object Module series of open source based FIPS 140-2 validations, most recently DARPA. But, those validations essentially just distort and contort existing OpenSSL code to satisfy some peculiar and arbitrary requirements and do nothing to improve the overall quality of OpenSSL itself. Having consulted in that environment I know OpenSSL is very widely used throughout DoD, both directly and as repackaged by commercial vendors. Given the bazillions of dollars in DoD funding you’d think an investment in OpenSSL would be a no-brainer.

7 The commercial contracting work falls into four general categories:

  • Annual software support contracts, mentioned above. Realistically speaking we’re usually going to address the kind of problems reported under these contracts anyway (though perhaps not as quickly), so these provide the most benefit overall.
  • Adding/extending specific features of general interest, e.g. TLS 1.2, hardware specific optimizations. This kind of work is a win-win for everyone as the entire OpenSSL community typically benefits along with the sponsor of the work.
  • FIPS 140-2 validation related work. This is of benefit to a much smaller segment of the user community, and has significant outsourced costs. It also arguably has a negative impact on the OpenSSL code base and diverts scarce manpower from improving OpenSSL proper.
  • Consulting on issues unlikely to be of general interest, such as porting to specialized proprietary environments or assisting with customer modifications to OpenSSL.

With very few notable exceptions (Qualys, PSW Group) commercial contracts are tied to specific deliverables and do not fund work on fundamental maintenance and development activities like release management, code review and refactoring, performance and security, etc.

8 He really is the private sort, even (perhaps especially) when it comes to maudlin sentiments as expressed here. He also has to deal with a large volume of technical correspondence. So please don’t contact him directly without a really good reason. I will be happy to collate and forward on a reasonably timely basis a digest of comments sent c/o marquess@opensslfoundation.com.

9 “Hey wait a minute — didn’t those bozos just make a dumb sloppy mistake and break the internet?” That’s really a topic for another essay, but all non-trivial software has bugs (the Apple “goto fail” and Debian PRNG bug come to mind). Given the widespread use of OpenSSL over many years it still has an excellent track record. The question that has been asked repeatedly and not often answered is why did this bug take so long to find? Well consider that:

  • The code was written by someone with a proven track record who is a co-author of the heartbeat specification (RFC6520). It was reviewed by the OpenSSL team and no one spotted a problem.
  • The code was visible all along to the entire OpenSSL community and no one saw it.
  • OpenSSL is used by many multinational companies and major government agencies with huge resources who didn’t spot it (or at least did not report it, same difference).
  • Many have called this “the worst security bug ever”, which is debatable but it is a very serious vulnerability. There are many security researchers in the world who have found problems in OpenSSL and reviewed the code with a fine tooth comb, as shown by all the academic papers which have been written over the years and the security advisories relating to them. Finding this bug would have been a feather in the cap of any one of those security researchers.
  • Two years passed before Google with its impressive technical resources and talent (and shortly thereafter Codenomicon) found this issue.

So the mystery is not that a few overworked volunteers missed this bug; the mystery is why it hasn’t happened more often.
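For readers curious what the bug itself actually looked like: schematically, it was a length field supplied by the peer that was trusted without being checked against the amount of data actually received. The sketch below illustrates that class of flaw with invented names; it is not the real OpenSSL heartbeat code:

```c
#include <stdlib.h>
#include <string.h>

/* Invented illustration of the bug class, not actual OpenSSL code.
 * `claimed_len` arrives from the network inside the request itself;
 * `received_len` is how much payload was actually delivered. */
unsigned char *build_echo_response(const unsigned char *payload,
                                   size_t received_len,
                                   size_t claimed_len) {
    /* This is the kind of check that was missing.  Without it, the
     * memcpy below reads (claimed_len - received_len) bytes of
     * adjacent process memory -- keys, passwords, whatever happens
     * to be there -- and echoes them back to the requester. */
    if (claimed_len > received_len)
        return NULL;

    unsigned char *resp = malloc(claimed_len);
    if (resp != NULL)
        memcpy(resp, payload, claimed_len);
    return resp;
}
```

A two-line guard in hindsight; the point of the list above is how many well-resourced eyes failed to notice its absence for two years.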

The Immutability of FIPS
Fri, 28 Mar 2014

In addition to the problems with Dual EC DRBG that have now been well documented[1], it is apparent to many of us in the clear bright light of the Snowden revelations that quite a few things that were previously dismissed as mere ineptitude or accident may in fact be aspects of a carefully planned and executed “advanced persistent threat” (APT)[2]. A number of aspects of TLS like extended random come to mind, for instance. Also the recent silent omission of the RSA 4096 modulus size from FIPS 140-2 CAVP algorithm testing[3].

But, I think the biggest aspect of this entire APT thing is hiding in plain sight. I’m referring to the very existence of the FIPS 140-2 validation program. Matt Green once quipped that “FIPS is the answer to the question ‘how can we force all cryptographic software to be approved by a government committee?’” and that about sums it up.

A common feature of these various engineered exploits we’re discovering is that they are relatively fragile. The positioning of Dual EC, for instance, must have been very tedious and expensive in time and money, and not just the $10M payment to RSA which was just the end game in a much longer process of discovering and developing the backdoored algorithm and guiding the formation of the technical standards and policies to encourage its use. In the “real” world of software development code is constantly tweaked, improved, refined, extended. It would suck to spend years and millions carefully maneuvering a subtle vulnerability into mainstream products (or to discover and exploit a naturally occurring vulnerability) only to have it suddenly vanish with a routine minor software upgrade.

The single most distinguishing (and IMHO deplorable) feature of FIPS 140-2 validation is the almost total prohibition of changes to validated modules. I call it the “ready, fire, aim” approach to software development: first there is a mad scramble to write your code and push it through the formal testing (which as we well know is shallow in terms of real-world implementation issues[4]), as time is always a pressing concern when you have to wait 6, 9, or even 13(!) months for government action on the submission. Even absent rigged and constantly shifting standards that is a recipe for bugs. Then, once submitted you can’t change it[5] even as the inevitable flaws are discovered. In the OpenSSL FIPS module for instance there are a number of vulnerabilities such as the notorious “Lucky 13” and (recently) CVE-2014-0076 that we are not permitted to mitigate. That’s why I’ve long been on record as saying that “a validated module is necessarily less secure than its unvalidated equivalent”, e.g. the OpenSSL FIPS module versus stock OpenSSL.

That, I think, perhaps even more than rigged standards like Dual EC DRBG, is the real impact of the cryptographic module validation program. It severely inhibits the naturally occurring process of evolutionary improvement that would otherwise limit the utility of consciously exploited vulnerabilities.

The presence of Dual EC DRBG in the OpenSSL FIPS Object Module is a contemporary case in point. Even though it is not enabled by default, and even though an inadvertent bug means that it can’t even be used without a minor code change or other workarounds, the mere presence of that executable code still represents a vulnerability of sorts from the APT perspective. Imagine if you will that you were an APT[2] agent responsible for maintaining the capability of accessing communications or data secured through Dual EC DRBG based cryptography[6]. Your ideal situation is Dual EC DRBG used silently and automatically, as was the case with RSA BSAFE until recently. That particular channel is now closing[7], but second best is having the Dual EC DRBG code already present in a latent form where it can be enabled with the lightest of touches. As an APT agent you already have access to many target systems via multiple means such as “QUANTUM INTERCEPT” style remote compromises and access to products at multiple points in the supply chain. You don’t want to install ransomware or steal credit card numbers; you want unobtrusive and persistent visibility into all electronic communications. You want to leave as little trace of that as possible, and the latent Dual EC DRBG implementation in the OpenSSL FIPS module aids discreet compromise. By only overwriting a few words of object code you can silently enable use of Dual EC[8], whether FIPS mode is actually enabled or not[9]. Do it in live memory and you have an essentially undetectable hack. In contrast, introducing the multiple kilobytes of object code that implements Dual EC would require a much heavier touch.

So, on a general software hygiene basis, and particularly if you want to frustrate that level of APT compromise, you don’t want the Dual EC object code present at all. That is why OSF is attempting to remove the Dual EC DRBG implementation entirely from the OpenSSL FIPS Object Module 2.0. That pending revision will be 2.0.6 and the requisite formal paperwork (“Maintenance Letter”) was submitted to the CMVP on January 20, 2014. It’s typical to wait two to three months for review of such submissions, and I hope to be updating this post soon to note a successful outcome. [update 2014-07-24]: This “change letter” update was finally approved on 2014-06-27, more than six months after submission. Unfortunately, with approval uncertain we had to proceed in the interim with testing of new platforms on the original code base that still included Dual EC DRBG, and that change letter for revision 2.0.7 was approved on 2014-07-03. So Dual EC DRBG was gone and then back in the blink of an eye. We will attempt to remove it again for the next revision, 2.0.8.

[updated 2014-03-29]: I should clarify the distinction between the two different hacks discussed here: enabling Dual EC DRBG and bypassing the POST integrity test. A hack in live memory would most likely take the form of tweaking the run-time variables that determine the DRBG selection; the POST could be ignored if it had already been performed, or else the hack could just preset the global static variables that indicate the successful completion of a POST. A hack on the executable image on disk, i.e. libcrypto, could involve bypassing the POST and/or integrity test as suggested in footnote 9.
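The two patch targets just described can be pictured with a grossly simplified sketch. Every name here is invented for illustration and does not correspond to actual OpenSSL FIPS module symbols; the point is only that when the decision lives in a couple of writable words of memory, an attacker who can flip those words silently changes which DRBG is used and whether the POST is ever consulted:

```c
/* Invented illustration -- NOT actual OpenSSL FIPS module code.
 * Two file-scope globals stand in for the run-time state described
 * above: a "POST already passed" flag and a DRBG selector. */

#define DRBG_CTR     1   /* an ordinary default DRBG */
#define DRBG_DUAL_EC 2   /* the backdoored algorithm: present but latent */

int post_completed = 0;     /* set once the power-on self test passes */
int drbg_type = DRBG_CTR;   /* which DRBG instantiation will select */

static int run_post(void) {
    /* ...elaborate self-tests elided... */
    post_completed = 1;
    return 1;
}

int drbg_instantiate(void) {
    if (!post_completed && !run_post())
        return 0;           /* POST failure blocks all crypto */
    return drbg_type;       /* a live-memory patch that presets
                               post_completed and rewrites drbg_type
                               gets Dual EC with no new code added */
}
```

Overwriting two machine words (`post_completed = 1`, `drbg_type = DRBG_DUAL_EC`) is all it takes; nothing in the executable code changes, which is why a data-only patch in live memory evades an integrity check computed over the object code.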

[updated 2016-01-29]: Add CVE-2016-0701 to the list of vulnerabilities we’re forbidden to address in the FIPS module. Fortunately as a practical matter this vulnerability will only be an issue for the most obscure use cases; i.e. direct use of libcrypto and reuse of keys and use of affected DH parameters and FIPS mode enabled.

_________

1 On the Practical Exploitability of Dual EC in TLS Implementations. This study examines actual Dual EC based TLS implementations, showing the ease of exploitation by anyone possessing the “up-my-sleeve” secret numbers. It does not address exploitation of other types of Dual EC based cryptography.

2 I’m trying to be neutral in the use of this term. There are two separate issues here, one being “is it right/appropriate/moral/prudent that <insert your nation-state APT agent of choice here> spy on <insert your target of choice here>?”. The other separate issue, assuming your answer to the first is “yes”, becomes “what are the implications of massive subversion of widely used technical standards and infrastructure?”. This discussion addresses the second issue and I attempt to avoid the first.

3 This is an odd one, not documented anywhere that I’m aware of (e.g., SP800-57 table 2 doesn’t exclude RSA key sizes above 3072). We noticed when researching the RSA algorithm test vectors for the new post-2013 SP800-131A “transition” requirements that the 4096 modulus size had disappeared from the set of possible sizes (along with the smallest sizes, which was expected). We inquired about this through a couple of test labs and the most coherent response we received was that 4096 was eliminated as “not practical”. That isn’t a very credible response on two counts: 1) OpenSSL has implemented 4096 and larger modulus sizes for a long time, and 2) the FIPS 140-2 validation testing process is rather notoriously unconcerned with “practicality”.

4 I’m referring to the Level 1 FIPS 140-2 validations which by design completely ignore issues like performance, buffer overruns, side-channel and other vulnerabilities, etc. Level 2 and higher do pay more attention to some security relevant issues, though still having the immutability problem.

5 Defenders of the status quo will correctly note that there is indeed a process for modifying already validated modules, and even a “fast track” for addressing urgent situations like security vulnerabilities. That process is even moderately feasible for some validations, the small ones encompassing only a few platforms (“Operational Environments”). For a larger validation, like #1747 with eighty platforms, the mandated retesting on each and every such platform, generally required even when study of the source code would clearly show no platform specific dependencies, isn’t even remotely feasible in either time or money. Anyone have roughly a million dollars to spare, and be willing to wait a couple of years for results?

6 Note this is much more than just TLS. Any RSA key pair generated using Dual EC is suspect, for instance encryption keys used to protect storage arrays (and obviously the data protected by those keys including unmounted disks), or hardware tokens where the seed record was generated with a toolkit using Dual EC (e.g. BSAFE).

7 Though I suspect it is closing very, very slowly. The presence or use of a cryptographic library often is not at all apparent to the end users of products that contain or reference it.

8 For proprietary closed source software this enabling can be done at any point in the product distribution process from initial vendor generation of executable code to final deployment on individual end systems. For open source software compiled by the end user, or for uncorrupted binary software distributed via a robust cryptographically secure means, this enabling must be effected against the deployed executable code. Such enabling can still be done relatively easily because the mechanism for run-time enabling of Dual EC is already present.

9 The integrity test mandated by FIPS 140-2 is worthless in preventing such a compromise (I’d even argue it is worthless period). The integrity test consists of an elaborate determination of a digest over the object code (executable code and read-only data) of the cryptographic module for comparison with a known good digest also embedded in the module. But you don’t even have to modify that embedded digest value, as on any machine architecture and for any compiler there will always be a conditional branch instruction at the point the fail/succeed determination is made. Depending on the specific architecture and compiler you just overwrite that conditional branch with a NOOP or an unconditional branch, a one word (or even one bit) mod.
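A minimal sketch (invented names, not the module's real routines) shows how narrow that defense is. However elaborate the digest computation, the final verdict funnels through the single comparison below, which compiles to exactly the one conditional branch instruction that the footnote describes overwriting:

```c
#include <string.h>

#define DIGEST_LEN 20   /* e.g. the length of an HMAC-SHA-1 digest */

/* Invented sketch of a FIPS 140-2 style integrity test: digest the
 * module's object code and compare against a known-good value
 * embedded in the module itself. */
int integrity_check(const unsigned char *computed_digest,
                    const unsigned char *embedded_digest) {
    /* The entire defense: one compare, one conditional branch.
     * Overwrite that branch with a NOOP (or an unconditional jump)
     * in the binary and the test always "passes" -- no need to
     * recompute or replace the embedded digest at all. */
    if (memcmp(computed_digest, embedded_digest, DIGEST_LEN) != 0)
        return 0;   /* mismatch: enter the FIPS error state */
    return 1;       /* module deemed unmodified */
}
```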

References for further reading:

http://blog.cryptographyengineering.com/2012/01/openssl-and-nss-are-fips-140-certified.html

http://blog.cryptographyengineering.com/2013/12/a-few-more-notes-on-nsa-random-number.html

http://nakedsecurity.sophos.com/2014/03/28/nist-to-review-standard-for-cryptographic-development-do-we-really-care/

http://www.mail-archive.com/cryptography%40metzdowd.com/msg06990.html

https://blog.bit9.com/2012/04/23/fips-compliance-may-actually-make-openssl-less-secure/

https://blogs.oracle.com/darren/entry/fips_140_2_actively_harmful

https://security.stackexchange.com/questions/34791/openssl-vs-fips-enabled-openssl

http://seclists.org/basics/2007/Jan/9

https://www.schneier.com/blog/archives/2010/01/fips_140-2_leve.html

http://comments.gmane.org/gmane.comp.encryption.general/19852

https://pomcor.com/2015/11/12/cryptographic-module-standards-at-a-crossroads-after-snowdens-revelations/

https://www.ida.org/idamedia/Corporate/Files/Publications/IDA_Documents/ITSD/2014/D-4991.ashx

http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/

Secure or Compliant, Pick One
Wed, 20 Mar 2013

I’m on record as stating that FIPS 140-2 validated software is necessarily less secure than its equivalent unvalidated implementation, all other things being equal. There are several factors conspiring to force this unfortunate outcome:

1) Exposure:  the culture of non-disclosure and non-transparency in the CMVP means that only a handful of people ever even have the opportunity to really assess the quality of the software.  Even when that software is derived more or less directly from OpenSSL or other open source software, as is often the case, outsiders generally cannot know what open source software is used in a given validated product.

2) Suspended animation:  It can easily take a year to obtain a validation, from the time the test lab is first engaged until the formal validation award.  During that time the submitted software is unchanged, whereas the equivalent unvalidated and accessible version has had significant real-world use and review that may well have resulted in the discovery of vulnerabilities.  Your freshly validated cryptography is going to deploy into an environment some 12 months further along in the perpetual arms race between good and evil.

3) Superficiality:  the actual validation analysis and testing is pretty superficial.  In multiple OpenSSL based validations I’ve personally participated in, the CMVP testing has never revealed any flaws in the previously existing algorithm implementations.  The one cryptographic flaw that was discovered in those validated products (not by the CMVP, incidentally) was in code that was written specifically for the validation (the PRNG).

4) Head-in-sand incentives:  this is the dollars and cents issue that really matters.  There are huge disincentives to fixing (or discovering) bugs and vulnerabilities in already validated software.  If a vulnerability is found it is for all practical purposes not fixable — been there done that with the (effective) revocation of validation #733[1].  That validation was for an open source derivative of OpenSSL publicly advertised and disclosed as such from the beginning.  When we were privately informed of the (very minor) vulnerability we started the process of trying to negotiate approval of the fix with the CMVP.  The patch was prepared the same day that we learned of the vulnerability.  Several weeks later we were still trying to figure out what hoops needed to be jumped through with the CMVP bureaucracy.  Since the vulnerability was in open source our options for suppressing its existence were limited.  When our internally agreed time limit expired, we announced.  The CMVP almost immediately revoked [2] the validation.  This occurred after at least several commercial vendors were well along with plans to ship products based on the validated module.

I know of a number of other proprietary validations based on the same software.  There were no other revocations that I am aware of.  Those vendors could have rapidly jumped the bureaucratic wickets and rushed updated validated software to the field.  Or they simply could have done nothing, as the CMVP is generally unaware of the pedigree of the software they validate.

Now imagine you’re a vendor wishing to leverage one of the existing open source based validations in your proprietary product, and you know about this “revocation” incident.  Hmmm … what to do?  Use the existing validation and run the risk of being abruptly cut off at the knees by a revocation?  Or shell out for your own validation of the same software but with no known obvious association to the highly visible open source validation?  It should be no surprise that in spite of the additional costs, in both time and money, many vendors are choosing the latter option.  I call those “private label” validations, where the software is only trivially modified or even precisely identical to that of the open source validation, but it is revalidated under another name.  I’ve been hired to conduct a number of such private label validations, enough to notice an interesting pattern — the very similar (or even identical!) software is generally validated in less time and with less hassle than the same software identified as open source.  Those multiple parallel validations of very similar code have also been an unintended controlled experiment that has demonstrated that the validation process is highly subjective.

We originally intended the OpenSSL FIPS Object Module validations to be directly utilized by software vendors.  Some do, but the biggest and unintended benefit turns out to be the ready-made example for private label validations.  Take the code and validation documentation, change the name from OpenSSL to <your_catchy_product_name_here>, submit it as a proprietary validation comfortable in the knowledge that any connection to OpenSSL will remain obscured in the shadows.  And if any vulnerabilities are disclosed in the open source world, you have a spectrum of options from the completely irresponsible all the way through to actually correcting the vulnerability, an action you can take without any time pressure.

Now imagine you’re an end user who has the option of using FIPS validated software or not (i.e., you’re not in an environment where FIPS validation is mandated).  Not much of a decision to make, the non-validated equivalent is clearly the more secure in any real-world sense of defense against compromise or attack (assuming all other things equal of course, such as the choice of strong crypto algorithms).  Just pick the current open source equivalent of whatever validated product you would have used (OpenSSL 0.9.8k instead of the FIPS Object Module v1.2, say).  It will have the same (or better if bug fixes have been applied) crypto implementations.  Any vulnerabilities subsequently discovered will be fixed and announced in a responsible time frame.  The software will be more thoroughly reviewed and analyzed.

Update 2013-09-23: Recent events have shown, with a vengeance, that the situation is far more dire than the earlier essay above presumes. One of the random number generators (Dual EC DRBG) in a standard mandated for FIPS 140-2 (SP800-90A) is now known to be defective by design. FIPS 140-2 validation specifically mandates exclusive use of the compromised points.

That point is worth emphasizing: SP800-90A allows implementers to either use a set of compromised points or to generate their own. What almost all commentators have missed is that hidden away in the small print (and subsequently confirmed by specific query) is that FIPS 140-2 requires use of the compromised points. Several official statements, including the NIST recommendation, fail to mention this, leaving the impression that alternative uncompromised points can be generated and used.

There are only two inferences to be drawn regarding NIST CAVP/CMVP complicity: either they (the bureaucracy responsible for regulating the use of cryptography for the U.S. government) were oblivious to the backdoor vulnerability, or they knowingly participated in enforcing its use. Neither possibility is comforting.

I was part of the team that implemented all four SP800-90 DRBGs in the OpenSSL FIPS Object Module 2.0. That implementation was requested and funded by a sponsor (as were other algorithm implementations and 70+ platforms). My colleagues were aware at the time of the dubious reputation of Dual EC DRBG. I was the one who argued for including it in OpenSSL anyway, reasoning that it was an open official standard and OpenSSL is a comprehensive cryptographic library that already implements some known weak algorithms. I thought we were just “checking the box” in implementing all of SP800-90; we didn’t make Dual EC DRBG a default anywhere and I didn’t think anyone would be stupid enough to actually use it in a real-world context (FIPS 140-2 has many elements not relevant in the real world). Well, RSA proved me wrong by implementing[3] it by default in most of their product lines. As with NIST, either incompetence or complicity is indicated.

The original conclusion of this essay is dramatically underscored by the Snowden revelations: if you care about actual security do not use FIPS 140-2 validated cryptography. Or proprietary commercial cryptography either; the restrictions of FIPS 140-2 make it much harder (or impossible) to do cryptography securely, but we now know that some non-validated commercial cryptography has been compromised. I suspect time will show that RSA wasn’t the only compromised vendor. OpenSSL could conceivably have subtle vulnerabilities in the source code (it has accidental bugs for sure), but backdoors are much harder to sneak into open source software. The OpenSSL libraries can be compiled from source rather easily on most Linux/Unix[4] platforms, and copied over the bundled binary libraries supplied by the OS distributor.
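For readers who haven't done it before, compiling the libraries from source is indeed straightforward on most Linux/Unix systems. A minimal sketch, where the version number and install prefix are illustrative choices, not a recommendation:

```shell
# Sketch of building OpenSSL from source on Linux/Unix; the version
# and --prefix here are illustrative (verify the tarball's signature
# against the OpenSSL release keys before building)
tar xzf openssl-1.0.1g.tar.gz
cd openssl-1.0.1g
./config shared --prefix=/usr/local/ssl
make
make test
sudo make install
# then point applications (or the runtime linker path) at the new libraries
```

Whether to overwrite the distributor's bundled libraries or install to a separate prefix and relink is a local judgment call; the latter is less disruptive to packaged software.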

See also http://veridicalsystems.com/blog/immutability-of-fips/

[Updated 2013-11-07 to note use of compromised points is mandatory]

[Updated 2015-03-12 to reference a related blog entry]
_________

1 (footnote added 2013-12-07) Mitigation of the Lucky 13 vulnerability is a telling example. An effective mitigation was developed for OpenSSL proper, but because we are not allowed to make even the most trivial of modifications to the FIPS module that mitigation could not be effected for the “FIPS capable” OpenSSL when FIPS mode is enabled.

2 Technically speaking they only disallowed the use of the PRNG, but since most non-trivial applications need an RNG that amounted to an effective revocation.

3 While the RSA cryptography originates from and is closely related to OpenSSL, their Dual EC DRBG implementation was done prior to and separately from the OpenSSL one.

4 If you’re using Microsoft Windows, cryptography is not your biggest security worry.

]]>
http://veridicalsystems.com/blog/secure-or-compliant-pick-one/feed/ 0
DoD PKI and the Beat of a Different Drummer, Part 2 http://veridicalsystems.com/blog/dod-pki-and-the-beat-of-a-different-drummer-part-2/ http://veridicalsystems.com/blog/dod-pki-and-the-beat-of-a-different-drummer-part-2/#comments Wed, 20 Mar 2013 00:40:30 +0000 http://www.marq3.net/blog/?p=24

After several years of dealing with huge unwieldy CRL files DoD finally stands up an OCSP server, and after months pass it is more or less usable for a while. Then I noticed the OCSP responses were being signed by an expired certificate (for unknown reasons DoD decided to use self-signed responder certificates). Here’s a typical query using a revoked certificate:

$ openssl ocsp -issuer ca.DOD_CA-13.pem -cert xxx.yyy.zzz.mil.REVOKED.crt -url http://ocsp.disa.mil/ -resp_text -VAfile ca.dod_ocsp_ss.pem
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response

Response verify OK
xxx.yyy.zzz.mil.REVOKED.crt: revoked
This Update: May 3 23:00:00 2009 GMT
Next Update: May 10 07:00:00 2009 GMT
Revocation Time: Feb 21 13:53:33 2008 GMT
$

Note the CA certificate used for verification, ca.dod_ocsp_ss.pem. It expired nearly a year ago:

$ openssl x509 -noout -enddate -in ca.dod_ocsp_ss.pem
notAfter=Jun 22 19:26:25 2008 GMT
$ date
Mon May 4 08:45:07 EDT 2009
$
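The expiry comparison above can also be done in a single step; a sketch using the same responder certificate file from the session above:

```shell
# -checkend N asks "will this cert still be valid N seconds from now?"
# For an expired cert it prints "Certificate will expire" and exits 1
openssl x509 -checkend 0 -in ca.dod_ocsp_ss.pem
```

This is handy in monitoring scripts, where the exit status can drive an alert.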

Unfortunately Apache mod_ssl doesn’t care for expired responder certs, so I wrote a patch to add an SSLOCSPResponderNoCertVerify configuration option to suppress the responder certificate validity check.
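With that patch applied, the relevant vhost configuration might look like the following sketch. The directive name comes from the patch described above; SSLOCSPEnable and the surrounding context are assumptions based on stock mod_ssl OCSP support:

```apache
SSLVerifyClient require
SSLOCSPEnable on
# From the local patch: skip the validity check on the responder certificate,
# so responses signed by the expired self-signed responder cert still verify
SSLOCSPResponderNoCertVerify on
```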

DoD PKI and the Beat of a Different Drummer, Part 1 http://veridicalsystems.com/blog/dod-pki-and-the-beat-of-a-different-drummer-part-1/ http://veridicalsystems.com/blog/dod-pki-and-the-beat-of-a-different-drummer-part-1/#comments Wed, 20 Mar 2013 00:24:16 +0000 http://www.marq3.net/blog/?p=13 So, several years after first implementing the use of client and server X.509 certificates, DoD finally stands up an OCSP service. Good thing, because the relevant CRL files total over 200 megabytes, with some of them having a lifetime as brief as 18 hours.

But, they had to do it a little differently. For starters self-signed certs are used for signing the responses. That caused some problems for my DoD client because Apache mod_ssl assumes the OCSP responses will be signed by a certificate in the CA chain. With a little prodding from me, Dr. Stephen Henson of OpenSSL fame came up with a patch to implement a new directive to specify trusted signer certs: https://issues.apache.org/bugzilla/show_bug.cgi?id=46037.

This patch implements the configuration directive

SSLOCSPResponderCertificateFile file

which supplies a set of trusted PEM encoded OCSP responder certificates. It is also available in httpd 2.3 and later, if using OpenSSL 0.9.7 or later. From the directive documentation:

“This supplies a list of trusted OCSP responder certificates to be used during OCSP responder certificate validation. The supplied certificates are implicitly trusted without any further validation. This is typically used where the OCSP responder certificate is self signed or omitted from the OCSP response.”
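A usage sketch for the directive; the file path is illustrative, and the PEM file would hold the DoD self-signed responder certificates:

```apache
SSLVerifyClient require
SSLOCSPEnable on
# Trust these responder certs explicitly, since they are self-signed
# and will never chain to the configured CA certificates
SSLOCSPResponderCertificateFile /etc/httpd/conf/ocsp-responders.pem
```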

NSS Trickery http://veridicalsystems.com/blog/nss-trickery/ http://veridicalsystems.com/blog/nss-trickery/#comments Wed, 20 Mar 2013 00:22:08 +0000 http://www.marq3.net/blog/?p=11 Like other comparable products, Firefox and Thunderbird ship with a wide assortment of pre-installed CA certificates: not only the usual ones from Verisign, Equifax, and the like, but also ones from some obscure entities like “Staat der Nederlanden”, “Camerfirma Chambers of Commerce”, and “TURKTRUST Certificate Services”.

The DoD PKI policy mandates that CA trusted keystores should only contain the CA certs specifically authorized by DISA. This makes sense if you think about it, as a desktop system in the Pentagon shouldn’t be trusting CA certs from foreign CAs.

Fixing the keystore should be easy: we just use the handy-dandy GUI based certificate management tool to remove the unauthorized certs, right? Not so. If you try that you find that, after tediously clicky-clicking your way through and deleting 100-plus certificates, they initially appear to be gone. But, as soon as you restart Firefox (or Thunderbird, etc.) they all reappear. What is happening is that the NSS shared library libnssckbi.so automatically re-adds the bundled CA certs to the disk resident keystore (the cert8.db file).

Now this is downright annoying. Presumably the Mozilla Foundation is being paid for the inclusion of the bundled CA certs and wants to discourage their removal in order to boost the commercial value of that placement, but as with the DoD policy there are legitimate reasons why end users may want to remove bundled certificates.

There appears to be no alternative to complete replacement of the libnssckbi.so library. The bundled certs are defined in the file mozilla/security/nss/lib/ckfw/builtins/certdata.txt in the source tree. The Mozilla-specific build process is annoyingly awkward and differs between Linux/Unix and Windows.

It should be noted that we have essentially the same problem in a different form with Microsoft Windows, as routine Microsoft issued patches tend to reinsert CA certificates. Since we don’t have the option of modifying the software, culling the unwanted CA certs requires constant vigilance.

The Fickleness of FIPS http://veridicalsystems.com/blog/the-fickleness-of-fips/ http://veridicalsystems.com/blog/the-fickleness-of-fips/#comments Wed, 20 Mar 2013 00:19:50 +0000 http://www.marq3.net/blog/?p=9  
(Updated 2015-12-11)
 
All of my clients seeking FIPS 140-2 validations are concerned about schedule. The elapsed time to the final validation award is usually more important than cost. The biggest element of that timeline is the long hiatus between the submission of the test report by the test lab to the CMVP and the time when it is picked out of the inbox for CMVP review.

That time interval can vary dramatically and capriciously, as demonstrated by two recent validations. The test report for #1051 was submitted on 2008-04-28 and the validation award was 2008-11-17, approximately 7 months. The test report for #1111 was submitted on 2008-02-29 but the validation award was not until 2009-04-03, approximately 13 months. Quite a difference, roughly half a year, sufficiently long in the latter case to spoil any commercial value of that validation.

How did the two validated products differ? Here’s the interesting part — both were based on the same source code! Even stranger, the “quick” validation was for source code based delivery and static linking, both well off the beaten path for most validations. The tardy validation was a bog standard binary shared library validation, the whole purpose of which was to quickly obtain a few validated binaries for DoD (the sponsor) while waiting for the source code based validation.

The FIPS validation process is so shrouded in secrecy that I will never know for sure why the one validation took nearly twice as long. The validations were performed by different test labs, but I saw no evidence of negligence or incompetence on the part of the lab handling the tardy validation. The most likely cause was different reviewers at the CMVP. The CMVP review is (in my opinion) a very subjective process, and different reviewers show very distinct preferences in their commentary and requirements for document changes. Interestingly enough, the test lab informed me that the NIST reviewer in this case insisted on remaining anonymous; in the past I’ve always been told who was involved.

So there you have it — a very non-transparent process, anonymous bureaucrats, nearly a 2x difference in validation times for the same software. You pays yer money and you takes yer chances.

[Update 2015-12-11] An even better example of CMVP capriciousness:

The “RE” validation, an “Alternative Scenario 1A” clone of the #1747 validation, was approved November 13 2015 (http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm#2473).

It was submitted along with its identical twin “SE” validation on April 17 2015. The two sets of paperwork differed in only one trivial aspect: “RE” in the module name for one versus “SE” for the other. Same module, same test lab, same paperwork, submitted together at the same time. A more perfect controlled study could not have been devised on purpose.

The “SE” validation was approved on June 25 2015 (#2398), after a little more than two months (69 calendar days, 48 working days).

The “RE” validation was not approved for almost seven months (210 calendar days, 145 working days). That’s three times as long for the exact same submission. This is the most striking example yet of CMVP capriciousness.
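The calendar-day figures are easy to check from the dates above; a quick sketch using GNU date (the working-day counts additionally subtract weekends and U.S. federal holidays, which is more tedious to script):

```shell
# Days from the 2015-04-17 submission to each approval date (GNU date, UTC)
sub=$(date -ud 2015-04-17 +%s)
echo "SE: $(( ($(date -ud 2015-06-25 +%s) - sub) / 86400 )) days"   # 69
echo "RE: $(( ($(date -ud 2015-11-13 +%s) - sub) / 86400 )) days"   # 210
```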

Why the wild disparity? Well, probably because the two identical submissions were farmed out to two different reviewers. The review process is notoriously subjective, and in fact we received “comments” (requirements for changes) for the “RE” validation whereas the “SE” one was approved as-is. As a result the two Security Policy documents are no longer identical. That doesn’t explain the time discrepancy, though, as those “comments” weren’t received until long after “SE” had been approved.

The moral here is that FIPS 140-2 validations are a crapshoot; it’s impossible to make any reliable predictions on how long any validation action will take or how it will be received. If you have really deep pockets you can submit the same validation multiple times to hedge your bets (as done for the #1051 and #1111 validations discussed above), but for most of us it’s an open ended gamble: submit, hope, wait, …
