Speeds and Feeds: Personal Musings of Steve Marquess
http://veridicalsystems.com/blog

Life in a Digital Ghetto
Sat, 28 Mar 2015

…In Which I Bitch and Moan about Living on the Wrong Side of the Digital Divide in an Isolated Pocket of Digital Poverty

 
(Updated 2015-08-03)
 
When I purchased my current residence in 1987, the Internet had not yet exploded into a dominating presence in social discourse and commerce. I used an analog modem for E-mail and such web content as was available at the time. What little television our family watched was over-the-air broadcast content. We didn’t have any cell phones.

Fast forward nearly thirty years and the situation is quite different. Polite society assumes every respectable person has a cell phone and some sort of broadband access. My inability to reliably receive text messages at this address has been a bit of a hassle[1]. Digital TV signals are marginal even with dual high gain antennas 70 feet (21m) up on a rotating antenna tower[2] mast.

My family and I live in a relatively rural location (for being only 45 miles/72km from the “capital of the free world”) that lies in the radio shadow of a nearby mountain[3]. Standing out in the front yard we can sometimes get a one bar 2G cell signal, but inside the house a cell phone is useless[4]. Cable service is not available and probably never will be, as the cable lines would need to be run over a mile (1.6km) to reach only a small handful of houses, and the local cable monopoly has made clear its lack of interest in doing so. Ditto DSL telephone service, which would require upgrading the “SLIC” (Subscriber Line Interface Concentrator) cabinet a little over a mile away[5].

Thirty years ago the absence of cell, cable, DSL, or comparable communication services didn’t matter; analog dial-up modems were sufficient. But, as the Internet grew in size and importance and the typical web site or data download ballooned in size, analog dialup was no longer adequate. I tried ISDN, which was roughly twice as fast (128kbps) as the 56K modem on a good day. That sufficed for several years, but even basic web surfing and data transfers (such as routine Linux updates) became increasingly painful. I would burn routine software updates to a CD or USB drive at a higher bandwidth location (which is to say anywhere else) and bring them home for updating the SOHO (Small Office Home Office) computers.

Next we tried a local WISP (Wireless ISP) which creatively bounced wifi signals through a hodgepodge of nodes to a wired PoP (Point of Presence) many miles away. Nominal bandwidth was good, a couple of Mbps, but even with our access point on the top of a sixty foot (18m) antenna tower, performance and reliability were still marginal. Also, congestion was a recurring issue: even when the signal was strong and clear performance would nosedive in the evenings as other subscribers returned home from work and started surfing porn or watching cat videos or whatever.

Since I work full time in a home office this erratic Internet access was beginning to severely impact my bottom line, so at that point I took the only remaining option available: I signed up for a dedicated T-1 line which provides 1.544 Mbps (182KBps) for $600 a month. That’s one megabyte in about five seconds, or one gigabyte in 100 minutes … all for the bargain price of only $7,200 a year. Ouch. At least that bandwidth is dedicated, up and down, and is relatively reliable (it’s often several months between major outages)[6]. I use QoS routing for the VoIP traffic which allows voice calls even when the bandwidth is completely saturated. I try to extract the maximum utility from that narrow little pipe by loading it for hours or days at a time with bulk data uploads or downloads.
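For the curious, the QoS arrangement is conceptually simple. Here is a minimal Linux tc sketch of the idea; the interface name, rates, and DSCP match are illustrative assumptions, not my actual router configuration:

# Sketch only: give VoIP (DSCP EF) a guaranteed slice of a ~1.5 Mbps uplink,
# let bulk traffic have the rest. "eth0" and the rates are assumptions.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 1400kbit ceil 1400kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 1400kbit prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1144kbit ceil 1400kbit prio 1
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
# Steer DSCP EF (TOS byte 0xb8) packets into the priority class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip tos 0xb8 0xfc flowid 1:10

With something like this in place the bulk transfers can hammer the pipe continuously without making the voice calls unusable.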

Needless to say streaming media isn’t an option, and I have to ask clients to FedEx really large files, such as virtual machine images.

Unfortunately, I’m starting to see the same bandwidth bloat issues now that drove me from dialup modem to ISDN to guerrilla wifi to the T-1 line. Routine data downloads are getting larger and slower. Websites I need to view for my professional activities, never mind recreation, are getting ever more bloated and slower. I have no bandwidth upgrade options, at any price[7].

Periodically I check with WISPs that specialize in commercial broadband services. A technician for one of them climbed my tower just a few months ago and spent a long time with binoculars looking for any water towers or other structures they could use for point-to-point service; no joy[8].

I’m of retirement age (sixty) and fortunate enough to have enough savings to be able to retire, in theory at least. This bandwidth problem may well drive me to retire sooner rather than later, as neither moving nor renting an office are viable options. Selling this property and moving to another just to extend my working career by a few years makes no sense considering the size of the current property that my family and I have spent years improving (a largish house and multiple outbuildings, including heavy machinery[9]). Renting a commercial business office doesn’t work for a number of reasons: a) my workday can be sunup to sundown on weekdays and weekends, which is tolerable when a kitchen and amenities are only a short stroll away but which would be far less tolerable in a tiny isolated office; b) I need more than just a desk as I often work with client equipment using two 19″ freestanding racks, two workbenches, and several hundred square feet of storage space for all the boxes and gear.

So, we’ll see. As I slowly suffocate on this 1.544Mbps bandwidth the decision may be made for me. If and when I do retire from revenue-generating activities I’ll no longer be able to justify the current $7,200 annual cost of the leased line, and will drop back to the local WISP service. That will do for casual E-mail but will rather abruptly cut me off from dabbling in part-time work: the T-1 service requires a three-year contractual commitment[10], so I can’t buy short-term bandwidth increases.

I’m not feeling sorry for myself because I really like the current property and its location. The lack of broadband options will surely impact the resale price, but I’m in good health and plan to remain here for many years to come so that is a future worry. I’m a redneck at heart and there aren’t a lot of places where I could park a forklift in the front yard and be surrounded by greenery and wildlife. I do find it a bit ironic though that I have worse communications options here, a few miles from an interstate highway and a mere one hour drive from the White House and Capitol, than are available in many so-called third-world countries.

(Update 2015-08-03) Well, after much research and pondering I have decided to take the plunge and build a 140′ (43m) tower for a 5GHz line of sight link to a local WISP. I’m still refining the cost estimate but I expect it to total about $30,000; a properly engineered structure that size isn’t cheap (the “geotechnical” report alone is costing me over 3 grand). To help control costs I’ll be doing the foundation excavation and prep myself. I expect the construction process to take at least several months and since the current T-1 lease doesn’t come up for renewal until the fall of 2016 I don’t have an incentive to rush things. I hope to have the foundation ready this fall but may defer the tower assembly and erection until next spring.

With this tower I’ll get more than ten times the bandwidth for less than a sixth the monthly cost of the T-1 line, or an annual savings of $6,000. So in five years I will have broken even. That’s a big capital investment to escape from the digital ghetto, but I think my odds of finding a better option in those five years are slim.

The WISP I plan to use wants to leverage the tower to host a “micro-POP” for my immediate neighborhood, so some other good may come of it. I’ll also put a dual-band cellular repeater and DTV antennas on the tower, so we should have cell phone service and some DTV reception as a bonus. The tower will also host ham radio antennas (which in fact will be the biggest presence on the tower); I’ve already upgraded my license to Extra class in anticipation (KB3IHF).

_________

[1] I’ve kludged together an SMS emulation of sorts using a Google Gtalk number. It works for receiving SMS messages from some sources (which I receive as E-mails), but not for others. So for instance, we can’t get the fraud alerts from our bank telling us that our credit card has been suspended due to suspected fraudulent activity. We find that out the hard way, when trying to make a purchase.

I also know about repeaters (“microcells”) and have done a fair amount of research on the options. For less than a thousand dollars I could put a cellular repeater on the antenna tower and possibly get a decent 3G signal. But, such repeaters aren’t “supported” by either of the main cellular providers, meaning any issues would receive even less sympathy than the usual level of apathy those providers are famous for. In addition, even with a perfect signal the cost at my current level of data traffic would be prohibitive. I will try a microcell when I eventually drop the T-1 line and thus lose the current SOHO VoIP services, but usage will be largely limited to voice (and SMS!) only.

[2] A Heights aluminum tower with a Ham IV rotator, a popular combination for ham radio use (my call sign is KB3IHF). Only the HF bands (20m and up) are feasible given the high surrounding terrain.

[3] Sugarloaf Mountain, which is actually just a large hill or monadnock.

[4] Ironically we have some neighbors only about 1000 yards (900m) away, at roughly the same elevation, who get a decent 3G signal. The cost of such service for the data volumes I currently manage to stuff through a narrowband pipe would be ruinous, but at least those neighbors have that option.

[5] Our POTS (Plain Old Telephone Service) lines, aka “landlines”, have been rather unreliable, so we’ve had multiple opportunities over the years to talk with the telco linemen. They note that the local telco (Verizon) has no intention of doing any enhancement of the UTP (copper) infrastructure and associated regulated services, preferring instead to concentrate on unregulated wireless and FIOS services (neither of which are available to us, though). The time-to-repair for POTS problems is as long as two weeks; at one point I was paying for four POTS lines so that at least one would hopefully be working at any point in time. At present I’ve put all my eggs in the T-1 basket; that still fails periodically (typically squirrels chewing the overhead lines), but with a commercial SLA repairs have a higher priority and I can file for and get rebates for the extended outages.

The SLIC upgrade to support DSL would supposedly cost something like $60,000. I’ve even inquired of the local telco if they would consider allowing me to pay for that upgrade; the idea was rejected out of hand. I’m not sure I would have been willing to absorb that entire cost myself (if no neighbors wanted to chip in), but when you consider what I’m paying for the narrowband T-1 line it could be cost effective over the long haul.

[6] T-1 outages are usually caused by line damage (squirrels, storms, treefalls) but one extended outage resulted from a massive DDoS attack targeted at journalist Brian Krebs with whom I share upstream provider Level 3. I’m lucky I haven’t earned that level of hostile attention, as it wouldn’t take much of a DoS to shut me down.

[7] I’ve even considered getting a second T-1 line, at the staggering cost of $1,200 monthly for 364KBps, and bonding that with the existing T-1 for double the bandwidth, but the 25 pair line leading to my house can accommodate only one T-1 signal.

Satellite service isn’t an option due to severe upload limitations; I run multiple servers (VoIP, web, E-mail) and even though I have colo servers located at sites with good bandwidth I still need to upload data to them.

[8] Erecting a taller tower is an option, but local zoning rules would limit the maximum height to about 140′ (43m) and it would easily cost $25,000 or more. I would first hire a crane service to come out with a manlift basket to put a technician high enough to see if that additional 40′ (12m) or so of height would make a difference. A $22-25K tower investment would be paid back in only a few years given the exorbitant cost of the T-1 line, so I will consider installing a higher tower if I’m still working at the point where the T-1 service contract comes up for renewal.

[9] I have a fairly well equipped machine shop (manual machines only, no pesky computers); see Metal Illness.

[10] The cost for month-by-month service, after the original multi-year commitment has expired, is obscene. The current three-year commitment expires in December of 2016, at which point I’ll have to make a ~$22,000 decision about another renewal.

Of Money, Responsibility, and Pride
Sat, 12 Apr 2014

Fate has made me the “money guy” for OpenSSL so I’m going to talk about that for a bit.

As has been well reported in the news of late, the OpenSSL Software Foundation (OSF) is a legal entity created to hustle money in support of OpenSSL. By “hustle” I mean exactly that: raising revenue by any and all means[1]. OSF typically receives about US$2,000 a year in outright donations; it also sells commercial software support contracts[2] and does both hourly-rate and fixed-price “work-for-hire” consulting, as shown on the OSF web site. The media have noted that in the five years since it was created OSF has never taken in over $1 million in gross revenues annually.

Thanks to that publicity there has been an outpouring of grassroots support from the OpenSSL user community, roughly two hundred donations this past week[3] along with many messages of support and encouragement[4]. Most were for $5 or $10 and, judging from the E-mail addresses and names, were from all around the world. I haven’t finished entering all of them to get an exact total, but all those donations together come to about US$9,000. Even if those donations continue to arrive at the same rate indefinitely (they won’t), and even though every penny of those funds goes directly to OpenSSL team members, it is nowhere near enough to properly sustain the manpower levels needed to support such a complex and critical software product. While OpenSSL does “belong to the people” it is neither realistic nor appropriate to expect that a few hundred, or even a few thousand, individuals provide all the financial support. The ones who should be contributing real resources are the commercial companies[5] and governments[6] who use OpenSSL extensively and take it for granted.

Lacking any other significant source of revenue, we get most of ours the hard way: we earn it via commercial “work-for-hire” contracts[7]. The customer wants something related to OpenSSL, realizes that the people who wrote it are highly qualified to do it, and hires one or more of us to make it happen. For the OpenSSL team members who have no other employment or day job, such contract work is their only non-trivial source of income.

Which gets me to the main point I want to make in this essay, about responsibility and pride. You can see right on the OSF web site that our consulting rate is US$250 an hour. Two hundred fifty dollars an hour; not high for a lawyer or doctor or even many professional tech jobs, but a living wage for sure. “These guys must be sitting pretty flush, eh?” Uh, no. “Ah, overpriced then, no takers.” Wrong again; I could sell more hours at that rate if only there were more hours to sell. At the moment OSF has about a hundred grand in open contracts — these are executed contracts with purchase orders, not just contracts in discussion or negotiation — that aren’t being worked because no one in this very small “workforce” of qualified OpenSSL developers is available to work on them. Even though they could make good money moonlighting they tend to their other responsibilities first: day job, family, OpenSSL itself. I’ve had prospective clients call me and beg for Stephen Henson to look at their problem. I have standing instructions from one client to please let them know if Andy Polyakov ever has any free time. I’ve had clients ask “would more money help”? Some queries I just turn down right away with “sorry, we’re unable to help”.

Even when we can staff a commercial contract, it can’t be rushed or skimped; these guys are just too used to taking pride in their work no matter what it is. Having worked for decades in industry and government I know that “good enough” and “quick and dirty” are the norm, so for some of the contract work I’ve tried encouraging a pragmatic “get ‘er done” attitude. They won’t do it; nothing less than the very best work they are capable of will do.

The team member without conventional full time outside employment is Dr. Stephen Henson. He’s a pretty private person[8] and he’ll probably be unhappy with me for what I’m writing here (sorry Steve). The creation of OSF was largely inspired by a revelation that was shocking to me at the time. I had been working with some of the OpenSSL team for several years when I learned how much income Steve was receiving (then as now he had no conventional employment). I was stunned to realize that my income, as one consultant of hundreds in one program of thousands in the U.S. military/industrial complex, was over five times his. Five. Times. 5X! This for a world class talent carrying an enormous burden, and when it comes to coding I’m not qualified to carry his keyboard. I had naively assumed that someone with his talent and experience would have a commensurate income, or at the very least be outearning run-of-the-mill hack programmers and consultants like me. Now that OSF is well established and has a growing roster of clients we have gone a long ways towards redressing that situation, but he could pull in a lot more commercial revenue if he didn’t steadfastly refuse to neglect OpenSSL.

These guys don’t work on OpenSSL for money. They don’t do it for fame (who outside of geek circles ever heard of them or OpenSSL until “heartbleed” hit the news?). They do it out of pride in craftsmanship[9] and the responsibility for something they believe in.

I stand in awe of their talent and dedication, that of Stephen Henson in particular. It takes nerves of steel to work for many years on hundreds of thousands of lines of very complex code, with every line of code you touch visible to the world, knowing that code is used by banks, firewalls, weapons systems, web sites, smart phones, industry, government, everywhere. Knowing that you’ll be ignored and unappreciated until something goes wrong. The combination of the personality to handle that kind of pressure with the relevant technical skills and experience to effectively work on such software is a rare commodity, and those who have it are likely to already be a valued, well-rewarded, and jealously guarded resource of some company or worthy cause. For those reasons OpenSSL will always be undermanned, but the present situation can and should be improved.

There should be at least a half dozen full time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work. If you’re a corporate or government decision maker in a position to do something about it, give it some thought. Please. I’m getting old and weary and I’d like to retire someday.

_________

1 Any legal and moral means. Geeze, give me a break…

2 I said legal and moral; shameless still goes so here’s a plug for one of the most effective ways your corporation can not only support OpenSSL but also receive something of tangible value in return: a software support contract. We have a formal contract with the fine print that lawyers love, and your accounts payable people won’t be all flummoxed at the bizarre notion of giving money away as they’re used to paying for expensive commercial support contracts for proprietary software. Someday you may even encounter an issue with your mission critical use of OpenSSL that could benefit from direct and prompt attention from the people who wrote that code.

3 The accounting software into which each and every donation is manually entered doesn’t have an easy way of counting the number of transactions of a particular type.

4 One message in particular cheered me (and hopefully my colleagues) and I can’t resist quoting it here. It begins [edited for NSFW filters]: “Thank you … For doing something really f**king hard and making it free.”

5 I’m looking at you, Fortune 1000 companies. The ones who include OpenSSL in your firewall/appliance/cloud/financial/security products that you sell for profit, and/or who use it to secure your internal infrastructure and communications. The ones who don’t have to fund an in-house team of programmers to wrangle crypto code, and who then nag us for free consulting services when you can’t figure out how to use it. The ones who have never lifted a finger to contribute to the open source community that gave you this gift. You know who you are.

6 Multiple agencies of the U.S. Department of Defense (DoD) have provided substantial financial support over a decade for the OpenSSL FIPS Object Module series of open source based FIPS 140-2 validations, most recently DARPA. But, those validations essentially just distort and contort existing OpenSSL code to satisfy some peculiar and arbitrary requirements and do nothing to improve the overall quality of OpenSSL itself. Having consulted in that environment I know OpenSSL is very widely used throughout DoD, both directly and as repackaged by commercial vendors. Given the bazillions of dollars in DoD funding you’d think an investment in OpenSSL would be a no-brainer.

7 The commercial contracting work falls into four general categories:

  • Annual software support contracts, mentioned above. Realistically speaking we’re usually going to address the kind of problems reported under these contracts anyway (though perhaps not as quickly), so these provide the most benefit overall.
  • Adding/extending specific features of general interest, e.g. TLS 1.2, hardware specific optimizations. This kind of work is a win-win for everyone as the entire OpenSSL community typically benefits along with the sponsor of the work.
  • FIPS 140-2 validation related work. This is of benefit to a much smaller segment of the user community, and has significant outsourced costs. It also arguably has a negative impact on the OpenSSL code base and diverts scarce manpower from improving OpenSSL proper.
  • Consulting on issues unlikely to be of general interest, such as porting to specialized proprietary environments or assisting with customer modifications to OpenSSL.

With very few notable exceptions (Qualys, PSW Group) commercial contracts are tied to specific deliverables and do not fund work on fundamental maintenance and development activities like release management, code review and refactoring, performance and security, etc.

8 He really is the private sort, even (perhaps especially) when it comes to maudlin sentiments as expressed here. He also has to deal with a large volume of technical correspondence. So please don’t contact him directly without a really good reason. I will be happy to collate and forward on a reasonably timely basis a digest of comments sent c/o marquess@opensslfoundation.com.

9 “Hey wait a minute — didn’t those bozos just make a dumb sloppy mistake and break the internet?” That’s really a topic for another essay, but all non-trivial software has bugs (the Apple “goto fail” and Debian PRNG bug come to mind). Given the widespread use of OpenSSL over many years it still has an excellent track record. The question that has been asked repeatedly and not often answered is why did this bug take so long to find? Well consider that:

  • The code was written by someone with a proven track record who is a co-author of the heartbeat specification (RFC6520). It was reviewed by the OpenSSL team and no one spotted a problem.
  • The code was visible all along to the entire OpenSSL community and no one saw it.
  • OpenSSL is used by many multinational companies and major government agencies with huge resources who didn’t spot it (or at least did not report it, same difference).
  • Many have called this “the worst security bug ever”, which is debatable but it is a very serious vulnerability. There are many security researchers in the world who have found problems in OpenSSL and reviewed the code with a fine tooth comb, as shown by all the academic papers which have been written over the years and the security advisories relating to them. Finding this bug would have been a feather in the cap of any one of those security researchers.
  • Two years passed before Google with its impressive technical resources and talent (and shortly thereafter Codenomicon) found this issue.

So the mystery is not that a few overworked volunteers missed this bug; the mystery is why it hasn’t happened more often.

The Immutability of FIPS
Fri, 28 Mar 2014

In addition to the problems with Dual EC DRBG that have now been well documented[1], it is apparent to many of us in the clear bright light of the Snowden revelations that quite a few things that were previously dismissed as mere ineptitude or accident may in fact be aspects of a carefully planned and executed “advanced persistent threat” (APT)[2]. A number of aspects of TLS like extended random come to mind, for instance. Also the recent silent omission of the RSA 4096 modulus size from FIPS 140-2 CAVP algorithm testing[3].

But, I think the biggest aspect of this entire APT thing is hiding in plain sight. I’m referring to the very existence of the FIPS 140-2 validation program. Matt Green once quipped that “FIPS is the answer to the question ‘how can we force all cryptographic software to be approved by a government committee?’” and that about sums it up.

A common feature of these various engineered exploits we’re discovering is that they are relatively fragile. The positioning of Dual EC, for instance, must have been very tedious and expensive in time and money, and not just the $10M payment to RSA, which was merely the end game in a much longer process of discovering and developing the backdoored algorithm and guiding the formation of the technical standards and policies to encourage its use. In the “real” world of software development code is constantly tweaked, improved, refined, extended. It would suck to spend years and millions carefully maneuvering a subtle vulnerability into mainstream products (or to discover and exploit a naturally occurring vulnerability) only to have it suddenly vanish with a routine minor software upgrade.

The single most distinguishing (and IMHO deplorable) feature of FIPS 140-2 validation is the almost total prohibition of changes to validated modules. I call it the “ready, fire, aim” approach to software development: first there is a mad scramble to write your code and push it through the formal testing (which as we well know is shallow in terms of real-world implementation issues[4]), as time is always a pressing concern when you have to wait 6, 9, or even 13(!) months for government action on the submission. Even absent rigged and constantly shifting standards that is a recipe for bugs. Then, once submitted you can’t change it[5] even as the inevitable flaws are discovered. In the OpenSSL FIPS module for instance there are a number of vulnerabilities such as the notorious “Lucky 13” and (recently) CVE-2014-0076 that we are not permitted to mitigate. That’s why I’ve long been on record as saying that “a validated module is necessarily less secure than its unvalidated equivalent”, e.g. the OpenSSL FIPS module versus stock OpenSSL.

That, I think, perhaps even more than rigged standards like Dual EC DRBG, is the real impact of the cryptographic module validation program. It severely inhibits the naturally occurring process of evolutionary improvement that would otherwise limit the utility of consciously exploited vulnerabilities.

The presence of Dual EC DRBG in the OpenSSL FIPS Object Module is a contemporary case in point. Even though it is not enabled by default, and even though an inadvertent bug means that it can’t even be used without a minor code change or other workarounds, the mere presence of that executable code still represents a vulnerability of sorts from the APT perspective. Imagine if you will that you were an APT[2] agent responsible for maintaining the capability of accessing communications or data secured through Dual EC DRBG based cryptography[6]. Your ideal situation is Dual EC DRBG used silently and automatically, as was the case with RSA BSAFE until recently. That particular channel is now closing[7], but second best is having the Dual EC DRBG code already present in a latent form where it can be enabled with the lightest of touches. As an APT agent you already have access to many target systems via multiple means such as “QUANTUM INTERCEPT” style remote compromises and access to products at multiple points in the supply chain. You don’t want to install ransomware or steal credit card numbers, you want unobtrusive and persistent visibility into all electronic communications. You want to leave as little trace of that as possible, and the latent Dual EC DRBG implementation in the OpenSSL FIPS module aids discreet compromise. By only overwriting a few words of object code you can silently enable use of Dual EC[8], whether FIPS mode is actually enabled or not[9]. Do it in live memory and you have an essentially undetectable hack. In contrast introducing the multiple kilobytes of object code that implements Dual EC would require a much heavier touch.

So, on a general software hygiene basis, and particularly if you want to frustrate that level of APT compromise, you don’t want the Dual EC object code present at all. That is why OSF is attempting to remove the Dual EC DRBG implementation entirely from the OpenSSL FIPS Object Module 2.0. That pending revision will be 2.0.6 and the requisite formal paperwork (“Maintenance Letter”) was submitted to the CMVP on January 20, 2014. It’s typical to wait two to three months for review of such submissions and I hope to be updating this post soon to note a successful outcome. [update 2014-07-24]: This “change letter” update was finally approved on 2014-06-27, more than six months after submission. Unfortunately, with approval uncertain we had to proceed in the interim with testing of new platforms on the original code base that still included Dual EC DRBG, and that change letter for revision 2.0.7 was approved on 2014-07-03. So Dual EC DRBG was gone and then back in the blink of an eye. We will attempt to remove it again for the next upcoming revision, 2.0.8.

[updated 2014-03-29]: I should clarify the distinction between the two different hacks discussed here: enabling Dual EC DRBG and bypassing the POST integrity test. A hack in live memory would most likely take the form of tweaking the run-time variables that determine the DRBG; the POST could be ignored if it had already been performed, else the hack could just preset the global static variables that indicate the successful completion of a POST. A hack on the executable image on disk, i.e. libcrypto, could involve bypassing the POST and/or integrity test as suggested in footnote 9.

[updated 2016-01-29]: Add CVE-2016-0701 to the list of vulnerabilities we’re forbidden to address in the FIPS module. Fortunately as a practical matter this vulnerability will only be an issue for the most obscure use cases; i.e. direct use of libcrypto and reuse of keys and use of affected DH parameters and FIPS mode enabled.

_________

1 On the Practical Exploitability of Dual EC in TLS Implementations. This study examines actual Dual EC based TLS implementations, showing the ease of exploitation by anyone possessing the “up-my-sleeve” secret numbers. It does not address exploitation of other types of Dual EC based cryptography.

2 I’m trying to be neutral in the use of this term. There are two separate issues here, one being “is it right/appropriate/moral/prudent that <insert your nation-state APT agent of choice here> spy on <insert your target of choice here>?”. The other separate issue, assuming your answer to the first is “yes”, becomes “what are the implications of massive subversion of widely used technical standards and infrastructure?”. This discussion addresses the second issue and I attempt to avoid the first.

3 This is an odd one, not documented anywhere that I’m aware of (e.g., SP800-57 table 2 doesn’t exclude RSA key sizes above 3072). We noticed when researching the new RSA algorithm test vectors for the new post-2013 SP800-131A “transition” requirements that the 4096 modulus size had disappeared from the set of possible sizes (along with the smallest sizes which was expected). We inquired about this through a couple of test labs and the most coherent response we received was that 4096 was eliminated as “not practical”. That isn’t a very credible response on two counts: 1) OpenSSL has implemented 4096 and larger modulus sizes for a long time, and 2) the FIPS 140-2 validation testing process is rather notoriously unconcerned with “practicality”.

4 I’m referring to the Level 1 FIPS 140-2 validations which by design completely ignore issues like performance, buffer overruns, side-channel and other vulnerabilities, etc. Level 2 and higher do pay more attention to some security relevant issues, though still having the immutability problem.

5 Defenders of the status quo will correctly note that there is indeed a process for modifying already validated modules, and even a “fast track” for addressing urgent situations like security vulnerabilities. That process is even moderately feasible for some validations, the small ones encompassing only a few platforms (“Operational Environments”). For a larger validation, like #1747 with eighty platforms, the mandated retesting on each and every such platform, generally required even when study of the source code would clearly show no platform specific dependencies, isn’t even remotely feasible in either time or money. Anyone have roughly a million dollars to spare, and be willing to wait a couple of years for results?

6 Note this is much more than just TLS. Any RSA key pair generated using Dual EC is suspect, for instance encryption keys used to protect storage arrays (and obviously the data protected by those keys including unmounted disks), or hardware tokens where the seed record was generated with a toolkit using Dual EC (e.g. BSAFE).

7 Though I suspect it is closing very, very slowly. The presence or use of a cryptographic library often is not at all apparent to the end users of products that contain or reference it.

8 For proprietary closed source software this enabling can be done at any point in the product distribution process from initial vendor generation of executable code to final deployment on individual end systems. For open source software compiled by the end user, or for uncorrupted binary software distributed via a robust cryptographically secure means, this enabling must be effected against the deployed executable code. Such enabling can still be done relatively easily because the mechanism for run-time enabling of Dual EC is already present.

9 The integrity test mandated by FIPS 140-2 is worthless in preventing such a compromise (I’d even argue it is worthless period). The integrity test consists of an elaborate determination of a digest over the object code (executable code and read-only data) of the cryptographic module for comparison with a known good digest also embedded in the module. But you don’t even have to modify that embedded digest value, as on any machine architecture and for any compiler there will always be a conditional branch instruction at the point the fail/succeed determination is made. Depending on the specific architecture and compiler you just overwrite that conditional branch with a NOOP or an unconditional branch, a one word (or even one bit) mod.

References for further reading:

http://blog.cryptographyengineering.com/2012/01/openssl-and-nss-are-fips-140-certified.html

http://blog.cryptographyengineering.com/2013/12/a-few-more-notes-on-nsa-random-number.html

http://nakedsecurity.sophos.com/2014/03/28/nist-to-review-standard-for-cryptographic-development-do-we-really-care/

http://www.mail-archive.com/cryptography%40metzdowd.com/msg06990.html

https://blog.bit9.com/2012/04/23/fips-compliance-may-actually-make-openssl-less-secure/

https://blogs.oracle.com/darren/entry/fips_140_2_actively_harmful

https://security.stackexchange.com/questions/34791/openssl-vs-fips-enabled-openssl

http://seclists.org/basics/2007/Jan/9

https://www.schneier.com/blog/archives/2010/01/fips_140-2_leve.html

http://comments.gmane.org/gmane.comp.encryption.general/19852

https://pomcor.com/2015/11/12/cryptographic-module-standards-at-a-crossroads-after-snowdens-revelations/

https://www.ida.org/idamedia/Corporate/Files/Publications/IDA_Documents/ITSD/2014/D-4991.ashx

http://arstechnica.com/security/2013/09/fatal-crypto-flaw-in-some-government-certified-smartcards-makes-forgery-a-snap/2/

Photovoltaic System
Wed, 22 May 2013

In the fall of 2012 we completed a major photovoltaic system installation. In addition to giving us tree-hugging bragging rights and potentially being a reasonable financial investment, this system also provides backup power to keep the business computing infrastructure running through the frequent power outages in this area. As of May 2013 (six months of operation) this system has generated 8.5 megawatt-hours and seamlessly overridden multiple utility outages that would have meant hours of lost work each time.

Pain
Tue, 19 Mar 2013

I’ve just learned that a brother-in-law has been afflicted with his first kidney stone. I am a serial grower of kidney stones, enough times over enough years that I’ve lost count. I’ve had three (or is it four?) lithotripsies, one cystoscopy, and passed a half dozen or so without clinically mediated trauma (note I didn’t use the term “naturally”; there ain’t nothing natural about it).

That legacy has made me an accidental and unwilling connoisseur of pain. Kidney stone pain[1] is a multidimensional experience; each time is a creative variation on a soul-crushingly familiar theme. Some are just a few days of that distinctive, unforgettable, omnipresent abdominal pain that instantly disappears when the offending stone finds its own way to freedom. The epicenter of the pain isn’t necessarily right where you’d think the stone itself is, and when the pain is bad you hurt everywhere, from your toenails to your scalp. My very first stone went undiagnosed for many months, until my urine looked like iced tea, because I didn’t recognize the symptoms and describe the location accurately (now I recognize the sensation, instantly). Some stones find me whimpering and babbling on an emergency room cot, curled in a fetal position, begging for another injection of morphine. But every stone announces its presence the same way, leaving me to wonder what journey of discovery this one will lead me to. On the worst of those journeys the world contracts to a tiny little point that contains only you and the pain, and everything else fades to insignificance.

It’s not all bad, there are some compensating advantages.  Once freed of the stone and the pain, by whatever means, your world suddenly expands.  The mere absence of pain is an exquisite pleasure. The sky is brighter and bluer, the air is sweeter, your loved ones are lovelier, life is good!  Little discomforts like bee stings and broken bones no longer bother you as much.  You can also save a few bucks on dental work by skipping the anesthesia — I’ve had four crowns and some fillings installed without any.

As a spiritual journey such acquaintance with pain is both uplifting and humbling. On the one hand you can look down on the lesser mortals around you as they snivel and whine about their petty little aches and discomforts.  On the other, any fantasies you may secretly have nurtured about the inner steel you would show in the torture chamber, steadfastly refusing to betray your dignity or your comrades, are gone forever.  Before my cystoscopy, if the docs had come to me and said “We’re sorry, but to stop the pain we’ll have to castrate you and tattoo an obscenity on your forehead” I would have replied “fine, fine, whatever, just get on with it already”.

The term “painkiller” is a misnomer because those drugs don’t kill the pain … and I mean the most effective ones, the “Schedule II” ones that are tightly controlled and that physicians are reluctant to prescribe.  They just take the edge off, even intravenous morphine (been there, done that). What’s worse, thanks to the insanely stupid War on Drugs, you’re afraid to actually use what drugs you do have.  Kidney stones (mine anyway) usually do their worst in the wee hours on a weekend, and you never know how bad this one is going to be, and the memories of the times you didn’t have the painkillers and would have taken them are so very clear and vivid.  So, you conserve your stash as one of your most precious possessions (note to blog-reading home-invading drug fiends: you aren’t going to find my stash easily).  With each new stone I fill the prescription and then add it to the stash.  I get a surprising amount of comfort at the low point of an episode just by caressing those little bottles, like a miser with his gold or Gollum with his ring of power.  It’s there if I really, really need it … my preciousssss…

Speaking of wretched social policy … we euthanized a pet a few weeks ago.  At sixteen years of age this cat was beyond effective medical intervention, at any price, and beginning to suffer.  We paid (dearly) for the vet to come to the house because we wanted the animal to spend its last moments in comfortable and familiar surroundings.  It’s a pity that we can treat our pets more humanely than we can our fellow humans.  When my father was dying of cancer, in a hospice arrangement at home, we were permitted to keep on hand one small bottle of painkiller to be used “at the end” to relieve pain.  I’ve forgotten what it was (time has dulled my memory of details, and these are not memories one wants to keep), but it was a liquid applied as drops on his tongue.  It was effective, acting almost instantly, as we could tell from the relaxation of the furrows of pain in his face. Towards the very end his pain level was much higher (obviously so even though he could not speak).  It was the middle of the night and a quick calculation showed that at the new rate of application we would run out at 3 a.m.  I called the designated duty nurse for a refill.  She was unwilling to “disturb the doctor” and told us to wait until morning.

I have never been as angry, as thoroughly, totally, consequences-be-damned angry, as I was at that moment, and hope I never am again.  I knew the name of the doctor and the general neighborhood where he lived, but didn’t have his phone number.  I told the nurse: “The doctor IS going to be disturbed one way or another.  Either you call him now, or I go to where I know he lives and start pounding on doors until I find his house”.  Fortunately for me and my still clean arrest record I did get the prescription, and Dad died in the mid-morning of the coming day.

Such is the gloriously civilized society we have created, where we have the means to at least partially vanquish much pain and suffering, and yet sometimes fail miserably to do so.  Thanks to draconian drug laws we are the prison nation, leading the world in the number of our citizens behind bars (both per capita and in absolute numbers), yet illegal drugs are no less a scourge than they ever were.  Shame on us all.

________

1. I’ve read that kidney stones and childbirth are two uniquely painful experiences. Only women can make that comparison directly. I’m glad I’m a guy.

Secure or Compliant, Pick One
Wed, 20 Mar 2013

I’m on record as stating that FIPS 140-2 validated software is necessarily less secure than its equivalent unvalidated implementation, all other things being equal. There are several factors conspiring to force this unfortunate outcome:

1) Exposure:  the culture of non-disclosure and non-transparency in the CMVP means that only a handful of people ever even have the opportunity to really assess the quality of the software.  Even when that software is derived more or less directly from OpenSSL or other open source software, as is often the case, outsiders generally cannot know what open source software is used in a given validated product.

2) Suspended animation: It can easily take a year to obtain a validation, from the time the test lab is first engaged until the formal validation award. During that time the submitted software is unchanged, whereas the equivalent unvalidated and accessible version has had significant real-world use and review that may well have resulted in the discovery of vulnerabilities. Your freshly validated cryptography is going to deploy into an environment some 12 months further along in the perpetual arms race between good and evil.

3) Superficiality: the actual validation analysis and testing is pretty superficial. In multiple OpenSSL based validations I’ve personally participated in, the CMVP testing has never revealed any flaws in the previously existing algorithm implementations. The one cryptographic flaw that was discovered in those validated products (not by the CMVP, incidentally) was in code that was written specifically for the validation (the PRNG).

4) Head-in-sand incentives: this is the dollars and cents issue that really matters. There are huge disincentives to fixing (or discovering) bugs and vulnerabilities in already validated software. If a vulnerability is found it is for all practical purposes not fixable — been there done that with the (effective) revocation of validation #733[1]. That validation was for an open source derivative of OpenSSL publicly advertised and disclosed as such from the beginning. When we were privately informed of the (very minor) vulnerability we started the process of trying to negotiate approval of the fix with the CMVP. The patch was prepared the same day that we learned of the vulnerability. Several weeks later we were still trying to figure out what hoops the CMVP bureaucracy needed us to jump through. Since the vulnerability was in open source our options for suppressing its existence were limited. When our internally agreed time limit expired, we announced. The CMVP almost immediately revoked[2] the validation. This occurred after at least several commercial vendors were well along with plans to ship products based on the validated module.

I know of a number of other proprietary validations based on the same software.  There were no other revocations that I am aware of.  Those vendors could have rapidly jumped the bureaucratic wickets and rushed updated validated software to the field.  Or they simply could have done nothing, as the CMVP is generally unaware of the pedigree of the software they validate.

Now imagine you’re a vendor wishing to leverage one of the existing open source based validations in your proprietary product, and you know about this “revocation” incident.  Hmmm … what to do?  Use the existing validation and run the risk of being abruptly cut off at the knees by a revocation?  Or shell out for your own validation of the same software but with no known obvious association to the highly visible open source validation?  It should be no surprise that in spite of the additional costs, in both time and money, many vendors are choosing the latter option.  I call those “private label” validations, where the software is only trivially modified or even precisely identical to that of the open source validation, but it is revalidated under another name.  I’ve been hired to conduct a number of such private label validations, enough to notice an interesting pattern — the very similar (or even identical!) software is generally validated in less time and with less hassle than the same software identified as open source.  Those multiple parallel validations of very similar code have also been an unintended controlled experiment that has demonstrated that the validation process is highly subjective.

We originally intended the OpenSSL FIPS Object Module validations to be directly utilized by software vendors.  Some do, but the biggest and unintended benefit turns out to be the ready-made example for private label validations.  Take the code and validation documentation, change the name from OpenSSL to <your_catchy_product_name_here>, submit it as a proprietary validation comfortable in the knowledge that any connection to OpenSSL will remain obscured in the shadows.  And if any vulnerabilities are disclosed in the open source world, you have a spectrum of options from the completely irresponsible all the way through to actually correcting the vulnerability, an action you can take without any time pressure.

Now imagine you’re an end user who has the option of using FIPS validated software or not (i.e., you’re not in an environment where FIPS validation is mandated).  Not much of a decision to make, the non-validated equivalent is clearly the more secure in any real-world sense of defense against compromise or attack (assuming all other things equal of course, such as the choice of strong crypto algorithms).  Just pick the current open source equivalent of whatever validated product you would have used (OpenSSL 0.9.8k instead of the FIPS Object Module v1.2, say).  It will have the same (or better if bug fixes have been applied) crypto implementations.  Any vulnerabilities subsequently discovered will be fixed and announced in a responsible time frame.  The software will be more thoroughly reviewed and analyzed.

Update 2013-09-23: Recent events have shown, with a vengeance, that the situation is far more dire than the earlier essay above presumes. One of the random number generators (Dual EC DRBG) in a standard mandated for FIPS 140-2 (SP800-90A) is now known to be defective by design, and FIPS 140-2 validation specifically mandates exclusive use of the compromised points.

That point is worth emphasizing: SP800-90A allows implementers to either use a set of compromised points or to generate their own. What almost all commentators have missed is that hidden away in the small print (and subsequently confirmed by specific query) is that FIPS 140-2 requires use of the compromised points. Several official statements including the NIST recommendation fail to mention this leaving the impression that alternative uncompromised points can be generated and used.

There are only two inferences to be drawn regarding NIST CAVP/CMVP complicity: either they (the bureaucracy responsible for regulating the use of cryptography for the U.S. government) were oblivious to the backdoor vulnerability, or they knowingly participated in enforcing its use. Neither possibility is comforting.

I was part of the team that implemented all four SP800-90 DRBGs in the OpenSSL FIPS Object Module 2.0. That implementation was requested and funded by a sponsor (as were other algorithm implementations and 70+ platforms). My colleagues were aware at the time of the dubious reputation of Dual EC DRBG. I was the one who argued for including it in OpenSSL anyway, reasoning that it was an open official standard and OpenSSL is a comprehensive cryptographic library that already implements some known weak algorithms. I thought we were just “checking the box” in implementing all of SP800-90; we didn’t make Dual EC DRBG a default anywhere and I didn’t think anyone would be stupid enough to actually use it in a real-world context (FIPS 140-2 has many elements not relevant in the real world). Well RSA proved me wrong by implementing[3] it by default in most of their product lines. As with NIST either incompetence or complicity is indicated.

The original conclusion of this essay is dramatically underscored by the Snowden revelations: if you care about actual security do not use FIPS 140-2 validated cryptography. Or proprietary commercial cryptography either; the restrictions of FIPS 140-2 make it much harder (or impossible) to do cryptography securely, but we now know that some non-validated commercial cryptography has been compromised. I suspect time will show that RSA wasn’t the only compromised vendor. OpenSSL could conceivably have subtle vulnerabilities in the source code (it has accidental bugs for sure), but backdoors are much harder to sneak into open source software. The OpenSSL libraries can be compiled from source rather easily on most Linux/Unix[4] platforms, and copied over the bundled binary libraries supplied by the OS distributor.
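On most of those platforms the build is about this simple; a sketch only (the version, install prefix, and use of LD_LIBRARY_PATH instead of overwriting the distributor’s libraries are illustrative assumptions):

# Build and install OpenSSL from source (version and prefix are examples)
tar xzf openssl-1.0.1g.tar.gz
cd openssl-1.0.1g
./config shared --prefix=/usr/local/ssl
make && make test
sudo make install
# Point applications at the freshly built shared libraries, e.g.:
export LD_LIBRARY_PATH=/usr/local/ssl/lib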

See also http://veridicalsystems.com/blog/immutability-of-fips/

[Updated 2013-11-07 to note use of compromised points is mandatory]

[Updated 2015-03-12 to reference a related blog entry]
_________

1 (footnote added 2013-12-07) Mitigation of the Lucky 13 vulnerability is a telling example. An effective mitigation was developed for OpenSSL proper, but because we are not allowed to make even the most trivial of modifications to the FIPS module that mitigation could not be effected for the “FIPS capable” OpenSSL when FIPS mode is enabled.

2 Technically speaking they only disallowed the use of the PRNG, but since most non-trivial applications need an RNG, that amounted to an effective revocation.

3 While the RSA cryptography originates from and is closely related to OpenSSL, their Dual EC DRBG implementation was done prior to and separately from the OpenSSL one.

4 If you’re using Microsoft Windows, cryptography is not your biggest security worry.

DoD PKI and the Beat of a Different Drummer, Part 2
Wed, 20 Mar 2013

After several years of dealing with huge unwieldy CRL files, DoD finally stands up an OCSP server, and after months pass it is more or less usable for a while. Then I noticed the OCSP responses were being signed by an expired certificate (for unknown reasons DoD decided to use self-signed responder certificates). Here’s a typical query using a revoked certificate:

$ openssl ocsp -issuer ca.DOD_CA-13.pem -cert xxx.yyy.zzz.mil.REVOKED.crt -url http://ocsp.disa.mil/ -resp_text -VAfile ca.dod_ocsp_ss.pem
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response

Response verify OK
xxx.yyy.zzz.mil.REVOKED.crt: revoked
This Update: May 3 23:00:00 2009 GMT
Next Update: May 10 07:00:00 2009 GMT
Revocation Time: Feb 21 13:53:33 2008 GMT
$
Note the CA certificate used for verification, ca.dod_ocsp_ss.pem. It expired nearly a year ago:
$ openssl x509 -noout -enddate -in ca.dod_ocsp_ss.pem
notAfter=Jun 22 19:26:25 2008 GMT
$ date
Mon May 4 08:45:07 EDT 2009
$

Unfortunately, Apache mod_ssl doesn’t care for expired responder certs, so I wrote a patch adding an SSLOCSPResponderNoCertVerify configuration option to suppress the responder certificate validity check.
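With that patch applied, the relevant bit of configuration would look roughly like this (the on/off syntax is illustrative; the directive exists only in my patch, not in stock mod_ssl):

# suppress validity checking of the (expired, self-signed) responder certificate
SSLOCSPEnable on
SSLOCSPResponderNoCertVerify on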

DoD PKI and the Beat of a Different Drummer, Part 1 http://veridicalsystems.com/blog/dod-pki-and-the-beat-of-a-different-drummer-part-1/ http://veridicalsystems.com/blog/dod-pki-and-the-beat-of-a-different-drummer-part-1/#comments Wed, 20 Mar 2013 00:24:16 +0000 http://www.marq3.net/blog/?p=13 So, several years after first implementing the use of client and server X.509 certificates, DoD finally stands up an OCSP service. Good thing, because the relevant CRL files total over 200 megabytes, with some of them having a lifetime as brief as 18 hours.
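For a sense of the scale, a downloaded CRL can be inspected directly with openssl (the file name here is hypothetical):

# how long until this CRL goes stale?
$ openssl crl -inform DER -in DODCA_13.crl -noout -lastupdate -nextupdate
# rough count of revoked entries
$ openssl crl -inform DER -in DODCA_13.crl -noout -text | grep -c 'Serial Number'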

But, they had to do it a little differently. For starters, self-signed certs are used for signing the responses. That caused some problems for my DoD client because Apache mod_ssl assumes the OCSP responses will be signed by a certificate in the CA cert chain. With a little prodding from me, Dr. Stephen Henson of OpenSSL fame came up with a patch implementing a new directive to specify trusted signer certs: https://issues.apache.org/bugzilla/show_bug.cgi?id=46037.

This patch implements the configuration directive

SSLOCSPResponderCertificateFile file
    Set of trusted PEM encoded OCSP responder certificates

Also available in httpd 2.3 and later, if using OpenSSL 0.9.7 or later.

“This supplies a list of trusted OCSP responder certificates to be used during OCSP responder certificate validation. The supplied certificates are implicitly trusted without any further validation. This is typically used where the OCSP responder certificate is self signed or omitted from the OCSP response.”
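In practice the configuration ends up looking something like this, where the responder file is just the concatenated PEM certificates of the DoD OCSP responders (the path shown is illustrative):

# explicitly trust DISA's self-signed responder certificates
SSLOCSPEnable on
SSLOCSPResponderCertificateFile /etc/httpd/conf/dod-ocsp-responders.pem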

NSS Trickery http://veridicalsystems.com/blog/nss-trickery/ http://veridicalsystems.com/blog/nss-trickery/#comments Wed, 20 Mar 2013 00:22:08 +0000 http://www.marq3.net/blog/?p=11 Like other comparable products, Firefox and Thunderbird ship with a wide assortment of pre-installed CA certificates. Not only the usual ones from Verisign, Equifax, and the like, but also ones from some obscure entities like “Staat der Nederlanden”, “Camerfirma Chambers of Commerce”, and “TURKTRUST Certificate Services”.

The DoD PKI policy mandates that trusted CA keystores contain only the CA certs specifically authorized by DISA. This makes sense if you think about it, as a desktop system in the Pentagon shouldn’t be trusting certs from foreign CAs.

Fixing the keystore should be easy: we just use the handy-dandy GUI-based certificate management tool to remove the unauthorized certs, right? Not so. If you try that, you find that after tediously clicky-clicking your way through and deleting 100-plus certificates, they initially appear to be gone. But, as soon as you restart Firefox (or Thunderbird, etc.) they all reappear. What is happening is that the NSS shared library libnssckbi.so automatically re-adds the bundled CA certs to the disk-resident keystore (the cert8.db file).

Now this is downright annoying. Presumably the Mozilla Foundation is being paid for the inclusion of the bundled CA certs and wants to discourage their removal in order to boost the commercial value of that placement, but as with the DoD policy there are legitimate reasons why end users may want to remove bundled certificates.

There appears to be no alternative to complete replacement of the libnssckbi.so library. The bundled certs are defined in the file mozilla/security/nss/lib/ckfw/builtins/certdata.txt in the source tree. The Mozilla-specific build process is annoyingly awkward, and it differs between Linux/Unix and Windows.
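A rough outline of the Linux/Unix side, assuming the classic Mozilla source layout of that era (make targets and output paths vary between NSS releases):

$ cd mozilla/security/nss
# delete the unwanted CA entries from the builtins list
$ vi lib/ckfw/builtins/certdata.txt
$ make nss_build_all
# the rebuilt module lands under mozilla/dist; copy it over the one shipped with Firefox
$ find ../../dist -name libnssckbi.so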

It should be noted that we have essentially the same problem in a different form with Microsoft Windows, as routine Microsoft-issued patches tend to reinsert CA certificates. Since we don’t have the option of modifying the software, culling the unwanted CA certs requires constant vigilance.
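At least the culling can be scripted with the stock certutil tool (the certificate name below is only an example):

C:\> rem list the machine Root store, then delete an unwanted entry by name
C:\> certutil -store Root
C:\> certutil -delstore Root "TURKTRUST Certificate Services"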

The Fickleness of FIPS http://veridicalsystems.com/blog/the-fickleness-of-fips/ http://veridicalsystems.com/blog/the-fickleness-of-fips/#comments Wed, 20 Mar 2013 00:19:50 +0000 http://www.marq3.net/blog/?p=9  
(Updated 2015-12-11)
 
All of my clients seeking FIPS 140-2 validations are concerned about schedule. The elapsed time to the final validation award is usually more important than cost. The biggest element of that timeline is the long hiatus between submission of the test report by the test lab to the CMVP and the point at which it is picked out of the inbox for CMVP review.

That time interval can vary dramatically and capriciously, as demonstrated by two recent validations. The test report for #1051 was submitted on 2008-04-28 and the validation award came on 2008-11-17, approximately 7 months. The test report for #1111 was submitted on 2008-02-29 but the validation award was not until 2009-04-03, approximately 13 months. Quite a difference, roughly half a year, sufficiently long in the latter case to spoil any commercial value of that validation.

How did the two validated products differ? Here’s the interesting part — both were based on the same source code! Even stranger, the “quick” validation was for source code based delivery and static linking, both well off the beaten path for most validations. The tardy validation was a bog standard binary shared library validation, the whole purpose of which was to quickly obtain a few validated binaries for DoD (the sponsor) while waiting for the source code based validation.

The FIPS validation process is so shrouded in secrecy that I will never know for sure why the one validation took nearly twice as long. The validations were performed by different test labs, but there was no evidence that I could see of negligence or incompetence on the part of the one test lab. The most likely cause was different reviewers at the CMVP. The CMVP review is (in my opinion) a very subjective process and different reviewers show very distinct preferences in their commentary and requirements for document changes. Interestingly enough the test lab informed me that the NIST reviewer in this case insisted on remaining anonymous; in the past I’ve always been told who was involved.

So there you have it — a very non-transparent process, anonymous bureaucrats, nearly a 2x difference in validation times for the same software. You pays yer money and you takes yer chances.

[Update 2015-12-11] An even better example of CMVP capriciousness:

The “RE” validation, an “Alternative Scenario 1A” clone of the #1747 validation, was approved November 13 2015 (http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140val-all.htm#2473).

It was submitted along with its identical twin, the “SE” validation, on April 17 2015. The two sets of paperwork differed in only one trivial aspect: “RE” in the module name for one versus “SE” for the other. Same module, same test lab, same paperwork, submitted together at the same time. A more perfect controlled study could not have been devised on purpose.

The “SE” validation was approved on June 25 (#2398), after a little more than two months (69 calendar days, 48 working days).

The “RE” validation was not approved for almost seven months (210 calendar days, 145 working days). That’s three times as long for the exact same submission. This is the most striking example yet of CMVP capriciousness.

Why the wild disparity? Well, probably because the two identical submissions were farmed out to two different reviewers. The review process is notoriously subjective, and in fact we received “comments” (requirements for changes) for the “RE” validation whereas the “SE” one was approved as-is. As a result the two Security Policy documents are no longer identical. That doesn’t explain the time discrepancy, though, as those “comments” weren’t received until long after “SE” had been approved.

The moral here is that FIPS 140-2 validations are a crapshoot; it’s impossible to make any reliable predictions on how long any validation action will take or how it will be received. If you have really deep pockets you can submit the same validation multiple times to hedge your bets (as done for the #1051 and #1111 validations discussed above), but for most of us it’s an open ended gamble: submit, hope, wait, …
