Implementer's notes

What might get caught in the gears under the hood?

Heartbleed Status: Upgrading to Heartbreak


(Update: A number of references to this article have incorrectly described me as working for Opera Software. Please note that I left Opera Software more than a year ago, and that I now work for Vivaldi Technologies AS.)

Update May 12: After closer investigation together with F5, it seems that, due to an issue with the network connection of the prober, the test used to detect F5 BigIP servers showed higher numbers than it should have, and the counts of such servers were therefore very inflated in the scans run over the past month. This means that the BigIP-related information and conclusions are not correct, and I have therefore struck out the section regarding BigIP servers and moved it down. My apologies to F5 and their customers for this mistake.


As was mentioned in my previous article, a few weeks ago the existence of the so-called "Heartbleed" vulnerability (CVE-2014-0160) in the OpenSSL library was disclosed, which could be used to extract sensitive information, such as user passwords and the site's private encryption keys, from vulnerable "secure" web servers (xkcd explanation).
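For readers unfamiliar with the mechanics, the bug boiled down to a missing length check: a heartbeat request carries its own claimed payload length, and vulnerable OpenSSL versions echoed back that many bytes without verifying the claim, reading past the real payload into adjacent memory. The following is a minimal illustrative simulation in Python (not the actual OpenSSL code; the function names, the `server_memory` stand-in, and the sample secret are all hypothetical):

```python
def heartbeat_response(request_payload: bytes, claimed_length: int,
                       server_memory: bytes) -> bytes:
    """Simulated VULNERABLE heartbeat echo. `server_memory` stands in for
    whatever data happens to sit in memory next to the request buffer."""
    buffer = request_payload + server_memory
    # Bug: trust the attacker-supplied length field and echo that many
    # bytes, reading past the end of the real payload.
    return buffer[:claimed_length]


def patched_heartbeat_response(request_payload: bytes, claimed_length: int,
                               server_memory: bytes) -> bytes:
    """Simulated PATCHED echo: discard any heartbeat whose claimed length
    exceeds the actual payload length, as RFC 6520 requires."""
    if claimed_length > len(request_payload):
        return b""  # silently drop the malformed heartbeat
    return request_payload[:claimed_length]


secret = b"user=admin;pass=hunter2"  # stand-in for keys or passwords in memory
leaked = heartbeat_response(b"ping", 20, secret)        # echoes 16 secret bytes
safe = patched_heartbeat_response(b"ping", 20, secret)  # returns b""
```

The point of the sketch is that the attacker never "breaks in": the server volunteers the extra bytes simply because it trusts a length field the client controls.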

As a result, all affected web sites must patch their servers, as well as take other actions to remove the danger to their users, a danger that has increased since the disclosure (there have already been several reported incidents, and at least one arrest in relation to exploits using Heartbleed).

Patch coverage slowing

In the weeks since the disclosure I have been tracking the patching of the Heartbleed issue, using my TLS Prober. The TLS Prober currently scans a set of about 500 000 separate servers, using variations of host names in various domains (in total 23 million), with a selection bias towards Alexa Top million sites.

In the six scans I have made since April 11, the number of vulnerable servers has trended sharply downward, from 5.36% of all servers to 2.33% this week. About 20% of the scanned servers support the Heartbeat TLS extension, indicating that up to 75% of the affected servers had been patched before my first scan, four days after the disclosure.
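The "up to 75%" figure follows directly from the two percentages above, under the simplifying assumption (not guaranteed by the data) that essentially all Heartbeat-capable servers ran a vulnerable OpenSSL at disclosure time:

```python
heartbeat_share = 0.20     # fraction of scanned servers supporting Heartbeat
vulnerable_share = 0.0536  # fraction of all servers still vulnerable, April 11

# Of the servers that could have been affected, how many still were?
still_vulnerable = vulnerable_share / heartbeat_share  # about 0.268
already_patched = 1 - still_vulnerable                 # about 0.732, "up to 75%"
```

The estimate is an upper bound: any Heartbeat-capable server that never ran a vulnerable version inflates the apparent patch rate.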

However, while the vulnerability rate had been halved, to 2.77%, after two weeks, in the most recent scan, two weeks later, it has only been reduced to 2.33%, indicating that patching of vulnerable servers has almost completely stopped.


The data indicates, however, that most publicly used vulnerable sites have been patched. While 73% of all the sites scanned use certificates issued by a Certificate Authority recognized by the browsers, only 30% of the vulnerable sites use such certificates. This reduces the severity of the problem slightly, although what may look like an "unimportant" server could be very important to the people using it.

A more problematic issue is that many of the certificates that were used on vulnerable servers are still in use on the patched servers. In fact, assuming that all servers supporting Heartbeat in the first scan were vulnerable, two thirds of the certificates have not been replaced after the servers were patched (the certificates of the patched servers having been observed in previous scans). Given that any server patched after April 7 has to be assumed to have had its certificate's private key compromised (because criminals may have used Heartbleed against it before the patch), this indicates a serious problem for the users of those sites.
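The cross-scan comparison behind that two-thirds figure amounts to simple set arithmetic over certificate fingerprints. A minimal sketch, with entirely hypothetical fingerprints standing in for the hashes the prober would record per scan:

```python
# Hypothetical fingerprints of certificates observed on (presumed vulnerable)
# Heartbeat-supporting servers in the first scan, and on the now-patched
# servers in the latest scan.
first_scan_certs = {"aa11", "bb22", "cc33", "dd44", "ee55", "ff66"}
latest_scan_certs = {"aa11", "bb22", "cc33", "dd44", "zz99"}

# A certificate seen in both scans was carried over unchanged through the
# patch, i.e. it was never replaced despite the possible key compromise.
unreplaced = first_scan_certs & latest_scan_certs
reuse_rate = len(unreplaced) / len(first_scan_certs)  # 4/6, about two thirds
```

In practice the analysis also has to tolerate servers changing IP addresses between scans, which is why tracking by certificate and tracking by IP were both used as cross-checks.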

Additionally, hiding in the numbers was another problem.

Upgrading to Heartbreak

This problem was hinted at by my analysis of the (later found to be incorrect) BigIP numbers, but may be the worst of them: in my most recent scan, 20% of the currently vulnerable servers (as distinguished by IP address), and 32%(!) of the vulnerable BigIP servers, were NOT vulnerable when they were scanned previously. This means that thousands of sites have gone from not having a Heartbleed problem to having one!
A possible issue with these numbers is that the analysis assumes that servers stay on the same IP address throughout the period, which might not always be the case. However, performing the same analysis while tracking the server certificates indicated the same trend.

It is difficult to say definitely why this problem developed, but one possibility is that all the media attention led concerned system administrators to believe their systems were insecure. This, perhaps combined with administrative pressure and a need to "do something", led them to upgrade an unaffected server to a newer, but still buggy, version of the system, perhaps because their system variant had not yet been officially patched.

Fixing this problem will require the affected sites to go through another patch cycle, which will cost each affected site money it should not have needed to spend. Assuming that each server patch, certificate replacement and test cycle consumes 4 hours for 3 sysadmins, each hour costing USD 40, the estimated extra cost of patching the 2500 "Heartbroken" servers in my sample will be around USD 1.2 million. As my sample probably covers no more than 10% of the secure servers on the net, the unnecessary patching cost could exceed USD 12 million.
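A quick back-of-the-envelope check of that estimate, using only the figures stated above (the 10% sample coverage is, as noted, an assumption):

```python
hours_per_cycle = 4         # patch, replace certificate, test
sysadmins = 3
hourly_rate_usd = 40
heartbroken_servers = 2500  # "Heartbroken" servers in the sample

cost_per_server = hours_per_cycle * sysadmins * hourly_rate_usd  # USD 480
sample_cost = cost_per_server * heartbroken_servers              # USD 1,200,000

scale_factor = 10  # sample assumed to be ~10% of the net's secure servers
net_wide_cost = sample_cost * scale_factor                       # USD 12,000,000
```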

Heartbleed is a very serious issue, and getting the word out to all affected parties is necessary, but I am starting to think that the press coverage, with all its hype and incorrect statements about issues surrounding the problem (such as "change your passwords immediately!" before the servers are patched, and "Heartbleed related certificate revocation will slow down secure surfing") has, at least to some extent, become counter-productive.

The above numbers, showing servers that were "upgraded" into a vulnerability they were not previously exposed to, may be an indication of these problems with the press coverage.

My recommendation is, as before: get the servers patched, certificates replaced and revoked, and passwords changed, in that sequence. The coverage should concentrate on that, and the panic level should be dialed way down. Please stick to the facts about the problem, provide guidance to readers on how to discover whether they are affected (there are web tools available, such as the SSL Labs Server Test), and explain how to fix it.

Number of vulnerable F5 BigIP servers holding steady

Update May 12: As mentioned above, this section was based on incorrect data. The text has therefore been struck out and moved down to the end.

The first problem is that the absolute number of F5 BigIP servers (powerful SSL/TLS accelerators) with a particular configuration that uses vulnerable versions of OpenSSL has been holding steady. BigIP servers are currently very easy to detect, as they have a problem with certain kinds of TLS handshakes.

This is a bit deceptive, since this variant of BigIP servers has actually been slightly better patched than the general population of servers that support Heartbeat.

The reason the absolute number of vulnerable BigIP servers has been holding steady is that the number of BigIP servers using OpenSSL 1.0.1 (which supports Heartbeat) has doubled in the past month (after a slow decline over the past two years), and many of the new servers using this OpenSSL mode were running a vulnerable version.

I suspect that part of the reason for this issue is that some new customers have installed their new BigIP server, probably from the vendor's existing inventory, but forgot to upgrade its firmware before deploying it.

As BigIP servers are used by sites serving large numbers of users, this represents a significant security problem for those users.



  • DLCox, Wednesday, May 7, 2014

    A very astute and informative post; thanks for your effort to elaborate on the issue of net vulnerabilities. The best course of action is for users to not only change their password, but to do it frequently (sort of like flushing the toilet after use)--I would suggest weekly.....

  • QuHno, Thursday, May 8, 2014

    Nice post about a really messy issue that shows well that security and hysterics don't go together well.

    BTW: Could you give some insights on the certificate revocation process and how it could be handled in a more secure way than it is now in a later post, please?

    The reason why I am asking for that:
    If I am not completely misinformed, Chrome exclusively uses its own CRL, which is compiled from only a fraction of the available CAs: what it does not have in its own CRL is not checked for revocation. This would mean that if Chrome does not use the specific CA's revocation list, it will happily show the site as secure despite the revoked cert. IMHO that would allow for great MITM attacks.

  • john.yaya, Sunday, May 18, 2014

    Thanks for the detailed analysis.
