
Re: [LUG] OpenSSL 1.0.1 "Heartbleed" vulnerability

 

On 08/04/14 09:10, Martijn Grooten wrote:
> Things rarely get more serious than this:
> 
> http://arstechnica.com/security/2014/04/critical-crypto-bug-in-openssl-opens-two-thirds-of-the-web-to-eavesdropping/
> http://heartbleed.com/
> 
> Martijn.


This really ruined my day :[

As usual, I was up late last night and saw this in the early hours of
the morning (of course, if Full Disclosure hadn't just shut down there
might have been a few whispers about this a bit earlier... *grumble
grumble*), so I had a few hours' head start on this and haven't slept
since. After some hectic work furiously updating auth+certs all over
the place, updating Snort rules, emailing onsite admins and stalking
IRC for updates and info, I think I'm pretty much done now.

And you know what? I'm not that worried, actually. Whilst it is
obviously a catastrophic flaw in OpenSSL, I've been using PFS (Perfect
Forward Secrecy) everywhere, with no exceptions, for a while now after
the CRIME/BEAST attacks, and that heavily mitigates the impact. It
renders any recorded traffic flows useless even with a compromised
cert and forces an attacker back to computationally expensive active
MITM attacks - definitely possible, but back to manageable levels of
worry, not "the sky is falling" levels.

Obviously, the researchers want to talk up their finding as much as
possible, but I want to see a PoC of this 'easy' recovery of OpenSSL
certs/private keys, user+pass details, etc, and I'm not the only one.
I don't think it follows that it's anywhere near as easy as they're
stating - I mean, *any* data? From a vulnerability that can 'only'
read up to 64k from the process that does the TLS heartbeat, without a
choosable offset and with a rapidly growing heap? Sure, this is bad,
but is it that bad? I want to see code.
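
Just to be concrete about where that 64k figure comes from - this is
only my reading of the draft linked below and the public writeups,
sketched in Python; it builds the malformed heartbeat message and
nothing more (no handshake, no memory dumping):

  import struct

  # HeartbeatRequest claiming far more payload than it carries. A
  # vulnerable peer echoes back payload_length bytes without checking
  # that the record actually contained them, so you get up to ~64k of
  # whatever sits next to the request buffer on its heap, per request -
  # but you don't get to choose the offset.
  payload = b""                    # no actual payload bytes sent
  claimed_length = 0xFFFF          # ...but we claim 65535 of them

  # type 0x01 = heartbeat_request (no padding either - also malformed)
  heartbeat = struct.pack("!BH", 0x01, claimed_length) + payload

  # Wrapped in a TLS record: content type 24 (heartbeat), version TLS 1.1.
  record = struct.pack("!BHH", 0x18, 0x0302, len(heartbeat)) + heartbeat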

None of my clients have a particularly large web presence, although
nearly all of them do have websites and hence webservers to probe.
However, I had already long since put in place full instructions and
procedures for truly apocalyptic security scenarios like full rooting
(pull ALL the plugs, enter lockdown mode, await my onsite arrival) and I
don't think I'm going to need to use them. All of my clients knew
basically what to do, and they're more relieved than annoyed that I've
just put them through yet another full "rotate out critical
keys/passes/certs" cycle (which I make them all do every 6 months
anyway for practice, regardless of normal update routines), and none
of them are panicking, at least for now. I'm not prescient or
anything; it's just that after the recent wave of critical TLS vulns
in particular it seemed like madness not to have a complete procedure
accounting for utter disasters like this, or even much worse: it was
obviously only going to be a matter of time until something like an
SSH backdoor turned up.
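
For what it's worth, the mechanical part of that rotation cycle is
nothing clever - per service it boils down to roughly the following
(the paths and hostname are obviously placeholders I've invented here;
the painful part is the CA reissue and revocation, not the key
generation):

  import subprocess

  # Placeholder host/paths purely for illustration.
  key = "/etc/ssl/private/www.example.org.key.new"
  csr = "/etc/ssl/private/www.example.org.csr.new"

  # 1. Fresh private key - never reuse the old one, it has to be
  #    treated as leaked.
  subprocess.run(["openssl", "genrsa", "-out", key, "2048"], check=True)

  # 2. New CSR to send to the CA for reissue; the old cert then gets
  #    revoked once the replacement is deployed.
  subprocess.run(
      ["openssl", "req", "-new", "-key", key, "-out", csr,
       "-subj", "/CN=www.example.org"],
      check=True,
  )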

I had a good look at the various honeypots we run during the chaos -
they quite intentionally catch a lot of bad behaviour and run
deliberately weak security - and if this were really as bad as the
researchers are saying, I would expect to see them utterly compromised
by now: fully and comprehensively rooted, with most of the usual
logging conspicuously absent. Yet none of the planted canaries have
been triggered at all, other than by the usual boring daily crap, and
despite their routine daily torrent of abuse they're just functioning
as usual. This again makes me think that while it's entirely possible,
if not probable, that our friends at the NSA/GCHQ/etc are skilful
enough to have detected and started exploiting this bug, it's either
not as bad as stated, harder to reliably exploit than stated (you're
going to notice in the logs when your SSL-linked services start
crashing a lot), less widespread than expected, or all of the above.
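
(For anyone unfamiliar with the term: a planted canary needn't be
anything fancier than the sketch below - a juicy-looking bait file that
nothing legitimate ever touches, plus a cron job that screams if it
changes. The paths here are made up and my real setup differs, but it
shows the idea.)

  import json
  import pathlib
  import sys

  # Made-up paths, purely illustrative.
  CANARY = pathlib.Path("/srv/honeypot/etc/fake-shadow")
  STATE = pathlib.Path("/var/lib/canary/fake-shadow.state")

  def fingerprint(path: pathlib.Path) -> dict:
      st = path.stat()
      # A change in size or mtime means something wrote to the bait
      # file; a missing file means something deleted it. Either way:
      # alarm.
      return {"size": st.st_size, "mtime": st.st_mtime}

  def main() -> int:
      try:
          current = fingerprint(CANARY)
      except FileNotFoundError:
          print("CANARY MISSING - assume compromise")
          return 2
      if STATE.exists():
          baseline = json.loads(STATE.read_text())
          if baseline != current:
              print("CANARY TRIPPED - assume compromise")
              return 1
      STATE.parent.mkdir(parents=True, exist_ok=True)
      STATE.write_text(json.dumps(current))
      return 0

  if __name__ == "__main__":
      sys.exit(main())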

Further reading - there's a really good comment on slashdot from a
similarly sceptical fellow looking for hard advice on the full
implications of this, well worth a read:

http://it.slashdot.org/comments.pl?sid=5000257&cid=46691385

There are already several online tools to scan a site for vulnerability:

http://filippo.io/Heartbleed/
http://s3.jspenguin.org/ssltest.py
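
And for a quick first pass on a box you control, checking what the
local Python is linked against (a heuristic only - distros often
backport the fix without bumping the version letter, and your daemons
may well use a different OpenSSL than the interpreter does):

  import re
  import ssl

  # Heartbleed affects OpenSSL 1.0.1 through 1.0.1f; 1.0.1g is fixed,
  # and the 0.9.8/1.0.0 branches never had the heartbeat code at all.
  print(ssl.OPENSSL_VERSION)

  if re.match(r"OpenSSL 1\.0\.1([a-f]?)\b", ssl.OPENSSL_VERSION):
      print("Potentially vulnerable - patch, then reissue keys/certs.")
  else:
      print("Not in the affected 1.0.1-1.0.1f range (still check your daemons).")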

Snort rules (not everyone has a live paid-for feed):

http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/

Best technical writeup I've seen:

http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html

IETF TLS/DTLS draft:

http://tools.ietf.org/html/draft-ietf-tls-dtls-heartbeat-04

So, apart from lingering concerns about how this will obviously end up
negatively affecting a whole bunch of virtually defenceless end users
and making the web generally less safe, about the whole bunch of
unpatched servers that will be left out there for all eternity, and
most of all about how the upstream CAs are going to handle this (have
fun negotiating with your CA providers!), I'm not going to lose any
sleep over this. Well, apart from the entire night I've already just
lost, of course.

Apologies if this post is any more incoherent than usual; it's been a
long day for me. Now I'm going to have a nice beer and probably fall
asleep in the back garden and wake up with sunburn and ant bites all
over me. I'm willing to bet that when I wake up, none of my servers
will have any issues, a proper PoC for truly easy extraction of SSL
private keys, etc, will still not have emerged, and the internet won't
be on fire.

One of my friends emailed earlier from the depths of his server room:
"Thank god I don't run Linux on any of my machines any more: I'm so glad
I switched them all to Windows XP today!"

Regards

-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq