Lesson to be learned from WannaCry Friday

This article was published on LinkedIn on 16th May 2017. I’ve copied it in its entirety for you here. 

If you don’t know what you have, how can you protect it effectively?

Last Friday, the world received a massive wake-up call regarding the vulnerability of its computer systems, their interconnectedness, and the impact of failure or disruption on a large scale. In some respects it was reminiscent of the “fire sale” in Die Hard 4.0, though in the movie the attacks were specifically targeted and the motives were purely financial. 

In the real-life event, infected systems were not deliberately targeted – as far as we can tell at the moment. What better way to hide your true motives or targets than to hide them in plain sight alongside multiple other victims, who in effect become collateral damage? That has shades of the first Jack Reacher movie, but it is a viable tactic which is used often. (I’ll do my best not to make this article all about movies, please bear with me.) Misdirection is a common ploy: while you’re watching a fire in a field, someone is stealing your belongings from the house behind you. 

As more detail comes to light, with suggestions that North Korea was involved as the sponsor of the Lazarus Group (who were famously behind the Sony Pictures attack several years ago, and the Bangladesh Bank theft last year), it’s been interesting to watch vendors and consultants vying for a piece of the “action”. A contact of mine noted on LinkedIn that all the GDPR experts had “disappeared” or had suddenly become ransomware experts overnight. Opportunism or good business sense? I think the jury is still out, and I’ve seen both praise and condemnation levelled at a whole range of people and businesses. 

I recently wrote a piece cautioning users to beware of vendors selling the latest and greatest in terms of shiny equipment or jazzy software. Friday’s attack brought this home in spectacular fashion, I think. I’ve long been an advocate of doing the simple things well, and of addressing your threats through a risk-based approach. And what did we find were the main reasons for infection? 

  1. Poor patching. A Critical patch had been released by Microsoft on 14th March, and it would have protected systems from being infected if it had been deployed. This is a simple thing to fix. Examine your patching schedules / processes, and ensure that Critical patches are deployed as soon as possible. They’re Critical for a reason. Don’t forget that after patches have been applied you should reboot the machine, check that the patches are actually in place, and only then move on. 
  2. Unnecessary protocols not shut down or disabled. The SMB protocol – specifically the legacy SMBv1 version – appeared to be the main method used by the ransomware to spread once inside an organisation. It does not need to be present on most networks, but it is installed by default on some new systems. Disable it if you don’t need it, and check after every upgrade or new implementation that it is still disabled (a simple way to check both this and the patch status on a single machine is sketched after this list). Run internal network penetration tests and / or vulnerability scans on a regular basis – at least annually – and remediate any Critical, High or Medium risks highlighted in a timely manner. Then test again, to make sure you’ve not introduced any further vulnerabilities. 
  3. Use of unsupported software. I know that in some cases software cannot be upgraded because legacy systems depend on it, and that goes for operating systems too. Lack of support means no security patches or (in most cases) antivirus updates; it means your environment is becoming more at risk every day. If you have to use unsupported software, make sure it is fully patched up to and including the latest available patches, then look for options which help reduce the risk. For example, can it be run in a virtual environment? Can it be run with a whitelist of permitted applications and software versions? If so, do both of those things. 
  4. Poor user awareness. It appears that a good proportion of infections came when documents which had been emailed in were opened by unsuspecting users. Training your staff in how to spot suspicious emails, documents and links has to be more than just a tick box exercise carried out once a year. It has to be something which people are actively involved in, something they talk about on a regular basis. They can then ask colleagues for their views or opinions on suspect mail or attachments without fear of being thought silly or being too cautious. Talking about this sort of thing needs to be a normal and common part of everyday business life. 
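To make the first two points a little more concrete, here is a minimal sketch in Python of the kind of check that could be run on a single Windows machine: does one of the MS17-010 hotfixes appear in the installed-update list, and is the SMBv1 server component disabled? Treat it as illustrative rather than definitive – the KB numbers vary by operating system version (the ones below are examples only), and the registry value shown is the one Microsoft documents for turning off the SMBv1 server.

```python
"""
Illustrative sketch only: check one Windows machine for (a) an MS17-010
hotfix in the installed-update list and (b) whether the SMBv1 server
component is disabled. KB numbers vary by OS version - the set below is
an example, not a complete list.
"""
import subprocess
import winreg

# Example KB numbers associated with MS17-010; adjust for your OS versions.
MS17_010_KBS = {"KB4012212", "KB4012213", "KB4012215", "KB4012598"}


def installed_hotfixes():
    """Return the set of KB identifiers reported by 'wmic qfe'."""
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines()
            if line.strip().startswith("KB")}


def smb1_server_disabled():
    """True only if the SMB1 value under LanmanServer\\Parameters is 0.

    A missing value is treated as 'not disabled', since older systems
    enable SMBv1 by default.
    """
    key_path = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            value, _ = winreg.QueryValueEx(key, "SMB1")
            return value == 0
    except FileNotFoundError:
        return False


if __name__ == "__main__":
    found = installed_hotfixes() & MS17_010_KBS
    print("MS17-010 hotfix present:", ", ".join(sorted(found)) or "none detected")
    print("SMBv1 server disabled:  ", smb1_server_disabled())
```

Even something this small, run regularly and reported centrally, gives you a measurable answer to “are we exposed?” rather than a guess.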

As you can see, these are simple fixes, and they don’t necessarily cost the earth. The first three are really good candidates to include on a risk register and / or monthly (perhaps weekly?) security report. In all things cyber, it’s important to know what “normal” looks like for your environment, so that you can then measure the improvement (or otherwise) from implementing new solutions. The last aspect, user awareness, appears to be changing slowly, and I think we can do more to help speed it up. 

One thing that hasn’t come up too often in the analysis after the fact is something which isn’t particularly easy to do, but which would have helped in cases like Friday’s. I’m talking about good asset management: knowing what you have and what’s on it. Having a corporate view of what software is currently in place – or at least was in place yesterday – could go a long way towards knowing which devices you need to concentrate on. It sounds complicated, so let me explain.

My perception is that many systems were shut down as a precautionary measure, because people didn’t know where the infection was coming from or how it was spreading. Once those facts were known, restarting everything took quite a while, because each individual machine had to be manually checked to ensure it wasn’t infected and that it was patched appropriately. A good, up-to-date software inventory / asset list would have shown which devices were already patched and could therefore be excluded from much of that manual effort. 
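As an illustration of what the most basic version of that could look like, here is a hedged sketch of a per-machine snapshot feeding a shared inventory: hostname, operating system version and installed hotfixes appended to a CSV file. The file name and the collection mechanism are placeholders – in practice you would feed whatever asset-management or endpoint tooling you already run – but the principle is the same: when a specific patch suddenly matters, you search the inventory instead of walking the floor.

```python
"""
Illustrative sketch only: record a per-machine snapshot (hostname, OS
version, installed hotfixes) into a shared CSV so that patch status can
be looked up centrally. The file name and transport are placeholders.
"""
import csv
import datetime
import platform
import socket
import subprocess

INVENTORY_FILE = "asset_inventory.csv"  # hypothetical shared location


def installed_hotfixes():
    """Installed Windows updates via 'wmic qfe'; empty list elsewhere."""
    if platform.system() != "Windows":
        return []
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True,
    ).stdout
    return sorted(line.strip() for line in output.splitlines()
                  if line.strip().startswith("KB"))


def snapshot():
    """One row describing this machine today."""
    return {
        "collected": datetime.date.today().isoformat(),
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "hotfixes": ";".join(installed_hotfixes()),
    }


if __name__ == "__main__":
    row = snapshot()
    with open(INVENTORY_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:  # brand-new file: write the header first
            writer.writeheader()
        writer.writerow(row)
    print("Recorded:", row["hostname"], row["os"])
```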

There’s a tried and tested mantra which I still like: if you don’t know what you have, how can you protect it effectively? 

In summary – you don’t need shiny hardware or high cost software. Do the simple things really well, keep measuring how well you’re doing them, and you’re in a great starting place. 
