How does your security measure up?

I published this article on LinkedIn on Monday 3rd July 2017, and I’ve copied it here for you.

If you don’t know what you have, how can you measure it?

We read a lot these days about equipment and training to help combat cyber attacks and reduce risks, but I don’t see much about today’s topic. It’s really good that you have controls in place, with defence in depth etc, but how do you know they’re working?

It seems to me that we often forget to take into account the requirement to measure key components on our systems, so that we know when things are working well and when they’re not. This isn’t about audit, which gives you a snapshot, a point in time view. This is about consistent, regular (possibly even real-time) monitoring and reporting on systems.
The first step in this process is to identify what matters to you most – in many, if not all, cases this will be the data your systems hold. 
Then, look at the controls you have in place, and think about what information would give you assurance that your controls are effective. 
For example, if you have highly sensitive data on all your laptops, knowing which devices are not encrypted might be a really key measurement for you. In this instance, you may decide it is unacceptable for any laptops to be unencrypted, or you may decide you’re happy with a tolerance of 5% or 10%.
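By way of illustration, here's a minimal sketch of how you might turn that tolerance into a repeatable measurement. It assumes a hypothetical inventory export called laptops.csv with "hostname" and "encrypted" columns, and a 5% threshold – the file name, column names and threshold are all placeholders for whatever your own tooling produces.

```python
# Minimal sketch: measure laptop encryption compliance against a tolerance.
# Assumes a hypothetical inventory export "laptops.csv" with columns
# "hostname" and "encrypted" ("yes"/"no") - adjust to your own tooling.
import csv

TOLERANCE_PERCENT = 5.0  # illustrative threshold; yours may well be 0

def encryption_report(path="laptops.csv"):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    unencrypted = [r["hostname"] for r in rows
                   if r["encrypted"].strip().lower() != "yes"]
    percent = 100.0 * len(unencrypted) / len(rows) if rows else 0.0
    status = "OK" if percent <= TOLERANCE_PERCENT else "OUTSIDE TOLERANCE"
    print(f"{len(unencrypted)} of {len(rows)} laptops unencrypted "
          f"({percent:.1f}%) - {status}")
    for host in unencrypted:
        print(f"  follow up: {host}")

if __name__ == "__main__":
    encryption_report()
```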
One of the fundamental features of reporting is knowing what you have, where it is, and what software is loaded on it. If we look at the recent ransomware outbreaks of WannaCry and Petya, we know that these malware packages make use of specific vulnerabilities which were addressed by specific patches. If your inventory is up to date, you can check for the devices missing those specific patches, and target them immediately, rather than checking every single machine. The same held true with Heartbleed and other outbreaks of a similar nature. 
Some would say that regular reporting on critical patches which have not been installed is a waste of time: personally, I think it’s a good metric and invaluable in deploying resources effectively. You should already have a patch schedule, but does it take Critical patches into account? If not, it’s time to start thinking about being proactive with them and pushing them out ahead of the normal schedule.  
Similarly, you will probably want to know which devices have aged (out of date) antivirus signatures: if they’re not within a couple of days of release then, in this day and age, you’re running a risk. Report / alert on devices where this is the case, or where AV isn’t running at all. (While you’re at it, you might want to investigate ways of determining whether AV is running but not scanning anything – I have seen this on several occasions.)
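As a rough sketch of that targeting step, the following assumes a hypothetical export called inventory.csv with one row per device and a semicolon-separated list of installed patch identifiers; the layout and the KB number shown are placeholders – substitute whatever your patch management tool reports and whichever KBs the relevant advisory names for your systems.

```python
# Minimal sketch: find devices missing specific Critical patches.
# Assumes a hypothetical export "inventory.csv" with columns
# "hostname" and "installed_patches" (semicolon-separated KB identifiers).
import csv

# Placeholder identifier - substitute the KBs named in the advisory
# you are responding to, which vary by operating system version.
REQUIRED_PATCHES = {"KB4013389"}

def missing_patch_report(path="inventory.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            installed = {p.strip() for p in row["installed_patches"].split(";")}
            missing = REQUIRED_PATCHES - installed
            if missing:
                print(f"{row['hostname']}: missing {', '.join(sorted(missing))}")

if __name__ == "__main__":
    missing_patch_report()
```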
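Here's a minimal sketch of that kind of report, again assuming a hypothetical export (av_status.csv) from whichever antivirus console you run, with a running flag and a last-signature-update date per device; the column names and the two-day threshold are my assumptions, not a standard.

```python
# Minimal sketch: flag devices with stale AV signatures or with AV not running.
# Assumes a hypothetical export "av_status.csv" with columns
# "hostname", "av_running" ("yes"/"no") and "signature_date" (YYYY-MM-DD).
import csv
from datetime import date, datetime

MAX_AGE_DAYS = 2  # illustrative threshold

def av_report(path="av_status.csv"):
    today = date.today()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["av_running"].strip().lower() != "yes":
                print(f"{row['hostname']}: AV NOT RUNNING")
                continue
            sig_date = datetime.strptime(row["signature_date"], "%Y-%m-%d").date()
            age = (today - sig_date).days
            if age > MAX_AGE_DAYS:
                print(f"{row['hostname']}: signatures {age} days old")

if __name__ == "__main__":
    av_report()
```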
You will also probably want to baseline the traffic profile coming into and out of your network so that you know what looks normal, making it easier to spot unusual activity. Pay attention to the days and times that traffic is present: if you get a lot of traffic at 3 in the morning, why is that? 
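To give a feel for what a simple baseline might look like, this sketch assumes you can export flow or firewall logs to a CSV (flows.csv) with a timestamp and byte count per record, and flags any hour of the day whose volume sits well above the average. The file format and the "three standard deviations" rule are purely illustrative, not a recommendation of any particular product or method.

```python
# Minimal sketch: baseline traffic volume by hour of day and flag unusual hours.
# Assumes a hypothetical log export "flows.csv" with columns
# "timestamp" (ISO 8601, e.g. 2017-07-03T03:15:00) and "bytes".
import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean, pstdev

def hourly_baseline(path="flows.csv"):
    per_hour = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["timestamp"]).hour
            per_hour[hour] += int(row["bytes"])
    if not per_hour:
        print("No traffic records found")
        return
    volumes = list(per_hour.values())
    avg, spread = mean(volumes), pstdev(volumes)
    for hour in sorted(per_hour):
        flag = "  <-- unusual, investigate" if per_hour[hour] > avg + 3 * spread else ""
        print(f"{hour:02d}:00  {per_hour[hour]:>12} bytes{flag}")

if __name__ == "__main__":
    hourly_baseline()
```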
Finally, when presenting this information to your senior management, don’t leave it as raw figures. Present it in terms of risk and impact, from a financial and reputational viewpoint. That makes it easier to understand why something needs to be done and should help with getting additional resources to address those risks. 

If you don’t measure what you have, how can you improve it?

Lesson to be learned from WannaCry Friday

This article was published on LinkedIn on 16th May 2017. I’ve copied it in its entirety for you here. 

If you don’t know what you have, how can you protect it effectively?

Last Friday, the world received a massive wake-up call with regard to the vulnerability of its computer systems, their interconnectedness, and the impact of failure or disruption on a large scale. In some respects it was reminiscent of the “fire sale” in Die Hard 4.0, though in the movie the attacks were specifically targeted and the motives were purely financial. 

In the real life event, infected systems were not deliberately targeted – as far as we can tell at the moment. What better way to hide your true motives or targets than to hide them in plain sight among multiple other victims who in effect become collateral damage? That has shades of the first Jack Reacher movie, but it is a viable tactic which is used often. (I’ll do my best not to make this article all about movies, please bear with me.) Misdirection is a common ploy: think of it as being distracted by a fire in a field while someone steals your belongings from the house behind you. 

As more detail comes to light, with suggestions that the North Koreans were involved as the source of the Lazarus Group (who were famously behind the Sony Pictures attack several years ago, and the Bangladesh Bank theft last year), it’s been interesting to watch vendors and consultants vying for a piece of the “action”. A contact of mine noted on LinkedIn that all the GDPR experts had “disappeared” or had suddenly become Ransomware experts overnight. Opportunism or good business sense? I think the jury is still out, and I’ve seen both praise and condemnation levelled at a whole range of people and businesses. 

I recently wrote a piece cautioning users to beware of vendors selling the latest and greatest in terms of shiny equipment or jazzy software. Friday’s attack brought this home in spectacular fashion I think. I’ve long been an advocate of doing the simple things well, and addressing your threats through a risk based approach. And what did we find the main reasons for infection were? 

  1. Poor patching. A Critical patch had been released by Microsoft on 14th March, and would have protected systems from being infected if it had been deployed. This is a simple thing to fix. Examine your patching schedules / processes, and ensure that Critical patches are deployed as soon as possible. They’re Critical for a reason. Don’t forget that after patches have been applied you should reboot the machine, check that the patches are in place and only if they are, move on. 
  2. Unnecessary protocols not shut down or disabled. The SMB protocol appeared to be the main method used by the ransomware to spread once inside an organisation. It doesn’t need to be present in most networks, but it gets installed by default on some new systems. Disable it if you don’t need it, and check after every upgrade or new implementation that it is still disabled (see the sketch after this list for one quick way of checking). Run internal network penetration tests and / or vulnerability scans on a regular basis – at least annually – and remediate any Critical, High or Medium risks highlighted in a timely manner. Then test again, to make sure you’ve not introduced any further vulnerabilities. 
  3. Use of unsupported software. I know that in some cases software cannot be upgraded because legacy systems depend on it, and that goes for operating systems too. Lack of support means no security patches or (in most cases) antivirus updates. Lack of support means your environment is becoming more at risk every day. If you have to use unsupported software, make sure it is fully patched up to and including the latest available patches, then look for options which help reduce the risk. For example, can it be run in a virtual environment? Can it be run with a whitelist of permitted applications and software versions? If so, do both of those things. 
  4. Poor user awareness. It appears that a good proportion of infections came when documents which had been emailed in were opened by unsuspecting users. Training your staff in how to spot suspicious emails, documents and links has to be more than just a tick box exercise carried out once a year. It has to be something which people are actively involved in, something they talk about on a regular basis. They can then ask colleagues for their views or opinions on suspect mail or attachments without fear of being thought silly or being too cautious. Talking about this sort of thing needs to be a normal and common part of everyday business life. 
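As promised under point 2, here is a minimal sketch of one crude way to see which hosts on a list are still listening on TCP port 445, the port SMB uses, so you can see where the protocol remains exposed. The host list (hosts.txt) is a hypothetical file of your own making; run this only against systems you are authorised to test, and treat it as a quick check rather than a substitute for a proper vulnerability scan.

```python
# Minimal sketch: check which internal hosts still expose SMB (TCP port 445).
# Assumes a hypothetical "hosts.txt" with one IP address or hostname per line.
# Only run this against systems you are authorised to test.
import socket

SMB_PORT = 445
TIMEOUT_SECONDS = 1.0

def smb_exposure_report(path="hosts.txt"):
    with open(path) as f:
        hosts = [line.strip() for line in f if line.strip()]
    for host in hosts:
        try:
            with socket.create_connection((host, SMB_PORT), timeout=TIMEOUT_SECONDS):
                print(f"{host}: port 445 OPEN - check whether SMB is really needed here")
        except OSError:
            print(f"{host}: port 445 not reachable")

if __name__ == "__main__":
    smb_exposure_report()
```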

As you can see, these are simple things to fix, and they don’t necessarily cost the earth. The first three are really good candidates to include on a risk register and / or monthly (perhaps weekly?) security report. In all things cyber, it’s important to know what “normal” looks like for your environment so that you can then measure the improvements (or otherwise) of implementing new solutions. The last aspect, user awareness, appears to be changing slowly and I think we can do more to help speed it up. 

One thing that hasn’t come up too often in the analysis after the fact is something which isn’t particularly easy to do, but which would help in cases like Friday’s. I’m talking about good asset management: knowing what you have and what’s on it. Having a corporate view of what software is currently in place – or at least was in place yesterday – could go a long way towards knowing which devices you need to concentrate on. It sounds complicated, so let me explain.

My perception is that many systems were shut down as a precautionary measure, because people didn’t know where the infection was coming from or how it was spreading. Once those facts were known, restarting everything took quite a while because each individual machine would have to be manually checked to ensure it wasn’t infected and that it was patched appropriately. A good and up-to-date software inventory / asset list would have shown which devices were patched and could therefore be discounted from needing so much manual time. 

There’s a tried and tested mantra which I still like: if you don’t know what you have, how can you protect it effectively? 

In summary – you don’t need shiny hardware or high cost software. Do the simple things really well, keep measuring how well you’re doing them, and you’re in a great starting place. 

Global Cyber Attack 

Yesterday, May 12th 2017, saw a mass global cyber attack launched with impeccable timing just before the weekend. Over 75,000 machines were affected in around 100 countries – so far. 

It is believed that the attack makes use of hacking tools released by a group called the Shadow Brokers. This is the same group that, a couple of months ago, leaked exploits developed by the NSA.

The effect was for many businesses and government departments to be hit with Ransomware (which I’ll cover on here soon). This encrypted their files, which could only be recovered by paying a ransom in a virtual currency called Bitcoin. 

Once the ransom is paid the bad guys may or may not decrypt the files – there are no guarantees. 

I said it was good timing because the Ransomware gives users 3 days to pay the ransom. Many users will have started their weekend already (and in much of the Middle East the weekend is Friday and Saturday), so there’s a good chance that some users will not get to their devices in time and will have to pay – or trash their machines and rebuild them.

Many businesses and government agencies such as the NHS simply shut all systems down in order to prevent them being infected. This is one reason why the impact has been so huge.

No doubt the plan is that once the fix is known (for devices which are infected) then it will be applied to machines individually as they are restarted. 

It’s also worth mentioning that at present this doesn’t look like any kind of data breach. Files have been encrypted so the data is inaccessible, but the data hasn’t been accessed or copied – as far as we can tell at the moment. 

That’s what happened, so how do you protect yourself and your business? The answer is surprisingly straightforward. 

  1. Install the MS17-010 patch on all Microsoft Windows devices. This Critical patch was released by Microsoft on 14 March this year, and the Ransomware takes advantage of a vulnerability which the patch fixes. If your machine has been set to apply updates automatically, then assuming you’ve rebooted your machine since the update was applied, you should be safe. If you don’t have Auto Update enabled, manually search for updates and install them now (see the sketch after this list for one way of checking a single machine). 
  2. If you’re on a network, make sure that your network administrators have disabled the SMB protocol on all devices that don’t need it. This is how the Ransomware spreads on an internal network.  
  3. Make sure your antivirus software is up to date and running. 
  4. Be extra careful when clicking on links you don’t recognise and on unsolicited documents.
  5. Make sure any devices you use for backing up your data are not physically connected to your computer – if they are, then chances are your backups could get infected too. 
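As mentioned under point 1, here is a minimal sketch of how you might check a single Windows machine, using the built-in wmic command to list installed hotfixes. The KB numbers below are only examples (they apply to Windows 7); the right KBs vary by Windows version, so check Microsoft's MS17-010 bulletin for your systems before relying on anything like this.

```python
# Minimal sketch: check a Windows machine for specific installed updates
# using the built-in "wmic qfe" command. The KB numbers are examples for
# Windows 7 only - consult the MS17-010 bulletin for your Windows version.
import subprocess

KBS_TO_CHECK = {"KB4012212", "KB4012215"}  # illustrative examples only

def check_hotfixes():
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True
    ).stdout
    installed = {line.strip() for line in output.splitlines() if line.strip()}
    found = KBS_TO_CHECK & installed
    if found:
        print(f"Found {', '.join(sorted(found))} - this machine looks patched")
    else:
        print("None of the listed KBs found - investigate and patch now")

if __name__ == "__main__":
    check_hotfixes()
```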

That’s all you need to do. It’s clear from this outbreak that the things I’ve been talking about – patching, antivirus, backups, phishing awareness etc – which are all simple things to do but often neglected, are all really good protection against even global attacks. 

I’ll be releasing a podcast about this later today, so keep your eyes peeled for that! 

To certify or not

I published this article on LinkedIn on May 3rd 2017. Here it is in its entirety for you.

The age old question of whether certification is important or not reared its head again recently. I was talking to two prospective clients, and they held opposing views.

One wanted their staff to be well trained, but didn’t want them to complete any certifications. They were concerned that once the member of staff was trained they’d look elsewhere for [and get] a better paid job.

The other wanted their staff to be well trained, and saw the certification process as a way of validating that the learning on the course had stuck. They thought they would be able to market themselves better with certified staff, and make more money that way.

I can see both sides of the argument, as I’m sure you can. Perhaps the main differentiator is that in the first case, they may not be able to charge their clients as much, and will therefore have lower income / profit margins, which would mean they couldn’t pay their staff as well. In the second case, their ability to charge higher rates could be reflected in higher income, and therefore they may be able to meet the wage demands of their teams.

To be honest though, neither of these scenarios floats my boat. I’d much rather employ someone with appropriate experience than just take someone who has passed a course and may have a piece of paper telling you that.

Many years ago – you’ll realise how long ago shortly – I received a salutary lesson in this very topic. I had a member of staff come to me to say that they had done a lot of self study and had not only passed their Microsoft MCSE but also their Novell CNE (I told you it was a long time ago). As a result, they wanted a massive pay rise – something like 35% as I recall. Naturally I said I would have to think about it and, if appropriate, seek approval from my manager.

Fast forward to the following week. I was disinclined to award the rise as I had concerns about the person’s ability, but had yet to tell them that. They came to me (because at the time I was still relatively hands on technically) and asked how to bind an IP address to a network card. (Again a sign of how long ago this was, TCP/IP was only just starting to appear on Windows-based networks.) Naturally, my first question was whether this had been covered in either the Microsoft or Novell courses – it was – and I then suggested that the staff member in question focus on getting experience before thinking about pushing for a pay rise.

I recently had cause to consider the benefits of certification for, shall we say, more senior people (myself included). Some clients seem to not worry too much about the letters after your name and prefer to see the experience you can bring to bear on their needs.

It is very helpful being able to speak from first-hand knowledge about the process for obtaining various certificates and accreditations, but I find that it isn’t the few exams I’ve passed that get me talking to prospective clients. They are more interested in what experience I’ve had, where, and whether any of it has relevance to their requirements / situation.

My advice is therefore this: make sure you gain experience in several sectors including SME, government, public sector, etc, and make sure you know how to apply that experience in a range of scenarios. Being flexible and adaptable in your approach to client requirements is what you should be aiming for. Having some experience of the certification process and perhaps even a degree is helpful, but it’s not what is really needed by the clients out there.

Cyber Security is Doomed

I published this article on LinkedIn on March 23rd 2017, and rather than post a link to it I thought I’d share it here. 

Yes, I know that was a controversial headline deliberately cast to lure you into reading this article, but it’s also true: now I’ll tell you why. 

Cast your minds back to the late 90s / early 00s. It seemed like every aspect of IT was related to e- something. E-Business, e-commerce, e-procurement etc were all the rage. Everyone seemed to be talking about e-services of some sort, and there was a lot of hype and excitement about this new way of working. 

I even went out and bought cufflinks with an @ symbol on them as I was doing so much e-consultancy.

But then gradually, those bubbles seemed to disperse. They didn’t burst, it was more like bath bubbles slowly decaying and disappearing till you’re left in a tub full of water. E-business, e-everything became just business, the way everything was done. The plethora of web services and interconnectedness of things means that the e- is superfluous, it’s accepted as a given. 

My prediction is that soon – maybe within 5 years, certainly within 10 years – the word “cyber” will be dropped from all sorts of services and descriptions. Cyber security will become what it’s always been – security. Cyber crime will become just crime. Cyber awareness will become awareness. And so on…  

I think we’re close to that happening because something like 90% of crimes committed today have some form of cyber aspect, whether it’s the use of Google Maps for reconnaissance, social media to find out that someone is on holiday, or LinkedIn to determine corporate structures and contacts for spear phishing. The use of cyber services to prepare for, to plan or to commit crime is now pretty much at 100%.

We use cyber services of some sort all the time, without thinking about it. They’re everywhere. For most people, it starts with their mobile phone, that helpful little GPS tracker which probably wakes you up in the morning, delivers your email, gives you access to social media etc. What about the computers in your car, on the bus / train / plane you take to work? Or in the stock booking systems for your favourite shops? Cyber enabled services are everywhere.

At what point do we accept that it’s how we do business, how we interact socially, how we live, and drop the prefix cyber? That’s why I said at the beginning that cyber security is doomed. In the end, we’ll just be talking about security, without any prefixes, which in my mind is a good thing.

Who should the CISO report to?

This article appeared on LinkedIn on 25th April 2017. Rather than publish a link to that post, I thought I’d repost the whole thing here.  

This question caused a lot of head scratching in the past, and it continues to be a very contentious issue. 

Historically, the Chief Information Security Officer (CISO) has typically reported to the CTO (Chief Technical Officer) or perhaps the CIO (Chief Information Officer) – if a company had either of those roles. The majority of companies viewed (and perhaps continue to view) Information Security as an IT or Technology issue, and those that are a bit more forward thinking ally Information Security to Information Management, hence these two traditional locations in the company hierarchy. 

The other most common reporting lines which I’ve witnessed are reporting in to the CFO (Chief Financial Officer) or reporting in to the CRO (Chief Risk Officer). There are good reasons for both of these – one holds the purse strings (and security rarely costs less than not having any), and the other is concerned with risk (and security is all about risk mitigation).

What should we be doing?

I think it is very much accepted these days that the CISO should be a full board member, and that is to be welcomed. To my mind, there should be a strong dotted line from the CTO, the CIO and the heads of HR and Facilities in to the CISO. I know it’s a bit chicken and egg, particularly with the CIO role, but I think that all of these roles must be accountable to the CISO in terms of security. 

The CISO should not be telling any of the other roles how to do their jobs, but they should be defining the security requirements which fall within the remit of each of these roles. 
For example, the CISO shouldn’t be worried about whether Windows, MacOS or Linux is used as an Operating System, but they should be concerned with whether those machines are patched, have antivirus installed, are encrypted if necessary etc. They should let the CTO work out how to do all of that, on whatever OS is required, but the CTO must ensure that the CISO’s requirements for security are met. 

As another example, the CISO shouldn’t concern themselves with HR issues such as appraisals, pay etc., but they do have an interest in ensuring that new starters are appropriately vetted, that access rights are revoked on termination of employment etc. 

Please note that I’m not suggesting that HR, Facilities, IT etc. should report to the CISO: that just wouldn’t make sense. All I’m suggesting is that they have a level of accountability in to the CISO and that companies would do well to recognise that going forward. Who’s with me? 

You may also be interested in this article from Dark Reading, about why CISOs have a different view of the primary objectives of cyber security compared to some other board members.  

Shiny kit isn’t always what you need

This article appeared on LinkedIn on 5th April 2017, and you can read it in full here. 

Earlier this week I saw an item on LinkedIn where someone was asking advice about building a SOC (Security Operations Centre). It set me thinking that often we see a great clamour for solutions, for the latest shiny bit of kit with flashing lights and a cool name, but do we ever stop to wonder why we need it?

Before we even look at equipment or software, the very first step should be to look at our business objectives. Why are we doing what we do, and do our objectives help achieve that? What is our end goal? Without knowing this, how can we possibly determine the best solution for our needs? 

We should then look at our risk registers, to identify the key areas of risk, and to determine whether by mitigating any of those we will reach our end goal – or at least be closer to it than we currently are. How many of those risks require human interaction, and how many are dependent on hardware and software?

Looking at our policies and procedures next, we should try to establish whether they are helping us achieve our stated aims or hindering that task. Are we able to amend our working processes in a way that makes them cost effective and helps us meet our business goals? 

Do your staff understand the business objectives, and are they appropriately skilled / experienced to help reach those objectives? If not, what do they need to help them understand, and what training / guidance do they need? 

Once you’ve gone through all these steps, you’ll have a good idea of what’s missing and what is preventing you from achieving your business goals. Write these down, as they will form the basis of a specification document which will identify the requirements of any solutions you need. It might not be that big shiny box from vendor A: it might be additional training for your staff, it might be a paper based process or it might be a bit of software instead. You’ll also have some idea of the level of risk, and how much money you’re able to devote to addressing the gap, through a cost-benefit analysis. This will help determine your budget for any additional actions / solutions you find that you need. In some cases it boils down to scale, and the type of business. For example, why would an SME with 5 people working in an office need their own SOC? They may need one, but they could probably outsource it far more cheaply than building and maintaining their own.  

I’ve worked on a number of consulting engagements where the client has told me they need the latest and greatest bit of kit, but when pressed for the reason behind this decision they could only come up with “because all my competitors are using it so I should have it too” or “the salesman told me it would solve all my problems”. Those are hardly sound business reasons, wouldn’t you agree? 

I was speaking at an event recently, one of a long series, and the moderator told me before it began that they’d had quite a few people in, from the intelligence community as well as vendors, telling the attendees that this gadget or this software would solve all their problems, would address their biggest issues, would remove most of their risk. Fabulous claims, but how could they be sure? They didn’t know the attendees’ businesses, they didn’t know the policies, processes, controls and systems the attendees already had in place, they didn’t know what the attendees’ risks were – so how could they possibly offer a solution? It doesn’t make logical sense, does it?  

I’m reading a really good book at the moment, called Start With Why by Simon Sinek. It very sensibly suggests that before setting out to build a new business, or to grow an existing venture, you should ask yourself why you are doing it. The same applies to technical solutions I think – work out why you are doing what you are doing, and why you need to change, then take things from there. The answer may not be that super cool shiny box with lots of flashing lights. 

Cyber Essentials and ISO 27001 explained

At some point in your working life, you’ll probably come across these two terms, and you may want to know more about them. Look no further than this article on LinkedIn, where I’ve gone into a bit of detail about the two, what their similarities are, what the key differences are, and I’ve even given some advice on how to choose between them.