In the mid-1970s, when I was working in the Navy’s new cybersecurity group, we had it easy.
Not that we felt that way, of course. Automation was still primitive, and encrypting messages often took hours, if not days. But looking back, we had one huge advantage over the cybersecurity engineers of today: we didn't let anyone touch our equipment. Everyone had to go through us to send messages and protect data, so we could make sure it was done correctly.
I suspect these reflections have come to mind recently because of the COVID-19 pandemic. Most of my younger colleagues are now dealing with a problem that was almost unheard of in my day: how to make sure that staff don't undermine security controls, and how to educate them out of the common misconceptions about cybersecurity. (Read more about COVID-19 and cybersecurity in our blog.)
And so, amid talk about the setting up of a Federal cybersecurity agency, and much excitement about the way that AI is helping organizational cybersecurity, I thought I'd take the opportunity to share some wisdom gained from 50 years in the business.
Spoiler alert: nothing ever really changes. Your users are still the biggest threat.
About the author
Sam Bocetta is a cybersecurity coordinator and a freelance journalist specializing in U.S. diplomacy and national security, with emphasis on technology trends in cyberwarfare, cyberdefense, and cryptography.
Automation, Automation, Automation
I would hope that the value of automation is clear to most cybersecurity analysts. If it's not, you should revisit your sophomore textbooks. When it comes to IT tasks, there is a fast, efficient, and safe way to do things, and then there is doing the same task manually. Your boss might not understand why you are spending a week automating a process that will take one hour by hand, but you certainly will.
I don't know how many hours (days, weeks, months) I've saved over the past 50 years by automating tasks the first time I was given them, but suffice it to say I probably wouldn't have reached retirement age if I had had to do everything by hand.
Beyond saving you time, automation also has another huge advantage: it is safer. There is nothing more likely to cause mistakes than having to complete a repetitive, boring task once a week. By automating your processes, you reduce the possibility of human error, which – as I will explain shortly – is still the biggest challenge that cybersecurity analysts face.
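To make this concrete, here is a minimal sketch of the kind of automation I mean: a few lines of Python that tally failed login attempts per source address from an sshd-style auth log. The log lines and IP addresses are invented for illustration; in practice you would read them from your own log files.

```python
import re
from collections import Counter

# Hypothetical sshd-style auth log lines, inlined for illustration;
# in practice you would read these from a real log file.
SAMPLE_LOG = """\
Apr 02 09:14:01 host sshd[811]: Failed password for root from 203.0.113.9 port 4111 ssh2
Apr 02 09:14:05 host sshd[811]: Failed password for admin from 203.0.113.9 port 4112 ssh2
Apr 02 09:20:17 host sshd[902]: Accepted password for alice from 198.51.100.7 port 5222 ssh2
Apr 02 09:21:44 host sshd[911]: Failed password for root from 198.51.100.23 port 6001 ssh2
"""

# Capture the source IP of each failed password attempt.
FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def failed_logins_by_ip(log_text: str) -> Counter:
    """Count failed login attempts per source IP address."""
    return Counter(m.group(1) for m in FAILED.finditer(log_text))

for ip, count in failed_logins_by_ip(SAMPLE_LOG).most_common():
    print(f"{ip}: {count} failed attempt(s)")
```

Once a script like this exists, a scheduler can run it every night; nobody has to eyeball the log by hand, and nobody skims past an attack at five o'clock on a Friday.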
Policies Are Useless
A second, related point I want to share is this: cybersecurity policies are not worth the paper (or drive) they are written on.
That might come as a shock, so let me qualify it slightly. It's certainly important for organizations to think carefully about how to protect themselves, and for security staff to have a central policy from which to work.
But you should also recognize that most of your staff are not going to read your carefully designed policy, let alone follow it. Just look at how few employees take the most basic steps to shore up their business computer security, and you'll see that most simply don't recognize the value of cybersecurity.
So what's the solution?
Well, technical controls. Don't ever rely on users to follow a policy. Instead, lock down what they can do with their machines as much as possible. They will complain, but it's for their own good.
Beware the User
All of these points lead to my final observation: never trust your users.
This is an old adage in the cybersecurity business, but one that bears repeating. It is also one that is backed up by the stats; despite all the advances made in AI-driven threat intelligence systems and on-the-fly endpoint security over the past two decades, the simple phishing email remains the biggest threat to most systems.
Some analysts will tell you that the user is just poorly informed, and not actively mischievous. They claim that if you provide your users with a guide to encryption and managers with a guide to vetting cybersecurity vendors, they will educate themselves and make intelligent decisions. Don't believe it.
I don't mean to blame users, of course. It's just that their priorities are completely different from those of security analysts. The biggest benefit conferred by IT, for most people in most situations, is speed. This means that users might be happy to comply with security controls when they are not stressed out, and not being pushed to compile a report within the next few hours. But realistically – how often is that the case?
Limiting the damage that your users can do to your systems can be approached in a number of ways. Limiting their access to critical systems is a good start, as is educating them about the true dangers of working insecurely.
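In code, that "limit their access" principle can be as simple as an explicit, deny-by-default allowlist. The user and system names below are invented for illustration; the point is that anything not explicitly granted is refused.

```python
# Minimal least-privilege sketch: access is granted per user, per system,
# and anything not explicitly listed is denied. Names are illustrative.
ACCESS = {
    "alice": {"payroll-db", "hr-portal"},
    "bob": {"build-server"},
}

def can_access(user: str, system: str) -> bool:
    """Deny by default: unknown users and unlisted systems are refused."""
    return system in ACCESS.get(user, set())

print(can_access("alice", "payroll-db"))  # a granted combination
print(can_access("bob", "payroll-db"))    # unlisted, so refused
```

Real deployments use directory services and role-based access control rather than a dictionary, but the underlying rule is the same one we applied in the Navy: nobody gets in unless someone has decided they need to.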
Nothing Ever Changes
Or, I guess, you could take a more draconian route, and not allow your users to do anything at all. That was our approach in the Navy, where access to mainframe terminals relied on extensive vetting processes and a knowledge of how these machines actually worked. In practice, no one without a graduate degree in computer science got anywhere near a computer, and it was great.
Nowadays, of course, that's not really feasible. Unfortunately.
But the principle remains the same: there really is nothing new under the sun. The most important lesson I can give from 50 years in the business is that you should limit access to critical systems to those staff who actually need to use them, and who actually understand them. Whether you are running an SMB or a huge multinational corporation, the user remains your worst enemy, just as he or she did back in the 1970s. We were just lucky we didn't have to let them in the server room.