Cybersecurity is a growing concern around the world, and while the discussion focuses on technology, we cannot forget the role humans play in preventing or facilitating cyberattacks.
Matthew is a system administrator at a growing startup. For the most part, Matthew's job is easy: most processes have been automated, and aside from a few bugs here and there, he spends most of his time optimizing systems and providing technical support to the rest of the team.
That is, until the day Matthew received a billing notice from his cloud provider. To his surprise, the startup had gone over budget. When he checked the dashboard, it was even worse than he imagined: in less than 24 hours, his systems had consumed more resources than were allocated for the entire month.
Matthew discovered that someone had gained unauthorized access to the cloud environment and was running scripts. In other words, Matthew's startup, which serves about 30,000 page views a day, was the target of a cyberattack.
With the rise of remote work and cloud computing, cyberattacks have become a growing threat. The estimated global loss due to cybercrime in 2015 was more than 3 trillion dollars, and experts believe that by 2025 we could be facing annual losses of up to 10.5 trillion dollars.
Companies are spending more than ever on cybersecurity, implementing strategies like DevSecOps to create processes that are more secure and less prone to exploitation. Unfortunately, technology moves both ways, and with better security also comes more refined methods for breaching it.
Designing with security in mind is always difficult: a system is only as secure as its most vulnerable part. All it takes is one bug or one omission to open the floodgates to all kinds of exploits.
Humans are also part of the equation, and unlike software, we cannot receive security updates. In terms of cybersecurity, we are passive attack vectors waiting to be exploited by someone with knowledge of social engineering.
Social engineering
It should come as no surprise that cyberattacks are very different from their portrayal in the media. Genius hackers intercepting a transmission and writing code on the fly is about as realistic as an 80s action hero entering a building by hacking his way through mooks.
In fact, most cyberattacks aren't that flashy. The most common forms of cyberattack, phishing and man-in-the-middle, rely on tricking a recipient, something that requires little to no coding skill.
Why force entry into someone's account when they can voluntarily provide their personal information? Why waste time trying to find an exploit to access a network when someone on the inside can give you access?
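Phishing mostly works because lookalike links pass a casual glance. As a toy sketch (the domain names and the `is_trusted_link` helper are hypothetical, purely for illustration), here is why exact host matching catches a lookalike domain that a skimming human would miss:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organization actually controls.
TRUSTED_DOMAINS = {"example.com", "mail.example.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the link's host exactly matches a trusted domain.

    'example.com.evil.net' contains the trusted name but is NOT trusted:
    its registrable domain is evil.net. Exact host matching (rather than
    substring matching) is what makes the check safe.
    """
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED_DOMAINS

# A human skimming an email sees "example.com" in both of these links;
# only the first actually points at the trusted domain.
is_trusted_link("https://mail.example.com/login")      # trusted
is_trusted_link("https://example.com.evil.net/login")  # not trusted
```

Automated checks like this help, but as the rest of the article argues, the last line of defense is still the person deciding whether to click.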
Inside jobs happen, but most of the time people don't realize they are being played by con artists. After all, social engineering works by exploiting the vulnerabilities of human cognition, from our limited ability to process information to our innate belief that most people are good.
Case in point, in the past, when USB sticks were extremely popular, cybercriminals distributed them on the streets as advertising material for fake companies. All it took was for a regular office worker to need a USB stick and put it in their machine, and hackers would have access to their computer or even the company network.
Who in their right mind would think that a legitimate-looking person distributing promotional products is part of a cybercriminal network? Most of us would assume that a company is simply trying to promote its brand. It's the most plausible explanation, and that is exactly what cybercriminals count on.
But most people can recognize a suspicious email or a strange phone call, right? Yes, but in this type of attack the criminal relies on the chance that one person in a hundred will fall for it. As with the USB sticks, all it takes is one vulnerability.
And once again, with technology comes new challenges. A quick example: Discord is a social media and chat platform for gamers that exploded in popularity during the pandemic. It's a fantastic way to meet and play with other people with similar interests.
But it is also known for its wide range of exploits, some of which even allow RATs (remote access Trojans). If an employee were to launch Discord in a browser on their work computer to chat with friends, they could inadvertently download malware from a “trusted source.”
“No, that won’t happen to me”
Perhaps the biggest risk is thinking that this kind of thing won't happen to you. Allow me to share a quick story.
I regularly provide consulting services for a travel agency. Every three weeks they have to submit a report on ticket sales for a specific airline. The airline created a web application so that agents could check their status as well as upload the necessary information for each report.
Due to a poor user interface, an agent asked me to help him upload files. To my surprise, the app had more than its fair share of bugs, one of which triggered a page-not-found error. That would be fine, except that the application was built with a very popular framework that ships with debug mode enabled by default.
So the page-not-found error wasn't the typical 404; it was a full debug report, with code, routes, and everything you need to get a good idea of how the server was structured. Remember, this is an app designed for people to upload critical information like credit card numbers and personal details.
It's not the framework's fault: one of the first things the documentation says is to turn off debug mode before going to production. I wrote a long email explaining the risks of keeping that mode enabled and sent it to the web administrator.
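The framework in the story isn't named, but the general fix is the same everywhere: debug behavior must be an explicit opt-in, never the production default. A minimal sketch in Python, with hypothetical `debug_enabled` and `handle_not_found` helpers standing in for the real framework's configuration and error handler:

```python
import os

def debug_enabled() -> bool:
    """Return True only when DEBUG is explicitly opted into via the environment.

    Defaulting to False means a fresh production deploy can never leak
    stack traces just because someone forgot to change a setting.
    """
    return os.environ.get("DEBUG", "").lower() in ("1", "true", "yes")

def handle_not_found() -> str:
    """Build the body of a 404 response."""
    if debug_enabled():
        # Verbose output is fine on a developer's machine.
        return "404: route not registered (debug traceback follows)"
    # In production: no routes, no code, no server internals.
    return "404: page not found"
```

The design point is the direction of the default: the safe behavior requires no action, and the risky one requires a deliberate step, which is the opposite of what the travel agency's app was doing.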
A few days later, I received a response assuring me that the risk was minimal, as only important people had access to the web application. Except that wasn't the case: I was living proof that a non-agent could know about the app.
And the error? It can be triggered by typing literal nonsense into the address bar of any web browser. This kind of arrogance is the bane of security: thinking you can leave a vulnerability in place because it poses very little risk is asking for trouble.
Training your people
Everyone in a company, regardless of position, should go through a mandatory security workshop. Understanding the basics of security goes a long way toward preventing the kinds of mistakes that can end catastrophically.
At the same time, it is very important to create a set of security policies and promote them with incentives. Security-conscious behavior is rarely reinforced: we punish people who make mistakes, but we rarely reward those who set an example in promoting a security-conscious culture.
In short, the human element is as central to cybersecurity as it is to security in general. Technology can help us, but it can only take us as far as our practices allow.
Source: BairesDev