As technology expands its influence in every aspect of our lives, the need to protect people from digital threats grows dramatically. Furthermore, because cybersecurity technologies are rapidly strengthening, parties seeking to breach information systems are increasingly exploiting human weaknesses, rather than strictly technological vulnerabilities, in order to achieve their goals. On both a personal and professional level, therefore, it is critical for people to understand the human aspects of cybersecurity.
One area where this comes into play – and in which many readers may want to take action – is with regard to smartphones. In truth there is no such thing as a “smart phone”; the devices in our pockets are full-blown computers – possessing more processing power, and housing more sensitive data, than desktop machines of just a few years ago. Smartphones also sport 24×7 connectivity to insecure networks, and have a far greater chance of being stolen or lost than machines that weighed tens of pounds and never left our offices. Terming pocket computers “smart phones” is like describing a jumbo jet as a “horseless carriage.”
Yet so many people who routinely ran security software on far inferior computers a decade ago don’t do so on far riskier smartphones. Why? We view them as phones, not computers, and people are used to securing computers, not phones. Our perception is distorted by our biological history; since evolution takes far more than a human lifetime to transform one species into another, we humans are biologically predisposed to view offspring as belonging to the same species as their parents. Since we typically obtained “smartphones” by upgrading our phones, purchased them from the same “cellphone service providers” as our older phones, and retained “calling plans” when obtaining the new devices, we view them as the next generation of phones, even though they have evolved into a completely different species. Furthermore, providers, not wishing to discourage people seeking to upgrade their “phones” by offering them replacement “computers,” called the computers “smart phones” – exploiting the same human weakness, reinforcing our mistake, and contributing to our risk.
Likewise, we hear so much in the media about computer viruses and malware – about their advanced capabilities and technological sophistication, how much damage they do, and how they are now utilized for cyberwarfare. What we don’t hear often enough is how malware is increasingly reliant on human error – and how the best defense against malware is basic human vigilance and common sense.
Malware commonly invades computers when people download music, applications, or movies from rogue websites. Besides the obvious legal issues, do downloaders consider the basic human question: why is someone whom a downloader never met, who is neither a friend nor a relative, offering the downloader free material? Why is he or she spending time, energy, and money – and risking legal problems – to provide it? We all know that there are no free lunches, so why would someone accept digital candy from a stranger – especially a stranger who publicly violates the law? This is a human error whose risks we have known about for centuries – yet somehow, in cyberspace, the lesson is lost.
Even Stuxnet – the technologically sophisticated malware that temporarily crippled the Iranian nuclear program by misprogramming uranium-processing centrifuges – relied on human error to obtain access to a secure network. While the details of the mistake are not public, it is self-evident that without some serious lapse in human judgment – perhaps people allowing a spy or saboteur in, perhaps someone bringing into the facility a USB drive, found nearby, labeled “Top Secret” in Persian, etc. – Stuxnet would have been altogether unable to communicate with the targeted centrifuges in order to reprogram them. Serious human error was also necessary for the authors of Stuxnet to obtain the information needed to code the malware, as clearly the Iranians did not intend to publicize the make and model of their centrifuge control units.
Why do humans make these mistakes?
For the vast majority of human history, threats were visible and/or tangible, physically dangerous, and in close proximity. Dangerous things looked dangerous. There was no mistaking an invading army, a wild animal, or a fire. Phenomena which could not be explained by something visible or tangible – for example, disease – were usually ascribed to supernatural forces. While we now have conscious knowledge to the contrary, we retain biologically evolved programming to view localized, visible, and tangible threats as more dangerous than invisible risks at a distance or seemingly nonthreatening objects. While in many cases such an approach may be beneficial, in the case of cyberthreats it contributes to the problem.
We commonly hear talk about “cybersecurity education” – posters warn of information security risks, memos discuss current threats, and the Department of Homeland Security celebrates an annual National Cybersecurity Awareness Month. But humans being the weak link in the cybersecurity chain is not simply a matter of insufficient education; it is a matter of human nature. If we are going to improve our digital safety, we must dramatically increase the amount of human psychology that we apply toward the design of cybersecurity plans, systems, and technologies. I will discuss this more in future posts.