It is not news that the human factor is the greatest risk to computer systems. In fact, a multitude of researchers have been pointing to this conclusion for at least a few decades now (Coleman 2011). It does not matter whether it is a virus, a piece of malware, a phishing attack, a fake website, a deliberate hack, an honest mistake, or even an accident. Behind each one of these actions there is always a person.
It could be a careless user trying to download some free content on his home computer, a distracted employee who forgot to follow his company’s policy, an individual determined to hurt the organization that laid him off (The McCart Group 2011), or a criminal hacker who wants to make money by selling whatever information or secrets he can steal.
The fact is that humans are as unpredictable as the motives that drive them to commit such actions, which makes it very hard for any computer system or organization to monitor them and react properly to security breaches.
Think about it… How easy is it for someone to make a mistake? It happens all the time, right? Now, how hard is it to hack into the Pentagon? Very hard (at least I hope so). It takes a great deal of effort and preparation to carry out such an attack and, even worse, to cover all the tracks and leave the system undetected after the attack is over. As you can see, no matter how small or how big the damage is, the human factor is undoubtedly the major player behind such actions.
Still, to me, as a software developer, there is more to it than just the human factor. There is also the “computer factor”! Let me explain. I do agree that humans are the medium through which the “thought of an attack” becomes a real threat. But it does not seem right simply to label people as security threats. In my opinion, computers and computer systems are pieces of a complex puzzle that is still evolving and has not yet matured enough to interact harmoniously with humans, and vice versa. Human behavioral patterns should be taken into consideration when designing computers and computer systems.
The computer world, and everything that supports it, is a work in progress. So far, most of the effort has gone into the machine itself: its electronics and the miniaturization of its components, its architecture, its programming languages, and its management environment (which we call the operating system), all with the purpose of making it ever easier to use, more mobile, more powerful, faster, and more scalable.
Computers are fascinating devices indeed. However, not nearly as much time has been spent trying to understand human-computer interaction in the sense of self-aware computing. In fact, studies such as Computer Self-Efficacy: Development of a Measure and Initial Test (Compeau & Higgins 1995), Self-Aware Distributed Systems (Rish, Beygelzimer, Tesauro, & Das 2005), and Self-Aware Computing (Agarwal, Miller, Eastep, Wentzlaff, & Kasture 2009) are still regarded as science fiction by most computer scientists.
Humans are in control of and aware of their actions and needs. Data, by contrast, is just an abstraction, unaware of its own context, purpose, and lifecycle. If the knowledge gained from such studies could be translated to computer systems security, and if data could be made aware of its own context and usability, maybe it could "defend itself" against improper use and contribute to its own integrity throughout its lifecycle on computer systems. This would allow for systems that are more adaptive, self-healing, goal-oriented, efficient, and resilient, as well as easier to program and maintain.
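To make the idea a bit more concrete, here is a minimal sketch of what a first, very primitive step in that direction might look like: a data abstraction that carries its own integrity tag (an HMAC) inside its serialized form, so that any consumer holding the key can let the data "verify itself" before use, no matter where it has traveled. This is my own illustration, not an implementation from any of the cited studies, and the class name SelfAwareBlob and its methods are hypothetical.

    import hmac
    import hashlib
    import json
    import os

    class SelfAwareBlob:
        """A data abstraction that travels together with its own
        HMAC-SHA256 integrity tag, instead of relying only on the
        defenses of whatever environment happens to host it."""

        def __init__(self, payload: bytes, key: bytes):
            self.payload = payload
            self.tag = hmac.new(key, payload, hashlib.sha256).hexdigest()

        def serialize(self) -> str:
            # The tag is embedded in the data itself, not kept in an
            # external perimeter defense.
            return json.dumps({"payload": self.payload.hex(), "tag": self.tag})

        @staticmethod
        def open(serialized: str, key: bytes) -> bytes:
            # The data "refuses" to be used if its integrity is broken.
            obj = json.loads(serialized)
            payload = bytes.fromhex(obj["payload"])
            expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, obj["tag"]):
                raise ValueError("Integrity violation: the data rejects this use.")
            return payload

    # Usage: tampering is detected wherever the serialized blob ends up.
    key = os.urandom(32)
    blob = SelfAwareBlob(b"quarterly results", key)
    wire = blob.serialize()
    assert SelfAwareBlob.open(wire, key) == b"quarterly results"

Of course, an HMAC only detects tampering; truly self-aware data would also have to reason about its purpose and react to its users' actions, which is exactly the research gap described above.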
Conclusion
Humans do represent the biggest risk to computer systems security. However, the lack of studies on human-computer and computer-human interaction directed toward self-aware computing is the main reason why computer systems are so vulnerable.
In a cyber-world, where the governing laws are all about sophisticated abstractions (objects) of our physical world, data is just a static model, unaware of its purpose and incapable of reacting to its users’ actions.
We try to protect the castle by adding layer upon layer of hardware and software around the data. But once the data is outside of a safe environment, it becomes defenseless prey.
If data could be made aware of its context, and integrity mechanisms could be embedded into its abstractions (e.g., a file, a group of files, a data stream, etc.), computer systems could become more adaptive, self-healing, goal-oriented, efficient, and resilient, as well as easier to program and maintain.
Works Cited
Compeau, Deborah R., & Higgins, Christopher A. (1995). Computer Self-Efficacy: Development of a Measure and Initial Test. Retrieved from http://www.jstor.org/pss/249688
Rish, Irina, Beygelzimer, Alina, Tesauro, Gerry, & Das, Rajarshi. (2005). Self-Aware Distributed Systems. Innovation Matters. Retrieved from http://domino.watson.ibm.com/comm/research.nsf/pages/r.ai.innovation.2.html
Agarwal, Anant, Miller, Jason, Eastep, Jonathan, Wentzlaff, David, & Kasture, Harshad. (2009). Self-Aware Computing. Massachusetts Institute of Technology. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA501367
Coleman, Kevin. (2011). Digital Conflict. Retrieved from http://defensesystems.com/blogs/cyber-report/2011/07/human-vulnerability-computer-systems.aspx
The McCart Group. (2011). Disgruntled Employees – A Cyber Risk. Retrieved from http://www.mccart.com/Webs/_Base/Default/uploadedFiles/documents/Newsletter%20Articles/January/Disgruntled%20Employees%20-%20A%20Cyber%20Risk.pdf