As early as 1994, the concept of security by design was beginning to take shape: anticipate malicious intent, and build in designs that circumvent or altogether exclude scenarios exploiting vulnerabilities in software and, to the degree it could be done, in hardware. A quarter-century later, hackers have learned to adapt well, bringing their own creativity and design expertise to bear in their efforts to penetrate ever-more-thoughtful means of protecting sensitive data.
Governments have plenty of skin in the game here. The U.S. government under the Obama administration reportedly used state-sponsored hackers to destroy roughly a fifth of Iran's nuclear enrichment centrifuges. Russia is widely believed to use Ukraine as a testing ground, honing its techniques before turning them on the rest of the world. Even Microsoft stands to profit from political unrest caused by hackers; the software titan created an application designed to detect state-sponsored attacks on organizations.
For the most part, bad actors still need the human element to pull off a successful exploit. An estimated 97% of malware attacks rely on social engineering. The methods vary, but all the data point to the same fact: humans are the weak link. Some malware is diabolically designed to bide its time, lying dormant until a user starts typing and clicking, whereupon it begins logging and exfiltrating the results. Spoofed sites mimic real ones (and are not that hard to build), relying on gullibility or plain inattention to trick victims into submitting credentials.
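As a rough illustration of how little a spoofed domain has to differ from the real one, the sketch below flags hostnames that closely resemble a trusted domain without actually matching it. The domain names, similarity threshold, and trusted list are hypothetical; it uses only the Python standard library.

```python
# Minimal sketch: flag hostnames that look like, but are not, a trusted domain.
# Domain names are hypothetical examples; the similarity threshold is arbitrary.
from difflib import SequenceMatcher

TRUSTED = ["paypal.com", "example-bank.com"]

def looks_spoofed(hostname: str, threshold: float = 0.85) -> bool:
    """Return True if hostname closely resembles a trusted domain without matching it."""
    hostname = hostname.lower().strip(".")
    for real in TRUSTED:
        if hostname == real or hostname.endswith("." + real):
            return False  # exact match or legitimate subdomain
        if SequenceMatcher(None, hostname, real).ratio() >= threshold:
            return True   # suspiciously similar, e.g. one swapped or added character
    return False

print(looks_spoofed("paypa1.com"))        # True  (digit 1 in place of the letter l)
print(looks_spoofed("login.paypal.com"))  # False (legitimate subdomain)
```

A check this crude would never hold up in production, which is exactly the point: the difference a victim has to notice is often a single character.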
All of these techniques depend on someone being fooled, making a mistake, or intentionally inserting or allowing malicious code somewhere along the path to sensitive data. Even when air-gapped systems are compromised, the code that blinks the machine's lights in a meaningful, data-leaking pattern has to be introduced by a human. Amateur hackers don't have to be quite so ingenious, though, since more forward-thinking hackers have created malware bent not on pilfering data or ransoming unrecoverably encrypted files, but on paving the way for other hackers. That's right: meta-hacking has arrived in a cyberverse near you, and it's here to stay.
Even if we turn to Artificial Intelligence (AI) and Machine Learning (ML) for data protection, which is forecast to happen very soon, the same tools can be used to anticipate how AI/ML-based defenses think and learn, and then to circumvent them. Machine learning is still bound to an algorithm, and algorithms can be predicted. Attempts to allow AI/ML varying degrees of freedom have had dire results, ranging from car accidents to racist proclamations to medical treatments that would have been fatal without human intervention. A few years ago Microsoft released an AI onto Twitter, from whose illustrious and discerning users it learned almost overnight to embody many of the worst aspects of the internet community. We still have a long road ahead of us where dependence on computers is concerned.
Intertwined with these disquieting findings is the fact that researchers have, in a controlled environment (we hope), built machine-learning systems based on generative adversarial networks (GANs) that write their own variants of malware by learning from the behavior of other ML code, such as the detectors meant to catch them. Combine this with some of the other facts in this article, and an AI with the freedom to make its own decisions, write its own code, and lie better than we do might just decide humans are the problem and quietly plan our demise. Maybe that road mentioned above is shorter than we know.
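For readers unfamiliar with the term, a GAN simply pits two models against each other: a generator that produces candidates and a discriminator that tries to tell them apart from the real thing, each improving in response to the other. The minimal PyTorch sketch below, on harmless one-dimensional toy data, shows that adversarial training loop; the network sizes and hyperparameters are illustrative assumptions, not anything drawn from the research the paragraph alludes to.

```python
# Minimal GAN sketch on toy data: a generator learns to mimic samples that
# the discriminator is trained to recognize. Purely illustrative; sizes and
# hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 4.0  # "real" samples: N(4, 0.5)
noise = lambda n: torch.randn(n, 8)                   # generator input

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to produce samples the discriminator accepts as real.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(5)).detach().squeeze())  # generated samples should cluster near 4.0
```

That feedback loop is what makes the malware-generating variants unsettling: the generator keeps adjusting until the "discriminator", in that setting a detector, stops noticing.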
Back in the real world, a less science-fictional outcome is more probable: that the infrastructure underpinning the world wide web, electricity, will be the target of large-scale attacks, whether they come from people or a cathartically inclined AI. With this in mind, in late 2018 the Pentagon isolated a power grid on a small island off the coast of New York and tested a worst-case power grid hacking scenario. If the grid completely shuts down, power companies must perform what's called a black start: restarting generation without any help from the downed grid, bootstrapping from small, isolated power sources and re-energizing the network in stages, kept apart from compromised equipment for obvious reasons. It's akin to having to rebuild your server farm from the ground up with backups, different equipment, and a different network.
Chances are your business has already been hacked or is not prepared to handle a serious hacking attempt. The cost of data integrity must be added to the fiscal considerations of building or maintaining a business; anyone who fails to do so will fall behind, losing the trust of their supporters and customers and potentially facing prohibitive fines from as many regulatory commissions as there are jurisdictions in which they do business.
GDPR and similar legislative efforts attempt to create a preventative framework within which we, the weak links, can curtail the worst aspects of a potential hacking disaster before it's too late. Despite the "adequacy status" the EU confers on third countries, it's impossible to accurately predict every cyber threat from inside a nation's borders. Given how often breaches trace back to human fault, the bulk of data-exfiltration prevention comes down to thoroughly vetting potential employees for honesty, trustworthiness, and humane values. The remaining few percentage points can be managed from the inside out: securing the core with encryption, strong authentication, and an adaptable, data-first security model. Design your business with data security in mind and, barring human error, the investment will pay off a hundredfold in consumer confidence and peace of mind.
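As a small illustration of what "securing the core" with encryption can look like at the application layer, the sketch below encrypts a single sensitive field using the Python cryptography package's Fernet recipe. The field name is hypothetical and the key handling is deliberately simplified; in practice the key would come from a key-management service, never a hard-coded variable.

```python
# Minimal sketch of field-level encryption with the "cryptography" package
# (pip install cryptography). Key handling is simplified for illustration; a
# real system would fetch the key from a KMS or HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # random key, base64-encoded bytes
fernet = Fernet(key)

# Hypothetical sensitive field the application needs to store.
ssn_plaintext = b"123-45-6789"

token = fernet.encrypt(ssn_plaintext)   # authenticated, timestamped ciphertext
stored_value = token.decode()           # safe to persist in the database

# Later, an authorized code path decrypts it.
recovered = fernet.decrypt(stored_value.encode())
assert recovered == ssn_plaintext
```

The design point is that the database only ever sees ciphertext, so a leaked backup or a compromised query path exposes tokens rather than the data itself, provided the keys are kept out of reach.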