Fun fact: There's a business that will accept Bitcoin (and other cryptocurrencies) to let you search supposedly “secure” files in AWS and Azure. AWS and Microsoft cloud contracts clearly state that the security of applications, data, and files is a customer responsibility, not that of the cloud service provider, which is why they won't do anything about it even though this site operates in plain sight. In a nutshell, the Shared Responsibility model says that AWS et al. are solely responsible for the security of the cloud, while customers are responsible for security in the cloud.
Given these breaches (not all of which occurred in the cloud), what are the chances that the people on this list understood the risks of what they were doing? Spectre and Meltdown were zero-day vulnerabilities; until AWS, Azure, and Google patched their software and firmware, what could a customer possibly do? And what if attackers knew about these vulnerabilities before they were publicly disclosed, and were already exploiting them? While this is conjecture, there is a thriving underground market for zero-day vulnerabilities; businesses have a responsibility to factor this into their risk management equations, but how many really do?
In an on-premises environment, the chances of a completely unknown user running on a VM next to yours are near zero; only authorized employees of the company can spin up VMs there. But in a public, multi-tenant cloud, having a random unknown user in a VM right next to yours is a given, unless you're paying for a specialized and/or dedicated environment, in which case all you've done is outsource your data center to a Cloud Service Provider (CSP).
If you were worried about getting your wallet stolen, would you feel safer in your car with your family, or on a crowded subway train? It’s easier to protect something valuable in a known environment vs. an unknown one. But with a computer, how many people are really aware of what's going on inside? It’s a black box even to the most knowledgeable professionals because things have become extraordinarily complex.
Managing risk is not a science in the computer field. In the US, we don't even have a government agency that tracks all known breaches, the way we track accidents and mishaps in other fields (automobiles, airlines, shipping, chemicals, rail, etc.). Even the site I referenced above, a private one, got tired of keeping track and stopped recording breaches in 2019. We don't measure the relative strength of authentication technologies on any scale to judge their ability to protect a credential from compromise. The Shared Responsibility model is designed to protect the CSP from liability, because most business managers who sign cloud contracts never read the fine print, or if they do, rarely take the time to understand it. This makes it easy for CSP lawyers to argue before juries or arbitrators about where the CSP's responsibility stops.
The move to the cloud has more to do with accounting than technology. Companies moving to the cloud tend to follow a few basic steps:
- They stop buying computers, network gear, and storage gear; this eliminates depreciable capital expenditures, which is good for the balance sheet
- They lay off a significant number of their employees; this is good for their profit and loss (P&L) statements
- They eliminate their data centers and all the assets/expenditures associated with them; this is good for the balance sheet and P&L statements
If the CEO/CFO/CIO manages this right, a public company's earnings per share (EPS) shoots up, because many of the things dragging profits down have been eliminated, and the earnings look marvelous. That boosts the stock price, the profits executives make on their stock options (likely granted earlier), and any bonuses tied to a successful migration to the cloud.
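To make the accounting effect concrete, here is a deliberately oversimplified sketch. Every figure is made up; taxes, interest, and the real trajectory of the cloud bill are ignored. It only illustrates the mechanism by which shedding depreciation and payroll flatters EPS:

```python
# Oversimplified model of how shedding capital assets and staff can
# inflate EPS. Every number here is hypothetical.

SHARES_OUTSTANDING = 100_000_000
REVENUE = 1_000_000_000  # assume revenue is unchanged by the migration

def eps(revenue: float, expenses: dict[str, float]) -> float:
    """EPS = net income / shares outstanding (taxes ignored for simplicity)."""
    net_income = revenue - sum(expenses.values())
    return net_income / SHARES_OUTSTANDING

before = {
    "cost_of_revenue": 600_000_000,
    "it_payroll": 80_000_000,
    "data_center_depreciation": 50_000_000,
}
after = {
    "cost_of_revenue": 600_000_000,
    "first_year_cloud_bill": 60_000_000,  # looks cheaper, at first
}

print(f"EPS before migration: ${eps(REVENUE, before):.2f}")  # $2.70
print(f"EPS after migration:  ${eps(REVENUE, after):.2f}")   # $3.40
```

The point of the sketch: nothing about the business actually improved, yet EPS rose purely because expense categories moved off the books.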
All goes well for a while, until the complexity of the cloud and its ever-increasing charges start hitting the P&L statements. CSPs quote their prices in micro-units: fractions of a penny per unit of storage, per AES key, per CPU second (Azure, AWS). Unless a CIO has been through the process before, these metrics are entirely new, and when the bills start coming in, they don't know how to dispute the numbers. Unless the company has been independently tracking the file storage, CPU cycles, and network traffic it consumed over the past month, there is nothing to measure the charges against; and, of course, keeping track of that data costs money too, since more cloud services are sold for exactly that purpose. By the time the company realizes it's paying more than it anticipated, it's too late.
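To see how micro-priced metering compounds, here is a back-of-the-envelope estimate. The metrics, rates, and usage figures below are hypothetical placeholders, not actual AWS or Azure pricing:

```python
# Back-of-the-envelope monthly cloud bill built from metered micro-rates.
# All metrics, rates, and usage figures are hypothetical, for illustration.

usage = {
    # metric:            (units consumed,  hypothetical rate per unit, USD)
    "storage_gb_months": (500_000,         0.023),
    "cpu_seconds":       (2_600_000_000,   0.0000125),
    "managed_keys":      (10_000,          1.00),
    "egress_gb":         (80_000,          0.09),
}

total = 0.0
for metric, (units, rate) in usage.items():
    cost = units * rate
    total += cost
    print(f"{metric:>18}: {units:>13,} x ${rate:g} = ${cost:>10,.2f}")

print(f"{'monthly total':>18}: ${total:,.2f}")
```

Each line item looks trivial in isolation; with these made-up rates, the total still lands around $61,000 a month, well over $700,000 a year. And without metering of its own, the company has no numbers with which to dispute the bill.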
They've disposed of all their servers. They've given up their data centers. They've laid off most of their IT Operations staff. It’s impossible to move away.
In my experience, most CIOs of this generation have never managed the process of bringing computing back in-house; for the last 30+ years, they've been managing outsourcing, not insourcing. This could go a long way toward explaining why Amazon earns the bulk of its operating profit from AWS, and why Amazon, Microsoft, Google, and Apple are trillion-dollar companies while everybody else ends up working to keep paying their bills (taxes?) to them. I'm not saying that cloud computing is all bad. But you'd better know what you're getting into before you get in too deep; once you cross a certain point, returning to owning your own hardware and employing your own administrative staff can be daunting.
If you find yourself sympathizing with this narrative, contact StrongKey for a low-cost, open-source, single-tenant key management solution, with on-premises, cloud-based, or hybrid hosting designed with maximum security in mind.