Ask a security professional what the acronym "CIA" in data security expands to. If you get the response "confidentiality, integrity, availability," you have an indicator of why the state of cybersecurity is so woeful today. It also might explain why data breaches have been on the rise, and why 4.1 billion records were exposed in just the first half of 2019.
The Twitter breach, however, is an unusual one. It provides an example of what happens when professionals don't understand the purpose of a specific technology (the one the CIA acronym actually refers to) and the attacks it might deter.
In the late '80s and early '90s, when public-key cryptography made a splash and Phil Zimmermann released Pretty Good Privacy as open-source software through MIT, early adopters used the acronym CIA to highlight that public-key cryptography was unique in providing three business benefits in a single technology:
• Confidentiality: A sender could generate a symmetric key, encrypt the sensitive information with that key, and then use public-key cryptography to encrypt the symmetric key with the recipient's public key. The encrypted information and the encrypted key were packaged into a single object and sent to the recipient over email. Upon receiving the object, the recipient would use their private key to recover the symmetric key and, with it, decrypt the sensitive information.
• Integrity: Public-key cryptography preserves the integrity of information transmitted from sender to receiver through the use of a digital signature. The novelty of the digital signature lies in the fact that the recipient can verify it with just the sender's public key. If the signature verifies correctly, the message was not altered in transit.
• Authenticity: If the receiver is certain of the sender's public key, then as long as the sender maintained secure control of the corresponding private key, no one other than that sender could have produced the signed message. Not only was the message not tampered with; it was authentic.
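The three benefits above can be sketched in a few lines of toy, textbook RSA. Everything here is illustrative only: the tiny primes, the unpadded RSA operations and the SHA-256 XOR keystream are stand-ins for the vetted primitives (RSA-OAEP, AES-GCM, Ed25519) a real system would use.

```python
# Toy, textbook-RSA sketch of the three "CIA" benefits. NOT secure:
# tiny primes, no padding, homemade stream cipher -- illustration only.
import hashlib
import secrets

# --- Toy RSA key pair ---
p, q = 1000003, 1000033          # small primes, illustration only
n = p * q                        # public modulus
e = 65537                        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def rsa_encrypt(m: int) -> int:  # anyone with (e, n) can do this
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:  # only the holder of d can do this
    return pow(c, d, n)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)          # "encrypt" the hash with the private key

def verify(message: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h   # anyone with (e, n) can check

# --- Confidentiality: hybrid encryption, as described above ---
msg = b"wire $10 to Alice"
sym_key = secrets.token_bytes(2)             # toy key, small enough for n
keystream = hashlib.sha256(sym_key).digest()
ciphertext = bytes(a ^ b for a, b in zip(msg, keystream))
wrapped_key = rsa_encrypt(int.from_bytes(sym_key, "big"))

# Recipient: unwrap the symmetric key, then decrypt the message
recovered = rsa_decrypt(wrapped_key).to_bytes(2, "big")
plaintext = bytes(a ^ b for a, b in
                  zip(ciphertext, hashlib.sha256(recovered).digest()))
assert plaintext == msg

# --- Integrity and authenticity: sign, then verify ---
sig = sign(msg)
assert verify(msg, sig)                         # intact and authentic
assert not verify(b"wire $10 to Mallory", sig)  # tampering detected
```

The asymmetry is the whole point: anything computed with the private key (the signature) can be checked by anyone holding only the public key, which is what makes third-party verification of a message's origin possible at all.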
Because Twitter users were authenticated only by a password (perhaps in conjunction with a one-time PIN (OTP) sent to a mobile phone or email), and because Twitter was not acquiring digital signatures from users (and verifying those signatures before disseminating tweets over its network), Twitter had little means of determining the authenticity of those tweets until the original users themselves complained or the nature of the messages raised red flags.
Even if Twitter had been using the strongest authentication protocol, FIDO2, which Twitter (along with Google, Facebook, GitHub, Salesforce and others) supports on its site, such a compromise would still have been possible: Twitter's "task chain" from a tweet's origin to its destination on readers' computing devices did not include digital signatures to verify each message's authenticity.
This problem is not unique to Twitter — the vast majority of internet transactions do not use any mechanism to verify the authenticity of messages or preserve their integrity when stored.
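As a sketch of what "preserving integrity when stored" could look like, the snippet below tags each record at write time and refuses to load any record whose tag no longer matches. Python's standard library has no public-key primitives, so this uses an HMAC as a symmetric stand-in; a public-key deployment would store a digital signature verifiable with the writer's public key instead. The SECRET key and record fields are hypothetical.

```python
# Sketch: tamper-evident storage of records. HMAC is a symmetric
# stand-in for a digital signature (stdlib has no public-key crypto).
import hashlib
import hmac
import json

SECRET = b"server-side integrity key"  # hypothetical; in practice, from a KMS

def store(record: dict) -> dict:
    """Serialize a record and attach an integrity tag."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def load(stored: dict) -> dict:
    """Verify the tag before trusting the record."""
    payload = stored["payload"].encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, stored["tag"]):
        raise ValueError("record failed integrity check")
    return json.loads(payload)

row = store({"from": "acct-1", "to": "acct-2", "amount": 100})
assert load(row)["amount"] == 100      # intact record loads cleanly

row["payload"] = row["payload"].replace("100", "999")  # edited at rest
tampered = False
try:
    load(row)
except ValueError:
    tampered = True
assert tampered                        # the edit is detected on read
```

The design choice worth noting is verify-on-read: the check happens every time the record is consumed, so an attacker who modifies data at rest cannot have it silently acted upon downstream.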
One of the most egregious examples of this failure is the infamous Bangladesh Central Bank heist of 2016, which not only exploited policy and technical failures at the Bangladeshi bank, but also relied on procedural failures at the Federal Reserve Bank of New York (where the Bangladesh Central Bank maintained its depository account of U.S. dollars) and in the SWIFT network that transfers money electronically for a huge segment of the world's financial institutions.
While the technical details are not available to the general public, it is my opinion that had SWIFT and/or the Federal Reserve insisted on verifying the provenance of transactions through digital signatures based on public-key cryptography, the attack would not have succeeded even with the compromise of the SWIFT computer at the Bangladeshi bank.
Another attack that could have been prevented with the use of public-key cryptography (perhaps with world-changing consequences) is the attack on the Democratic National Committee during the 2016 U.S. presidential election.
What IT professionals must take away from these examples of high-profile attacks is this: When the rewards are high, attackers have sufficient resources to execute the attack — and get away with it.
But this is only true so long as IT organizations don't recognize that the game has changed and that the security defenses that worked 20 years ago no longer deter professional attackers.
The ancient adage of keeping the "barbarians at the gate" cannot apply to a world where it is extraordinarily difficult to keep attackers out of your network. IT professionals have to operate on the assumption that attackers are already within the network, and the objective is to prevent them from getting to sensitive data and/or executing unauthorized transactions.
The time to learn the lessons from these breaches and implement better defenses is now. To defend against attacks that leverage quantum computing capabilities, IT professionals will have to take another major leap in defenses within the next few years. My last article covered some actionable advice for protecting against data breaches.
Companies attempting to earn and preserve the trust of their customers need to do far more to engender that trust in the 21st century. Given the average cost of a data breach — currently $3.86 million globally and $8.64 million in the U.S. — company executives must adapt to managing technology risk with the same diligence as business or financial risk.
They must also recognize that investing in cybersecurity controls is an ongoing effort — attackers do not remain static, but rather evolve by learning from their own failures and others', while technology providers continue to release new bugs regularly.
As a result, not only must cyber risk mitigation be implemented as a continuous improvement process, but it must introduce defenses that go beyond the mundane.
ALSO SEEN IN: Forbes