At a hearing on SolarWinds last Friday before the House Oversight and Homeland Security committees, U.S. representatives seized on a statement by SolarWinds’ former CEO that in 2017 an intern had mistakenly posted on the public-facing internet a password, “SolarWinds123,” that could have provided a malicious actor with full access to the company’s update server.
Why it matters: Last week’s congressional hearings — and the confused and confusing response to the hack by some policymakers, industry mavens, and government agencies — underscored the inherent tensions in “achieving” cybersecurity in a world of cross-cutting, and often mutually exclusive, goals.

Though the compromised password was just one theory for how the Russian intelligence services may have compromised SolarWinds — and there is no evidence that this was, indeed, how they accomplished their initial intrusion — an incredulous member of Congress pounced on the idea that such a potentially monumental cybersecurity failure could have originated in such a banal human error.
Human error provides a good, easy-to-understand framing for complex problems, one that is particularly attractive to politicians and private actors alike insofar as it individualizes responsibility — placing it on the shoulders of a single intern, no less, in the SolarWinds case — while avoiding the deeper structural issues at play in cybersecurity.
For sure, individual mistakes can and do sometimes have massive systemic effects. One of the greatest intelligence coups in U.S. history, the Venona Project, was born from a World War II-era mistake by Soviet officials wherein some keys to “one-time pads” — among the most secure forms of coded communication — were used twice, allowing U.S. cryptanalysts to crack Soviet ciphers. Human failure has also led to massive compromises in the digital era, with deadly real-world effects. For instance, in 2004, a CIA officer attempted to send an encrypted digital message to one of the agency’s assets within Iran — but, in a kind of “carbon copy” from hell, mistakenly included information in the transmission that “could be used to identify virtually every spy the CIA had in Iran,” reported James Risen in his 2006 book “State of War.” Tragically, this particular Iranian asset was actually a double agent, and Iranian security services rounded up the CIA’s Iranian network as a result of this single botched message.
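The Venona weakness can be shown with a toy sketch. The messages and key below are illustrative inventions, with XOR standing in for the pad arithmetic: a one-time pad is unbreakable when the key is used once, but reusing the key lets the two ciphertexts cancel it out entirely.

```python
# Why reusing a one-time-pad key is fatal (the flaw behind Venona).
# All values here are illustrative; XOR models the pad operation.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\xc0\xde\xfa\xce\xbe\xef\x01\x02\x03"  # a "one-time" pad
p1 = b"ATTACK DAWN"
p2 = b"RETREAT NOW"

c1 = xor_bytes(p1, key)  # first message: encrypted correctly
c2 = xor_bytes(p2, key)  # second message: the same key, reused

# XORing the two ciphertexts cancels the key completely:
#   c1 ^ c2 == (p1 ^ key) ^ (p2 ^ key) == p1 ^ p2
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(p1, p2)

# An analyst who guesses one plaintext (a "crib") recovers the
# other outright -- without ever learning the key:
recovered = xor_bytes(leak, p1)
assert recovered == p2
```

The key never appears in the final step: once two ciphertexts share a pad, cryptanalysis reduces to guessing likely words in one message, which is roughly what Venona’s analysts did at scale.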
The big picture: Though human error is an ever-present cybersecurity threat — you can’t stop everyone from clicking on a malicious link — there are deeper, thornier issues at work. And these cannot be solved, if they are indeed solvable, without trading some important goods for others.
The ubiquity today of managed service providers — companies that supply outsourced platforms for IT management and other core network functions — guarantees that firms have less insight into, and control over, the software running on their systems.
The ease of not building these capabilities in-house (if doing so is even possible) — saving employee time, increasing interoperability and, perhaps most importantly, helping the bottom line — is at least partially offset by the increased risk that reliance on these services may bring.
The use of such providers can make one’s own networks more opaque, and by using these platforms, firms are also importing whatever flaws may lie quietly dormant within them — as happened, disastrously, with SolarWinds.

Moreover, compromises via software supply chains like SolarWinds are particularly pernicious because it is so difficult to pinpoint their origin, and once they spread from company to company, or from platform to platform, rooting out hackers can be a logistical and counterintelligence nightmare. Private companies are outsourcing work to other private companies, which are themselves providing services to government agencies, which may not realize just how exposed they are, even on unclassified networks.

The bottom line: The imperatives of commerce in an era of cutthroat competition, and the need for the smooth functioning of large, complex bureaucracies in the digital era, will inevitably lead to greater cybersecurity risks — even if organizations or government agencies attempt to institute “zero trust” security models.
There will never be a “silver bullet” in cyber defense — and if there were, it would likely be deemed entirely unpalatable, thanks to deeply held norms and assumptions shared among U.S. cybersecurity institutions and actors. For instance, the National Security Agency cannot monitor all private U.S. internet service providers as part of some massive early detection system against cyber threats. Even setting aside whether such an arrangement would be possible or desirable given the potentially vast civil liberties violations, there are more pedestrian reasons for opposing it.
Lawful interception systems that concentrate power also concentrate risk: If a foreign power secretly gained access to such a system and burrowed in, it could be cataclysmic.
Between the lines: Private digital networks are inherently fragmented and opaque to outsiders — precluding any one-size-fits-all upstream response. Greater transparency from the private sector might help ameliorate some of these problems, but it won’t solve them. At last week’s congressional hearings, for example, Microsoft president Brad Smith suggested imposing legal duties on private companies to report breaches, which could help stanch the spread of some compromises by facilitating important information-sharing.
But this is still an effort at mitigation, not outright prevention. Other proposals floated occasionally — like the notion that private companies should be able to “hack back” against attackers in their networks — are fanciful at best and delusional at worst.

Tensions between offense and defense in cyberspace
The United States’ offensive cyber operators aim to manipulate the broader IT environment. Defenders within the public and private sectors, in turn, have their own prerogatives. These needs will always be in tension — and are most likely unresolvable.

The big picture: While the NSA does important defensive work, reliance on — and facilitation of — insecurity is a core part of its offensive mission. Indeed, according to an NSA document leaked in 2013, the agency carried out a secret “SIGINT Enabling Project,” the objective of which was to engage “the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs … [to] make the systems in question exploitable through SIGINT collection.” Nothing underlines the tension in the NSA’s work more clearly than the evidence that it pushed what it knew to be a flawed encryption standard on the National Institute of Standards and Technology (NIST), so that NIST in 2006 would (unknowingly) validate it as safe for general use — all while the NSA was able to crack it.
Moreover, the NSA and the CIA need to keep vulnerabilities of all sorts secret in order to exploit them against intelligence targets. Expecting them not to do so would be folly.
But this is a decision made within the government regarding its own prerogatives, which may not align perfectly with the welfare or wishes of the wider public. Consider a hypothetical: legislators want to require Microsoft products to meet certain security standards for general public use. What if those standards interfered with important collection programs for the CIA or NSA, whose targets abroad were running Windows? Whose needs should be prioritized if national security is deemed to be at stake? And who should get to make that determination?
Between the lines: Many U.S. intelligence officials view the very porousness and insecurity of the digital domain as an immense structural opportunity and advantage. They see good reasons to exploit these avenues for collection in the country’s intelligence activities abroad (and sometimes at home).

Yes, but: Such activities do not mitigate risk or online insecurity. In fact, they almost certainly increase it more broadly. Some argue that this trade-off may be worth making. But it’s a trade-off nonetheless.
The bottom line: In cybersecurity, we’ll never have it all.
The philosopher Isaiah Berlin wrote: “The notion of the perfect whole, the ultimate solution, in which all good things coexist, seems to me to be not merely unattainable — that is a truism — but conceptually incoherent; I do not know what is meant by a harmony of this kind. Some among the Great Goods cannot live together. That is a conceptual truth. We are doomed to choose, and every choice may entail an irreparable loss.”
Berlin was talking about moral choices — like the “goods” of liberty and equality, which are often mutually exclusive. In cybersecurity, such “goods” — which are often rooted in the deeper prerogatives of national security, individual privacy and the system of free enterprise — also sometimes work at inherent cross-purposes. We cannot maximize them all simultaneously.
My thought bubble: The pursuit of cybersecurity, along with the managed maintenance of desirable cyber insecurities, must be overseen by policymakers. They owe the public a fuller account of their ethical and practical calculus.