Another program, which NDR does not mention by name but which has been reported on earlier and is most likely called MonsterMind, could then decide automatically, with no human involvement, whether to fire back at the suspected origin of attacks.
Bamford: Now, also looking a little bit into the future, it seems like there’s a possibility that a lot of this could be automated, so that when the Cyber Command or NSA sees a potential cyber-attack coming, there could be some automatic devices that would in essence return fire. And given the fact that it’s so very difficult to—or let me back up. Given the fact that it’s so easy for a country to masquerade where an attack is coming from, do you see a problem where you’re automating systems that automatically shoot back, and they may shoot back at the wrong country, and could end up starting a war?
Snowden: Right. So I don’t want to respond to the first part of your question, but the second part there I can use, which is relating to attribution and automated response. Which is that the—it’s inherently dangerous to automate any kind of aggressive response to a detected event because of false positives.
Let’s say we have a defensive system that’s tied to a cyber-attack capability that’s used in response. For example, a system is created that’s supposed to detect cyber-attacks coming from Iran, denial of service attacks brought against a bank. They detect what appears to be an attack coming in, and instead of simply taking a defensive action, instead of simply blocking it at the firewall and dumping that traffic so it goes into the trash can and nobody ever sees it—no harm—it goes a step further and says we want to stop the source of that attack.
So we will launch an automatic cyber-attack at the source IP address of that traffic stream and try to take that system offline. We will fire a denial of service attack in response to it, to destroy, degrade, or otherwise diminish their capability to act from that.
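The asymmetry Snowden describes can be made concrete in a short sketch. This is purely illustrative, not any real system: the class names, threshold, and addresses are all hypothetical. The point it shows is that the same crude detection rule feeds both responses, so a false positive is harmless for the defensive action but irreversible for the automated counter-attack.

```python
# Illustrative sketch only (not any real system): why automating a
# "hack back" keyed to a packet's source IP is dangerous. All names,
# thresholds, and addresses here are hypothetical.
from dataclasses import dataclass

@dataclass
class TrafficEvent:
    source_ip: str        # easily spoofed, or a compromised third party
    packets_per_sec: int  # crude volume signal used by the detector

ATTACK_THRESHOLD = 10_000  # arbitrary cutoff; real detectors also misfire

def drop_at_firewall(event: TrafficEvent) -> str:
    # Defensive action: discard the traffic; a false positive costs nothing.
    return f"dropped traffic from {event.source_ip}"

def automated_hack_back(event: TrafficEvent) -> str:
    # Aggressive action: a false positive here strikes whoever owns the
    # source address -- a hospital, or a hacked office in a third country.
    return f"launched counter-attack against {event.source_ip}"

def respond(event: TrafficEvent, aggressive: bool) -> str:
    # Both responses hang off the same fallible volume threshold.
    if event.packets_per_sec < ATTACK_THRESHOLD:
        return "no action"
    return automated_hack_back(event) if aggressive else drop_at_firewall(event)

# A burst of diagnostic traffic from a hospital trips the same rule:
hospital = TrafficEvent(source_ip="203.0.113.7", packets_per_sec=50_000)
print(respond(hospital, aggressive=False))  # harmless: traffic discarded
print(respond(hospital, aggressive=True))   # irreversible: wrong target hit
```

Nothing in `TrafficEvent` distinguishes an attacker from a spoofed or compromised source, which is exactly the attribution gap the next paragraphs turn to.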
But if that’s happening on an automated basis, what happens when the algorithms get it wrong? What happens when instead of an Iranian attack, it was simply a diagnostic message from a hospital? What happens when it was actually an attack created by an independent hacker, but you’ve taken down a government office that the hacker merely happened to be operating from?
What happens when the attack hits an office that a hacker from a third country had hacked into to launch that attack? What if it was a Chinese hacker launching an attack from an Iranian computer targeting the United States? When we retaliate against a foreign country in an aggressive manner, we, the United States, have stated in our own policies that that is an act of war, justifying a traditional kinetic military response.
We’re opening the doors to people launching missiles and dropping bombs by taking the human out of the decision chain for deciding how we should respond to these threats. And this is something we’re seeing more and more in traditional warfare as well, as our methods become increasingly automated and roboticized, such as through drone warfare. And this is a line that we as a society, not just in the United States but around the world, must never cross. We should never allow computers to make inherently governmental decisions in terms of the application of military force, even if that’s happening on the internet.