When it comes to meatworld bombs, the more powerful they are, the bigger their impact. With a cyber weapon, the opposite is true: the more powerful it is, the more limited the damage it causes. The deeper a bug can get into any given system, the less likely it is to trouble anything else.
And that's why cyber weapons aren't real weapons, says Thomas Rid, a reader in War Studies at King's College London and co-author of a new paper published today in the RUSI Journal.
Rid, the war boffin who brought us the theory that cyber war wouldn't actually be war because no one gets killed, has some more soothing common sense for those worried about cyber-geddon:
[Having] more destructive potential is likely to decrease the number of targets, the risk of collateral damage and the political utility of cyber-weapons.
Rid's point is that cyber weapons that can attack any web target tend to be low-level and quite crap: DDoS bots that can take a website offline temporarily or deface it, tools that cause inconvenience and sometimes embarrassment.
Bugs or malicious software threats that could cause significant damage to a system – eg, penetrating databases for specific sensitive internal data or causing particular real-world machines to malfunction – would need to be so specific to their target that they would be harmless to almost everything else and cause little to no collateral damage.
Take, say, the worst of the worst – Stuxnet – the virus that allegedly set the Iranian nuclear programme back two years: it spread over 100,000 Windows computers en route to Iran's critical computer network and didn't damage any of its carriers.
Cyber-weapons with aggressive infection strategies built in, as the popular argument goes, are bound to create uncontrollable collateral damage. The underlying image is that of a virus escaping from the lab to cause an unwanted pandemic. But this comparison is misleading.
What we shouldn't worry about
So while a DDoS can cause what Rid describes as "second order" damage, in itself the code doesn't harm a system, take data or cause any physical damage to a person.
Also, we don't need to fret too much about crazed warrior hackers from North Korea reducing all figures in the stock exchange to zero. Most high-profile systems that provide services, like the Stock Exchange, have active protection and back-up systems.
Weaponised code does not come with an explosive charge. Potential physical damage will have to be created by the targeted system itself, by changing or stopping ongoing processes.
Simply knocking a site offline would alert the target to the problem immediately and probably cause a back-up to kick in. Serious damage would require an intelligent malware agent capable of changing ongoing processes while hiding the changes from their operators, Rid says. To our knowledge, this has not yet been created, and building something so complex would require the backing and resources of a state, he added.
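To see why that combination is hard, consider what such an agent has to do: drive the process out of spec while keeping the operator's readings consistent with normal behaviour. Here's a minimal toy sketch of that idea in Python – a "record and replay" trick of the kind attributed to Stuxnet. Everything here is hypothetical (the `Centrifuge` process, the readings, the attack class); it models no real SCADA product or protocol, and is purely illustrative of the concept.

```python
# Toy illustration of the "change the process, hide the change" problem.
# All names are hypothetical; this models no real system or malware.
from collections import deque

class Centrifuge:
    """Stand-in for a physical process with one controlled variable."""
    def __init__(self):
        self.rpm = 1000.0          # normal operating speed

    def read_sensor(self):
        return self.rpm

class ReplayAttack:
    """Records normal readings, then feeds them back to the operator's
    display while silently driving the process out of spec."""
    def __init__(self, process, record_len=5):
        self.process = process
        self.recorded = deque(maxlen=record_len)
        self.active = False

    def observe(self):
        reading = self.process.read_sensor()
        if not self.active:
            self.recorded.append(reading)  # learn what "normal" looks like
            return reading                 # pass the true value through
        # Attack phase: push the process harder each tick...
        self.process.rpm *= 1.1
        # ...but show the operator a replayed "normal" value instead.
        replayed = self.recorded[0]
        self.recorded.rotate(-1)
        return replayed

proc = Centrifuge()
attack = ReplayAttack(proc)

for _ in range(5):                 # learning phase: display is truthful
    attack.observe()

attack.active = True
shown = [attack.observe() for _ in range(5)]

print(f"operator sees: {shown}")         # steady ~1000 rpm readings
print(f"actual speed: {proc.rpm:.0f}")   # process is far out of spec
```

Note what even this toy needs: knowledge of which variable to push, how fast, and what "normal" looks like on the operator's screen – all of which is specific to one target, which is exactly Rid's point about why such weapons don't generalise.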
But even if smart new high-powered cyber weapons were created, though they "open up entirely new tactics", they also have "novel limitations".
He adds that "all publicly-known cyber-weapons have far less 'firepower' than is commonly assumed", concluding: "At closer inspection cyber-weapons do not seem to favour the offence."
What we should worry about
Speaking to The Reg earlier this week, Rid said that the systems we really should be worried about are industrial control systems – SCADA – computer systems that control the national grid, public transport, chemical mixing in factories and prison doors. These are the systems that he claims have poor security set-ups:
"We don't see enough pressure on control systems vendors and creators, security in these areas is often shocking and I don't know how they've got away with it for so long," the war boffin complained.
Control systems software is deeply specialised, and often even the people running it don't understand it:
Rid said: "These types of software are so specialised that people running the software have to go back to the manufacturer when there's a problem, because their own help desks can't help."
Because systems like this are so specific, attacks could realistically come only from insiders.
Rid's top tip is that it's the people who work on esoteric software – maverick insiders – that we should worry about rather than patriotic foreign hack-warriors.
One of the computer hacks with the greatest physical impact ever came from an angry Australian sewage worker who used his knowledge of pumping systems to pay back an employment grudge. Rid recounts the case:
One of the most damaging breaches of a SCADA (Supervisory Control and Data Acquisition) system happened in March and April 2000 in Maroochy Shire, in Queensland, Australia.
After 46 repeated wireless intrusions into a large wastewater plant over a period of three months, a lone attacker succeeded in spilling more than a million litres of raw sewage into local parks, rivers and even the grounds of a Hyatt Regency hotel.
The author of the attack was 49-year-old Vitek Boden. His motive was revenge; the Maroochy Shire Council had rejected his job application. At the time Boden was an employee of the company that had installed the Maroochy plant's SCADA system.
Boden's inside knowledge and the software he had on his laptop allowed him to take control of 150 sewage pumping stations in the area. Otherwise, pulling off such a physically significant and gungy attack would require such detailed knowledge of a particular system and its weaknesses that very few people would be capable of it.
What the government should do
Though the threat may be lower than media hype suggests, the government needs to know more, and there needs to be more open debate on the risks of cyber weapons, Rid said. "We need more expertise in these areas in both government and in public," added the war boffin.
He said that government needs to understand industrial control systems.
In short, Rid suggests ditching the spooks: "If we put GCHQ in charge of cyber security, by their clearly secretive nature, they won't be able to put public pressure on businesses to make necessary reforms and improvements to their products. That's not where public pressure comes from."
Then he suggests getting in some hackers: "Why not have a team of white-hat hackers – people who expose weaknesses and make them public if they are not fixed?"
And then, he adds, the debate needs to be clarified a bit – so that the focus is on the real threats: "If we look at the world of brick-and-mortar weapons, we wouldn't call a bag a weapon though it could be used to carry away stolen goods. But in cyber space we lose our common sense and call everything a weapon. We need to inform the public debate."