Last Friday around 8am UTC, an unprecedented cyberattack began: malware known as WannaCry started spreading through networks, locking down computers and demanding US$300 in Bitcoin as a ransom to unlock them.

The major news story broke when divisions of the British National Health Service (NHS) began turning away patients after they were hit with the malware. Throughout the day, WannaCry spread throughout the UK, Russia, Australia, the US, Germany, India, and China. The attack dealt blows to organizations such as FedEx, Telefónica (Spain), Deutsche Bahn (Germany), Sun Yat-sen University (China), and Russian Railways.

Around six hours after the attack began, it was unexpectedly halted by Marcus Hutchins, a 22-year-old British cybersecurity researcher. Hutchins “accidentally” activated a kill switch for the malware, shutting down WannaCry worldwide.

It turns out that WannaCry was programmed to check whether a specific unregistered domain name could be reached. Hutchins registered that domain in order to track the virus, not realizing that doing so would kill the malware. It just happens that the authors of WannaCry made a rather silly mistake.
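The logic of such a kill switch is simple. Here is a minimal, purely illustrative sketch (the domain below is a made-up placeholder, not WannaCry’s actual kill-switch domain): the malware tries to resolve the domain, and only proceeds if the lookup fails.

```python
# Illustrative sketch of a kill-switch check; the domain is a hypothetical
# placeholder, NOT the real WannaCry kill-switch domain.
import socket

KILL_SWITCH_DOMAIN = "example-killswitch-domain.test"  # placeholder

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain resolves (i.e. is registered)."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True   # domain resolves: the malware halts
    except OSError:
        return False  # lookup fails: the malware proceeds

if kill_switch_active():
    print("domain resolves: malware would halt")
else:
    print("domain unreachable: malware would continue")
```

Once Hutchins registered the real domain, every new infection’s lookup suddenly succeeded, and the malware stopped itself.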

Unfortunately, removing the kill switch wasn’t hard, and by now many variants of WannaCry have emerged that bypass this Achilles heel.

Unlike other ransomware, getting hit by WannaCry is probably not your fault

WannaCry is a particularly powerful piece of ransomware. Ransomware works by holding data or services hostage and demanding a payment. In the cyber world, the data is often held hostage on a victim’s own computer. It is encrypted: scrambled using a secret key known only to the attacker. The attacker then demands a payment in exchange for unlocking the data.
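To make the model concrete, here is a toy sketch of the encrypt-and-ransom idea. It uses a simple XOR cipher for brevity; real ransomware like WannaCry uses strong hybrid cryptography (symmetric encryption for files, public-key encryption for the keys), so this is illustration only.

```python
# Toy illustration of the ransomware model: data scrambled with a secret
# key that only the "attacker" holds. XOR is used for brevity; real
# ransomware uses strong cryptography (e.g. AES plus RSA).
import os

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key; applying it twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = os.urandom(16)                 # known only to the attacker
original = b"patient records"
scrambled = xor_crypt(original, secret_key)  # victim's data, now unreadable
restored = xor_crypt(scrambled, secret_key)  # recoverable only with the key

assert restored == original
```

Without the key, the victim cannot feasibly reverse the scrambling, which is what gives the attacker leverage.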

Ransomware is not new; in fact, it has become increasingly widespread over the last five years or so. According to Symantec’s most recent Internet Security Threat Report, the number of ransomware detections increased 36 percent from 2015 to 2016; the average demanded ransom more than doubled, and the number of different malware families tripled. In summary, Symantec called ransomware “the most dangerous cyber crime threat facing consumers and businesses in 2016.” And that was before WannaCry.

This attack is different from the average ransomware. Typically, ransomware spreads via email or infected web domains. As long as users don’t click on bad links or open suspicious attachments, they stay out of danger. Email filters block hundreds of thousands of these attacks every day, before users even see them.

Unfortunately, WannaCry can spread throughout networks using other channels. After a computer is infected, WannaCry uses the Windows Server Message Block (SMB) protocol to spread to other computers both over the Internet and over internal networks (such as local networks within a company). This is scary, because secondary targets don’t have to make any web-browsing or email mistakes. Getting infected is like being mugged in your own home, despite keeping the door locked.

Who is to blame? 

Microsoft? Some people have blamed Microsoft for not preventing the WannaCry attack; after all, it was a mistake in Windows code that provided the weakness.

Surprisingly, though, Microsoft spotted the mistake this past March, and released a security update to fix it. This was a month before attackers learned that the NSA had also spotted the problem, and two months before this weekend when WannaCry used the bug in its ransomware. So what happened?

Some users knew about the update, but didn’t install it. Other users – particularly corporations with outdated technology – had Windows versions that didn’t receive the patch. Microsoft didn’t release updates for versions of operating systems that they had already ceased supporting.

Is Microsoft to blame? In my opinion, hardly. Many infected machines were running Windows XP, by now a 15-year-old system. Best practices for patching can hardly be expected to cover technology that aged.

Moreover, once Microsoft realized the extent of the attack, they released a security patch for Windows XP, Windows 8, and Windows Server 2003 systems that were previously not supported. This is one of the only times that Microsoft has ever gone back to fix unsupported versions of its operating system.

National Health Service? If Microsoft gets off the hook, the NHS (also one of the main victims of the attack) does not. This is a clear example of why organizations should not run infrastructure on outdated operating systems. Running Windows XP is a huge security risk, especially on critical systems such as those in hospitals.

Apparently, a group of security advisors also specifically gave NHS hospitals a patch that would have prevented their systems from being infected by WannaCry. But NHS didn’t deploy it.

It is true that many, many critical systems (power plants, water treatment facilities, airports) run outdated operating systems such as Windows XP, so hopefully the damage to the NHS will serve as a warning.

The NSA? Yes, the NSA is probably to blame. It is not simply the case that the NSA developed a weapon. Rather, the NSA realized that the weapon (the security vulnerability in Windows) existed, and they chose not to inform Microsoft.

The NSA had the choice between keeping a secret weapon to themselves, and ensuring that the weapon could never be used by revealing the flaw to Microsoft. They chose the secret weapon route – but someone spilled their secret.

Still, a full assessment of the NSA’s blame would likely require an entire thesis on the morality of offensive cyber capabilities. Most such capabilities rely on finding bugs, and many of those bugs are probably in systems that are ubiquitous, such as Windows.

Therefore, the NSA exploit behind WannaCry belongs to the bread and butter of offensive cyber weapons. If cyber warfare is a staple of modern combat, should governments hold back developing their offense? Maybe the answer is yes, but I’m waiting to read that thesis before I make up my mind.

So far, there are no plausible motivations

Ransomware is financial malware, through and through. Encrypt a system, and demand payment. The amount collected in ransom payments is public, as are all Bitcoin transactions. As of 4pm in New York, WannaCry had earned about $74,000.

But $74,000 is peanuts! A business scheme needs to be judged not only on the potential revenue, but also on the risk. Financially astute attackers want to gather a large profit with low noise, in order to attract as little law enforcement response as possible.

Imagine that there are three hackers behind WannaCry, and that it cost each of them $5,000 in effort and exploits to launch the attack. Then they are left with $20,000 each after three days of world-wide mayhem. That’s hardly enough to risk getting apprehended as the most notorious cybercriminals of the year. Either the attackers expected more people to pay, or they didn’t expect the attack to spread so quickly and to make so much noise.
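The back-of-envelope arithmetic behind that scenario is easy to check. The figures here are the hypothetical ones from the text (three attackers, $5,000 in costs each), not real attacker economics:

```python
# Back-of-envelope check of the hypothetical scenario in the text.
revenue = 74_000    # total ransom collected so far (from the article)
hackers = 3         # hypothetical number of attackers
cost_each = 5_000   # hypothetical cost per attacker

profit_each = (revenue - hackers * cost_each) / hackers
print(round(profit_each))  # roughly $19,667, i.e. about $20,000 each
```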

Other motivations are not much more logical. A state actor testing out a weapon for future use in warfare would also have tried to remain under the radar. And a terrorist group interested in simply causing damage would not demand a ransom.

So far, it seems that the press has focused on the attack itself, rather than on who launched it or on the red herring of what motivated them. We are just beginning to see signs that point to North Korea, but the jury is still out. Let’s stay tuned.

Jeffrey Pawlick is a PhD Candidate in Electrical Engineering at the Tandon School of Engineering, New York University.