Why it's too hard to hack the hackers

Hackers often turn IT systems against their owners, but what if law enforcement took over hackers' botnets and used them to fight back? It's not so simple, according to University of New South Wales law lecturer and PhD candidate Alana Maurushat.
Written by Michael Lee, Contributor

(Dad's Army image by Tom Rolfe, CC BY-SA 2.0)

Speaking at the Security on the Move event hosted in Sydney last week by AusCERT and SC Magazine, Maurushat said that the security industry needs other ways of halting botnets.

"The problem with the takedown of [some] botnets is that the amount of money and energy that [goes] in from the various security firms, the amount of collaboration and effort involved with ISPs, domain-name servers, university researchers from multiple continents, all coordinating efforts within a small timeframe, [is] incredibly resource intensive, and it's not a sustainable model," she said.

"You can't expect Microsoft, Panda Labs, eBay, whoever it is, to consistently and constantly engage the millions of dollars necessary to access the court system [and] to coordinate all of these particular affairs. It's very good that they're doing so, but it had its particular problems."

One of these problems is that the infected machines, or zombies, that make up the army behind a botnet all contact a command and control centre, which has been the main target for security researchers. While that single point of contact can be shut down, cleaning the hundreds, if not thousands, of individual zombies raises a huge number of ethical and technical issues, and they typically go unaddressed. If left infected, these zombies remain open to anyone else who might seek to incorporate them into their own army.

"If you use an analogy back to war, you can take the General out of the picture — you can sabotage the command and control — but unless you remove all of the soldiers, the command and control can be restored; the General can be replaced."

Australia is considered by some to be ahead of the rest of the world in tackling this issue, thanks to the Internet Industry Association's (IIA) voluntary Code of Practice, also known as the iCode. The iCode requires participating ISPs to notify users when they are infected with malware, and possibly to take other preventative actions, such as quarantining a customer's service.

However, according to Maurushat, the majority of botnet activity occurs in the US and parts of Europe, limiting the iCode's global effectiveness.

"It's an absolutely fantastic initiative [but] how much is this going to put a dent in the botnet situation? If everybody picks up and does these kind of programs, maybe, but at the moment it's merely a slow initiative."

To combat this, it has previously been proposed that, rather than shutting down the command and control centre, it could be used to instruct the infected machines to clean themselves.

The idea of turning the botnet on itself is something that the FBI has trialled on the Coreflood botnet, according to Trend Micro CTO Raimund Genes, who also spoke at the event.

"They sink-holed the command and control server, which meant they re-routed all the traffic from the botnet. They analysed it and they came up with the idea, 'These bots, this malware, has a kill command', so you could remove them."

But not everything went according to plan.

"They tested it in their labs and the malware was not very good in the removal itself. It blue-screened about 10 per cent of all computers in the lab environment, and they decided not to do it."
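The sink-holing step Genes describes can be sketched in a few lines. This is a minimal, hypothetical illustration only: it stands in for a seized command and control server with a plain-TCP check-in protocol, logging each bot's beacon instead of issuing commands. Real botnets such as Coreflood used their own, often obfuscated, protocols, and the names here (`sinkhole`, `SEEN`, the `BEACON`/`NOP` messages) are invented for the example.

```python
# Hypothetical sketch of C&C sink-holing: once the botnet's domain is
# re-routed to a researcher-controlled server, bot check-ins arrive here
# and are logged rather than answered with commands.
import socket
import threading

SEEN = []  # infected hosts observed checking in


def sinkhole(host="127.0.0.1", port=0):
    """Accept bot check-ins and record them; never issue commands."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def serve():
        while True:
            conn, addr = srv.accept()
            beacon = conn.recv(1024).decode(errors="replace")
            SEEN.append((addr[0], beacon))  # analyse the traffic
            conn.sendall(b"NOP")  # benign reply: no instructions
            conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()  # address the "C&C" now resolves to


# Simulated zombie check-in: traffic that once reached the real C&C.
host, port = sinkhole()
bot = socket.create_connection((host, port))
bot.sendall(b"BEACON id=zombie-001")
reply = bot.recv(16)
bot.close()
```

The controversial next step, issuing the malware's own kill command from the sinkhole, is exactly what the FBI stopped short of after the lab failures described above.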

Even without problems in the execution, the idea is contentious, according to Maurushat.

"It all sounds wonderful in theory, but ... the law in most jurisdictions, and, in fact, virtually all jurisdictions, [is that] there's absolutely no exemptions for security research for unauthorised access and misuse provisions. In this country it infuriates me to no end," she said.

Maurushat's frustration stems from her claim that she was practically begged for her input into the Cybercrime Bill. She said that her recommendations and those of the security industry, which included provisions for ethical hacking, were ignored.

"The law doesn't distinguish the motivation for hacking. Any type of unauthorised access or modification is potentially a criminal act. The only reason we don't see more people in the news is because it's up to the public prosecutor as to whether or not they're going to prosecute for that crime."

Maurushat also thought that Australia had missed its chance to have this issue addressed.

"Given we just had the new Cybercrime Bill passed as an Act, [the Federal Government is] not going to want to sit cybercrime again probably until — I don't even want to make a prediction — a long time."

But even if ethical hacking provisions were put in place, and a hacker's intentions were sound, there would still be numerous questions surrounding taking control of someone else's machine.

"If I clean up this machine, what happens if this machine is connected to critical infrastructure?" Maurushat asked.

She said that along with the legal minefield that "cleansing" a remote machine would inevitably create, not to mention the privacy concerns, there is a real possibility that removing malware could crash or damage the machine, or any equipment connected to it.

As examples, Maurushat pointed out that she has found pacemakers that are connected, unencrypted, to networks, as well as crop harvesters and seed planters that are vulnerable to subtle changes that would result in the loss of fields of produce.

"Could you imagine, in Australia, what would happen if the machinery that we ran [to plant] the seeds were 4mm too deep and we had no crops?"
