Monday, June 5, 2017

Tech companies and critics push back after Theresa May calls for global internet regulation

Source: http://www.businessinsider.com/tech-companies-respond-to-theresa-mays-call-for-internet-regulation-after-london-terror-attack-2017-6


  • Theresa May has attacked big tech companies for not doing more to fight terrorism after this weekend's attack at London Bridge and Borough Market.
  • The companies have responded by highlighting the work they already do.
  • Critics have argued that May's approach fails to get to the heart of the problem, labelling it "misleading" and "intellectually lazy."
  • Secure encryption tech will remain available to terrorists, no matter what action the UK government takes.

LONDON — In the aftermath of Saturday night's terror attack in London, Prime Minister Theresa May has angrily attacked internet companies, accusing them of inadvertently providing support for terrorists.

In a strongly worded statement made the day after the attack that killed seven people, May accused the firms of giving "this ideology the safe space it needs to breed."

The government, she said, needs to "work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning. And we need to do everything we can at home to reduce the risks of extremism online."

Many of the tech companies implicitly blamed — Facebook, Google, Twitter, and so on — have since pushed back. They essentially argue that they already do what May is asking for.

And some critics worry that Theresa May's calls are dangerous, disproportionate, and "intellectually lazy."

Facebook: 'We want ... to be a hostile environment for terrorists'

Facebook, the world's largest social network with more than 30 million UK users, has been quick to highlight the work it already does to combat terrorism.

It prohibits content that supports terrorist activity and lets users report potentially infringing material to human moderators. It also deploys technical measures, such as image-matching technology that checks newly uploaded photos against images already removed from the platform for promoting terrorism. And it contacts law enforcement if it sees potential evidence of a forthcoming attack (or an attempt at human harm more generally).
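Facebook hasn't published the details of that image-matching system, but the general technique is perceptual hashing: reduce every banned image to a compact fingerprint, then compare the fingerprint of each new upload against the banned set. The Python sketch below uses a simple "average hash" to illustrate the idea; the filenames, threshold, and hash choice are illustrative assumptions, not Facebook's actual parameters.

    # Minimal sketch of perceptual-hash matching against banned images.
    # Illustrative only; not Facebook's actual system or parameters.
    from PIL import Image

    def average_hash(path, size=8):
        """Reduce an image to a 64-bit perceptual fingerprint."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        """Number of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    # Hypothetical deny-list built from previously removed images.
    banned_hashes = {average_hash("banned_example.jpg")}

    def is_known_banned(path, threshold=5):
        """Flag uploads perceptually close to a banned image."""
        h = average_hash(path)
        return any(hamming(h, b) <= threshold for b in banned_hashes)

    print(is_known_banned("new_upload.jpg"))

A small Hamming-distance threshold lets the check survive re-encoding or minor edits, which is why a perceptual fingerprint is more useful here than an exact file hash.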

"We want to provide a service where people feel safe," director of policy Simon Milner said. "That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists."

This does not directly address the issue of encryption — more on that shortly.

Here's Facebook's full response:

"We condemn the attacks that took place in London on Saturday night and our thoughts are with the families of the victims and those who are injured. Facebook’s Safety Check was activated by the local community last night. We hope the people in the area found the tool a helpful way to let their friends and family know they are okay.

"We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. We want Facebook to be a hostile environment for terrorists. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it — and if we become aware of an emergency involving imminent harm to someone's safety, we notify law enforcement. Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.

Google: We are committed to 'ensuring terrorists do not have a voice online'

Google's response is largely similar: the company says it already takes action to police content that potentially promotes terrorism. YouTube, its video platform, takes down anything that incites violence and bans any accounts it believes to be operated by agents of foreign terrorist organisations. And once a video is taken down, it is flagged so it can't simply be reuploaded.

The search engine, meanwhile, removes links to illegal content once notified.

Here's the official statement from a spokesperson:

"Our thoughts are with the victims of this shocking attack, and with the families of those caught up in it. We are committed to working in partnership with the government and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online. We are already working with industry colleagues on an international forum to accelerate and strengthen our existing work in this area. We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges."

Twitter: 'Terrorist content has no place on Twitter'

Lastly, here's what Twitter's UK head of public policy Nick Pickles said in a statement: "Terrorist content has no place on Twitter. We continue to expand the use of technology as part of a systematic approach to removing this type of content. We will never stop working to stay one step ahead and will continue to engage with our partners across industry, government, civil society and academia."

A spokesperson also noted that Twitter suspended 376,890 accounts in the six months leading up to December 2016. Of those, 74% were detected by its internal tools, and just 2% stemmed from government requests.

The issue of encryption

May's statement called for the elimination of "safe spaces" where terrorists and terrorist ideology can "breed." It didn't directly mention encryption — but some are interpreting it as a sign that the Conservatives plan to crack down on encryption technology.

Here's what May said:

"Second, we cannot allow this ideology the safe space it needs to breed.

"Yet that is precisely what the internet, and the big companies that provide internet-based services provide.

"We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning.

"And we need to do everything we can at home to reduce the risks of extremism online.

"Third, while we need to deprive the extremists of their safe spaces online, we must not forget about the safe spaces that continue to exist in the real world."

Strong, end-to-end encryption has been increasingly adopted by major tech companies in recent years. It's used in messaging services including Facebook's WhatsApp and Apple's iMessage, among others, meaning the messages can't be intercepted and decoded by anyone else en route, including the companies themselves and law enforcement.
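To make "can't be intercepted and decoded en route" concrete, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library (Python bindings for libsodium). It illustrates only the core principle; real messengers such as WhatsApp layer key ratcheting and other machinery on top, and the names and message here are illustrative assumptions.

    # Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
    # Illustrative only; not any specific messenger's protocol.
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; only public keys are ever shared.
    alice_secret = PrivateKey.generate()
    bob_secret = PrivateKey.generate()

    # Alice encrypts with her secret key and Bob's public key.
    sending_box = Box(alice_secret, bob_secret.public_key)
    ciphertext = sending_box.encrypt(b"hello bob")

    # A relaying server sees only ciphertext. Without one of the two
    # secret keys it cannot decrypt, which is why the provider itself
    # has no plaintext to hand to law enforcement.
    receiving_box = Box(bob_secret, alice_secret.public_key)
    print(receiving_box.decrypt(ciphertext))  # b'hello bob'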

This design keeps users' data secure — but some fear that the companies are enabling terrorists and other criminals in the process. Privacy advocates counter that there's no alternative if you want to keep ordinary people safe: there's no such thing as a backdoor that can only be used by the good guys, and any attempt to weaken encryption makes everyone's data vulnerable.

Tim Berners-Lee, inventor of the world wide web, made this case after home secretary Amber Rudd issued similar calls in the wake of March's Westminster terror attack. "Now I know that if you're trying to catch terrorists it's really tempting to demand to be able to break all that encryption but if you break that encryption then guess what - so could other people and guess what - they may end up getting better at it than you are," he said.

Removing encryption — if the UK government decided to try to force tech companies to do so in the country — would be a complex, expensive undertaking. And even then, it wouldn't stop people using it: plenty of companies and organisations based outside UK jurisdiction could simply refuse to comply.

Critics: May's proposal is 'intellectually lazy'

May has faced some direct criticism over her remarks.

Open Rights Group, a London-based digital liberties group, asked for more information about how these proposals would work in practice. Its executive director, Jim Killock, wrote:

"It is disappointing that in the aftermath of this attack, the Government’s response appears to focus on the regulation of the Internet and encryption.

"This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.

"But we should not be distracted: the Internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused. While governments and companies should take sensible measures to stop abuse, attempts to control the Internet is not the simple solution that Theresa May is claiming."

That second point is worth highlighting. Encryption tools are easily available right across the world. They will remain so, regardless of how the public content on the platforms of big tech companies is regulated.

As King's College London professor Thomas Rid wrote on Twitter: "Focus on 'big companies' is misleading. A range of secure comms channels will remain available to militants no matter what big firms do."

Peter Neumann — another professor at King's College London — also tweeted on the subject. "On 'Islamist extremism', she failed to spell out specific measures. So how is this different from what we've been hearing for 6 years?" he wrote. "Most jihadists are now using end-to-end encrypted messenger platforms e.g. Telegram. This has not solved problem, just made it different ... Moreover, few people radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy."
