Would a global cyber ethics commission help ‘counter the lies’ of the tech lobby?

Deutsche Welle

For computer scientist Hany Farid, developing image analysis tools that can stop illegal content online is not enough. He says we all need to take responsibility, not least the companies that lobby and lie hard.

DW: Where are we at with forensic technology for tracking, verifying and removing fake, hateful and other illegal content online?

Hany Farid: The state-of-the-art, cutting-edge stuff is in the hands of a few people around the world. There is no commercial software. Media outlets and governments don’t have access to the types of tools we need.

Now I’m part of a DARPA program — DARPA is a funding agency here in the United States — and we’re trying to take the last two decades of research and techniques and put them into the hands of law enforcement, media outlets, and others, so that they can authenticate content. It’s an incredibly difficult process because it’s a very young science and there’s a huge amount of work to be done. But we have to figure out how we can get these forensic techniques into the hands of the people who can do verification.
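To give a flavour of what such forensic techniques involve: one of the oldest and simplest checks is error level analysis, which recompresses a JPEG and looks for regions whose compression history differs from the rest of the image, a possible sign of splicing. The sketch below is a toy Python/Pillow illustration of that general idea only; it is not one of Farid’s tools or part of the DARPA program he describes.

```python
# Minimal sketch of error level analysis (ELA), one classic image-forensics
# check. Illustrative only -- not one of the tools discussed in the interview.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Recompress a JPEG and return the amplified pixel-wise residual."""
    original = Image.open(path).convert("RGB")
    # Re-save at a fixed JPEG quality in memory, then reload.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    # Spliced-in regions often have a different compression history,
    # so they leave a stronger residual than the rest of the image.
    diff = ImageChops.difference(original, recompressed)
    # Stretch the residual so subtle differences become visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: px * (255 // max_diff))

# Bright areas in the returned image are candidates for closer inspection:
# error_level_analysis("suspect_photo.jpg").show()
```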

I will also say that the social media companies have to change the way they think about their responsibilities.

For example, we saw that horrible shooting in Florida recently, and what was the first thing that happened? The conspiracy theories came out, whipping people up, saying the whole thing was staged with actors and nobody died. And when you go to YouTube and Twitter, there it is on their front page. They’re promoting it. Not intentionally, they’re not bad people, but people are figuring out how to get around [the companies’] fairly dumb algorithms.


Now if the social media companies were doing a better job of moderating this content, making sure the hateful, illegal and absurd content didn’t get promoted the way it does, we’d have less of an issue here.

Is this where your idea of a cyber ethics commission would come in?

Yes, and not only in the US, because this is a worldwide issue. Every country that holds elections is dealing with fake news. So I would love to see this at the United Nations, the EU, or some joint US-EU level. We’ve been running full steam ahead for the last 20 years with the development of technology, and I would be the first to say there are many wonderful things that come from that. But as with everything, there is a dark side to this, and I don’t think we’re talking about it enough. So it would be a centralized body, just talking about the issues before they get so far away from us.

Because what you hear from social media is “These things are so big, we don’t know what to do about it.” And my answer is, “Yeah, but you let it get that big — this is not a bug, it’s a feature!” You can’t build some monstrosity and say “I have no idea how to control it.” We have responsibilities and we should ask these companies to be better corporate citizens.

But would a cyber ethics body have teeth — any power?

There has been a dramatic shift in the way people view Silicon Valley. They have gone from the golden city on the hill to “They’re like the tobacco industry.” And I’m not just talking about the fake news and the manipulation of video and audio; I’m talking about the data breaches and credit card theft that happen because companies can’t be bothered to secure their networks. It’s across the board, and people are getting very frustrated. Citizens are getting frustrated, our leaders are getting frustrated, and the companies that advertise on these platforms are frustrated. And I think the combined pressure can have an effect.


The goal of an advisory board would be to inform people, and help us understand the landscape, because right now tech is the single largest lobbying force in dollars spent in Washington D.C. — and that is not an accident. We need people to stand up with credibility, saying, “The things you’re hearing from these companies are not true.”

Countless times I’ve heard people in Congress say “Google said this” or “Facebook said that”, and it’s an outright lie. So having highly skilled and thoughtful people advising governments at the highest level is not a terrible idea to counter the lobbying forces on Capitol Hill and in the EU.

I don’t suppose you can give me any concrete examples of the lies?

Well, I’d rather not give you a concrete example, but broadly speaking I can give you an example from the area of counter-terrorism. The companies say, “We can’t eliminate this content. It’s too big, complicated, and we don’t know how to define it,” but that’s completely untrue.

Take, for example, the bomb-making video that we know the man behind the 2017 Manchester bombing viewed. That video keeps getting uploaded. YouTube says it violates their terms of service. So why, if they can take down child pornography or copyright-infringing material, which they do very effectively, can they not take down bomb-making or beheading videos on a regular basis? They have the ability to do it, but they choose not to.
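The mechanics behind such takedowns are well understood: platforms fingerprint known content and check every new upload against a blocklist, which is essentially what PhotoDNA does for child-abuse imagery. Below is a minimal sketch of that pattern; it uses an exact cryptographic hash for simplicity, whereas real systems such as PhotoDNA use robust perceptual hashes that survive re-encoding and cropping.

```python
# Minimal sketch of fingerprint-based re-upload blocking. A toy version:
# production systems (e.g. PhotoDNA) use perceptual hashes rather than the
# exact SHA-256 used here, which any re-encoding would defeat.
import hashlib

BLOCKLIST = set()  # fingerprints of content already flagged for removal

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_known_bad(data: bytes) -> None:
    """Add a flagged file's fingerprint to the blocklist."""
    BLOCKLIST.add(fingerprint(data))

def should_block(upload: bytes) -> bool:
    """Check a new upload against the blocklist before publishing it."""
    return fingerprint(upload) in BLOCKLIST
```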

Why do they choose not to do it?

Well, they will say this is a freedom-of-speech issue, which it is not. This is a terms of service issue. They are allowed to take down anything they want and they do it regularly. So here’s what I think it comes down to: In the US, we have something called the Communications Decency Act (CDA), and Section 230 of the CDA says that platforms, like YouTube, Twitter and Facebook, cannot be held liable for the content that their users upload.

If, however, these platforms moved from being a platform to being a publisher, they wouldn’t have that broad protection of CDA 230. But here’s the thing: We’ve already penetrated CDA 230 with the Digital Millennium Copyright Act (DMCA). The DMCA says that if you host copyright-infringing material and fail to remove it when notified, you can be held liable. And guess what, YouTube, Facebook and Twitter have got really good at removing that content. But they play dumb when it comes to child pornography, hoaxes and fake news, and terrorism-related content, because they don’t want to be seen as publishers. Once they get into the business of moderating content, they fear they will lose their CDA 230 protection.

Now the courts have demanded that Congress put an exception into CDA 230 to allow particularly egregious actors to be prosecuted. The legislation was pending, and Google, Twitter, Facebook and Microsoft all lobbied to kill the bill. And they didn’t do it in public, either. They did it through a third party. Why did they do it? Because they see it as a threat to CDA 230.

So we called them out on it. They backed down, and finally that legislation is moving forward, but only after huge resistance from tech. And we’re talking about stopping the worst possible actors in the space — platforms, for example, whose almost exclusive use is to traffic in under-age and illegal prostitution — and Google says, “No, no. We’re okay with that.” These are companies that seem to have absolutely no moral compass whatsoever. They have mottos like “Don’t be evil”, but at the end of the day they are like every other corporation. They are there to maximize their profits, and I think people are just a little tired of it.

Hany Farid is a professor of computer science at Dartmouth College in the US. He has developed a number of forensic tools, especially in the area of image analysis, and worked with Microsoft to develop PhotoDNA, a technology that is now widely used to find and remove child pornography online.
