YouTube: Now playing, everywhere

Can the world’s biggest video-sharing site police itself?

Below are some excerpts from a big article in The Economist which surveys the issues and difficulties of regulating social media generally and YouTube in particular.  It essentially comes to no conclusions.  I think one conclusion is possible, however.

On all social media platforms, the administrators are constantly urged to ban "hate speech".  But it cannot be done -- for the simplest of reasons: One man's hate-speech is another man's fair comment, or even part of his religion.  The obvious recourse in that situation is NOT to censor at all.  And that was the initial policy of some sites.

Fascist attitudes are, however, much more common than tolerant ones, and the torrent of attack and abuse directed at site administrators had to have an effect.  All administrators have now been trying to please everyone.  They have, however, discovered a version of an old political formula:  You can please all of the people some of the time, and some of the people all of the time, but you cannot please all of the people all of the time.

So acceptable censorship of social media sites is an impossible task.  All we can hope for is some compromise that is not wholly unreasonable.

But if we cannot reasonably regulate ALL of the content on a site, can we reasonably regulate SOME of the content satisfactorily?  I think we can.  I think we can regulate it in a way that avoids political bigotry.  That is a much smaller ask than regulating everything, but it should be possible.

What I propose is a variant on the ancient Roman Tribunus plebis.  A tribune is someone appointed to safeguard the interests of a particular group.  I think social media platforms should appoint two tribunes -- one for the Left and one for the Right.  And NO content should be deleted without the approval of BOTH tribunes.  Each tribune would need a substantial staff, and he should be free to choose and train that staff himself.  The tribune himself (or herself) should be appointed by the head of the relevant party in the Federal Senate.
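
As a purely illustrative sketch of that dual-approval rule, the snippet below shows the logic in Python.  The names (Tribune, approves_removal, may_delete) are hypothetical and invented for this example; they do not describe any platform's real system.

```python
# Hypothetical sketch of the dual-tribune rule: content may be removed
# only when BOTH the Left tribune and the Right tribune approve.
from dataclasses import dataclass

@dataclass
class Tribune:
    """A reviewer appointed to safeguard the interests of one side."""
    side: str  # "left" or "right"

    def approves_removal(self, content: str) -> bool:
        # Stand-in for a human judgement by the tribune and his or her staff.
        # For the example, content carrying a marker counts as approved for removal.
        return "[flagged]" in content

def may_delete(content: str, left: Tribune, right: Tribune) -> bool:
    """The proposed rule: NO content is deleted without the approval of BOTH tribunes."""
    return left.approves_removal(content) and right.approves_removal(content)

# Usage: deletion goes ahead only if the two tribunes independently agree.
left, right = Tribune("left"), Tribune("right")
print(may_delete("[flagged] example post", left, right))  # True only when both approve
```

The point of the design is the conjunction: either tribune alone can veto a deletion, which is what makes the arrangement resistant to one-sided censorship.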

That should do the trick.


YouTube’s immense popularity makes the question of how best to moderate social-media platforms more urgent, and also more vexing. That is partly because of the view taken in Silicon Valley, inspired by America’s right to free speech guaranteed by the First Amendment, that platforms should be open to all users to express themselves freely and that acting as a censor is invidious. With that as a starting point, platforms have nevertheless regulated themselves, recognising that they would otherwise face repercussions for not acting responsibly. They began by setting guidelines for what could not be posted or shared -- targeted hate speech, pornography and the like -- and punished violators by cutting off ads, not recommending them and, as a last resort, banning them.

As governments and regulators around the world have started to question the platforms’ power and reach, and advertisers have pulled back, the firms have gradually tightened their guidelines. But by doing so they have plunged deeper into thorny debates about censorship. Last year YouTube banned certain kinds of gun-demonstration videos. In January the platform said it would no longer recommend videos that misinform users in harmful ways, like certain conspiracy theories and quack medical cures. It also banned videos of dangerous pranks, some of which have caused children to hurt themselves. On April 29th Sundar Pichai, boss of Google, declared, in an earnings announcement that disappointed investors, that “YouTube’s top priority is responsibility”. He said there would be more changes in the coming weeks.

Governments meanwhile are taking direct action to curb content that they deem inappropriate. On April 21st, after bombings in Sri Lanka killed 250 people, its government took the draconian step of temporarily banning social-media sites, including YouTube, to stop what it called “false news reports”. After the Christchurch massacre in New Zealand, Australia passed a hastily written law requiring platforms to take down “abhorrent violent material” and to do so “expeditiously”. Even in America, where social media has been largely unregulated, members of Congress are drafting measures that would give significant powers of oversight to the Federal Trade Commission and restrict how online platforms supply content to children, an area where YouTube is especially vulnerable.

Ms Wojcicki says she needs no persuading to take further action against unsavoury material. Yet YouTube does not plan to rethink the fundamental tenets that it should be open to free expression, that people around the world should have the right to upload and view content instantly (and live), and that recommendation algorithms are an appropriate way to identify and serve up content. What is needed, she says, is a thoughtful tightening of restrictions, guided by consultation with experts, that can be enforced consistently across YouTube’s vast array of content, backed by the power of artificial intelligence.

Video nasties

YouTube’s record thus far does not inspire much confidence. Children’s programming, one of the most popular sorts of content, is a case in point. Parents routinely use their iPads or smartphones as baby-sitters, putting them in front of children and letting YouTube’s autoplay function recommend and play videos (see chart 3). Children are served up nursery rhymes and Disney, but sometimes also inappropriate content and infomercials.

YouTube has acted more decisively in other circumstances. Its crackdown on terrorist-recruitment and -propaganda videos in early 2017 used machine learning and newly hired specialists. There was an obvious incentive to do it. In what became known as “Adpocalypse”, big firms fled after learning that some of their ads were running with these videos, essentially monetising terrorist groups. There have been a couple of sequels to Adpocalypse, both related to children’s content, and both first uncovered by outsiders. This adds to the impression that YouTube lacks a sense of urgency in identifying its problems, and responds most rapidly when advertisers are aggrieved.

Ms Wojcicki disputes this, saying she began to recognise the increasing risks of abuse of the platform in 2016, as it became clear more people were using YouTube for news, information and commentary on current events. She says that was when she started to focus on “responsibility”. In 2017, as a result of Adpocalypse, she began expanding the firm’s staff and contractors focused on content issues; they now number more than 10,000, most of them content reviewers. Chris Libertelli, the global head of content policy, says that Ms Wojcicki and Neal Mohan, the chief product officer, have told him there are no “sacred cows” in deciding what content should be limited, demonetised or banned. Ms Wojcicki says that with wiser and tighter content policies, and the company’s technology and resources, she and YouTube can solve the problems with toxic content.

Everything in moderation

While the need for regulation might be clear, the details of what should be regulated, and how, are messy and controversial. Few free-speech advocates, even in Silicon Valley, are zealous enough to want to permit beheading videos from Islamic State or the live-streaming of massacres. Yet most of the questions about content moderation that YouTube wrestles with are much less clear-cut. YouTube appears to be weighing whether to ban white nationalists, for example. If it does so, should the site also ban commentators who routinely engage in more subtle conspiracy theories meant to incite hatred? Should it ban popular personalities who invite banned figures to “debate” with them as guests? Ms Wojcicki is conscious of the slippery slope platforms are on, and fears being criticised for censorship and bias.

Another important question will be how to go about enforcing restrictions. When you serve a billion hours of video a day the number of hard calls and “edge cases”, those that are hard to categorise, is enormous. The tech firms hope that AI will be up to the job. History is not reassuring. AI has been trained for straightforward tasks like spotting copyright violations. But even with low error rates the volume of mistakes at scale remains immense. An AI capable of reliably deciding what counts as harassment, let alone “fake news”, is a pipe dream. The big platforms already employ thousands of human moderators. They will have to hire thousands more.
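
A back-of-the-envelope calculation makes the point about scale.  The figures below are assumptions chosen purely for illustration, not YouTube's actual numbers.

```python
# Illustrative arithmetic: even a small error rate produces a huge
# absolute number of mistakes at platform scale.  Both figures are assumed.
decisions_per_day = 10_000_000   # assumed number of moderation calls made per day
error_rate = 0.01                # assumed 1% of those calls are wrong

mistakes_per_day = decisions_per_day * error_rate
print(f"{mistakes_per_day:,.0f} wrong calls per day")  # -> 100,000 wrong calls per day
```

Under those assumptions, a 1% error rate still means a hundred thousand wrong calls every day, which is why the platforms cannot lean on AI alone and keep hiring human moderators.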

Given the complexities, wise governments will proceed deliberately. They should seek data from platforms to help researchers identify potential harms to users. Regulations should acknowledge that perfection is impossible and that mistakes are inevitable. Firms must invest more in identifying harmful content when it is uploaded so that it can be kept off the platform and—when that fails—hunt for it and remove it as quickly as possible. With the great power wielded by YouTube and other social-media platforms comes a duty to ensure it is used responsibly.

More HERE
