American tech companies have long been bullish about India's potential to grow their fortunes. India is the world's second-largest internet market after China, and the largest where industry giants Facebook and Google operate. But until this year, those companies and others hadn't really addressed the flip side: the havoc created via their platforms in India and other nations where many millions of people have newly come online.
In the last two years alone in India, mobs largely reacting to rumors spread on Facebook’s WhatsApp, which has more than 200 million active users in India, have lynched more than 30 people; fake news and propaganda have flourished on Facebook; harassment, abuse, and political propaganda on Twitter have reached new highs; and fake videos and conspiracy theories have flooded YouTube.
Yet besides some baby steps like the time when Facebook ran newspaper ads in India in 2017 giving people tips on spotting fake news, American tech companies had been mostly silent about the problems plaguing their services outside the US. As recently as late last year, Twitter, for instance, declined to talk about the specifics of how it was tackling abuse and harassment in India. And WhatsApp wouldn’t comment on the record about its misinformation problem in the country.
That changed in 2018. Faced with increased government scrutiny ahead of major national elections in India in 2019 — and global backlash for fueling mayhem in countries like France, Myanmar, Sri Lanka, and the Philippines — tech companies have been forced to show in earnest that they’re at least trying to stave off malicious activity internationally as well as at home.
In April, Facebook announced that it would start fact-checking for fake news in India. At various points over the year, CEO Mark Zuckerberg said that the company was committed to preventing interference in the 2019 Indian elections. In October, the company said it would set up an election task force consisting of security and content specialists who would try to understand election-related abuse in India. And earlier this month, the company started requiring anyone who wants to run political ads on the platform in India to confirm their identity and location first — something it began earlier this year in the US, the UK, and Brazil.
“We are committed to maintaining elections integrity and this is something we take very seriously,” a Facebook spokesperson told BuzzFeed News. “We have been adding to our capabilities and defenses over the last few months and have done a lot of work to prevent interference and protect the integrity of elections. We have updated our policies to require more thorough documentation from advertisers who want to run election-related ads on the platform.”
WhatsApp, under fire from both the general public and the Indian government for its role in recent lynchings, has restricted message-forwarding in India to five people or groups; removed a “quick forward” button for images and videos; launched a series of ad campaigns around spotting misinformation; collaborated with NGOs and mobile carriers to spread on-the-ground awareness; funded academic research papers; and flown in senior executives from Menlo Park, California, multiple times to meet with Indian government officials. A WhatsApp spokesperson pointed BuzzFeed News to a page recently published by the company that lists the steps it has taken in India this year “to address the harmful consequences of misinformation.”
Google announced more human oversight in India to prevent conspiracy videos from trending on YouTube, and in June, said that it would train 8,000 Indian journalists in seven languages to spot instances of fake news on the internet. In a statement to BuzzFeed News, Satya Raghavan, head of partnerships for YouTube in India, said the company has changed its algorithms “to surface credible content, built new features that clearly label and prominently surface news sources on our homepage and search pages, and introduced information panels to help give users more sources where they can fact check information for themselves.”
Twitter admitted last month that it needed to do a better job of protecting India's minorities from abuse on the platform and is currently looking at building tools that would let Indian minorities report harassment. "Improving the health of the public conversation is the number one priority for our company," a Twitter spokesperson told BuzzFeed News. "From our CEO down, it's part of everyone's job at Twitter."
It's a change from previous years, when American technology platforms largely spoke in starry-eyed language about connecting people in all corners of the globe and cast India, a country where only about half of its 1.3 billion citizens are online, as merely the next chapter in a global growth story. All connectivity, and by extension growth, was "de facto good."
To be sure, India’s government and regulators have sometimes reined in the platforms, like in 2016 when they banned Facebook’s Internet.org program, which promised to bring millions of Indians online for free, for violating net neutrality.
Still, critics have said tech companies’ recent efforts to get their acts together are too little, too late, and that these policy and product tweaks aren’t likely to fix deep-seated problems ahead of India’s elections.
WhatsApp's marketing blitz against misinformation, for instance, doesn't target people in remote places with little access to media, where most of the lynchings have happened. Details about Facebook's election "task force" are scant. And Twitter's sudden awakening about the abuse that Indian minorities face on the platform came at the same time its CEO, Jack Dorsey, kicked up a caste controversy during his first trip to the country.
Tech companies still have a long way to go to fix the unique problems that have arisen as millions of Indians, and other people from around the world, sign up for their services — but we can point to 2018 as the year Silicon Valley finally started to take notice that the impact it has on all these new users isn’t entirely rosy.