In what has been called an unfolding “tech backlash,” officials in India have recently issued proposals to impose various regulations on tech giants such as Amazon, Facebook, Twitter, and Google. A recent Pacific Council teleconference discussed the current situation and its implications for internet regulation worldwide.
Mishi Choudhary, legal director of the Software Freedom Law Center, and Emma Llansó, director of the Free Expression Project at the Center for Democracy & Technology, provided context on the current situation, while Alexander Abdo, litigation director at the Knight First Amendment Institute at Columbia University, moderated the discussion.
Listen to the full conversation below:
In India, momentum for increased regulation had been building in the lead-up to the country’s nationwide elections in 2019. While Choudhary explained that most of these initiatives are not yet detailed, they importantly lay out the government’s “political vision” of its technological aspirations.
A significant driver of this vision is to make India a global center of software products by 2025. Although the country’s technology sector is valued at $168 billion, only $7.1 billion of this revenue is brought in by software. To increase this revenue inflow, Choudhary explained, officials are looking to a model set by their Chinese neighbors.
“That [software product conversation] really is driving a lot of conversation within the country, which is the market share Indian companies have, and what or what not they have been able to create in contrast to our neighbors across the Himalayas, the Chinese,” said Choudhary.
Choudhary added that there is some “envy” as to how China has been able to “limit its borders” and incubate its own technology giants. India, which she says is not home to as many globally recognized brand names, is turning to regulation in order to catch up.
“At least the motivation in India seems to be for the past year or two that we could also create a level playing field for the domestic players and perhaps regulate the foreign players,” she added.
Key to creating this level playing field, she noted, is to enact policies that monetize the wealth of data gathered from its large population (currently about 1.3 billion). Stakeholders also hope to see 10,000 start-ups funded within the country by 2025, provide upskilling opportunities to technology personnel, and leverage its “Centres of Excellence.”
“Safe harbor” laws, under which technology companies are not legally obligated to regulate user-generated content on their platforms, have been the subject of significant debate in India. While India does have safe harbor provisions in place for digital intermediaries (such as social media platforms), a 2015 decision by the Supreme Court of India stipulated that companies are required to respond to government or court orders. Furthermore, draft intermediary guidelines are attempting to change how this framework operates.
Notably, these guidelines expand intermediaries’ responsibility over user-generated content: they require “proactive” monitoring of content and the “swift removal,” within a 72-hour time limit, of content found “objectionable.” This “one size fits all” approach, Choudhary stated, constitutes the government’s response to misinformation spread on internet platforms, especially WhatsApp, and its sometimes dangerous consequences.
Llansó explained critics’ contention that regulations increasing technology companies’ responsibility for content posted on their platforms may favor censorship and impair free speech around the world. This global debate poses the fundamental question, “Do these intermediaries need to have this legal shield anymore?”
Llansó sees intermediary liability laws as crucial measures that protect speech in the 21st century. Companies facing legal liability for failing to remove problematic content may cast too wide a net over speech posted online.
“At the end of the day, if a content host, search engine, or domain name provider faces the threat of a lawsuit for hosting somebody’s speech, they will probably censor that speech,” she said.
While much must be done to detect and remove hate speech, disinformation, and terrorist activity online, unclear articulations of these terms complicate the task. Technology companies’ terms of service often set boundaries on what content is considered acceptable on their platforms; when regulators describe only nebulously what constitutes illegal content and what constitutes free speech, companies are pushed to define these terms themselves.
This deciding role has traditionally been in the legal system’s jurisdiction, noted Llansó. But when left without explicit intermediary liability protections, companies devise their own methods for complying with regulations. While many have hired staff to remove content, “speed of removal” requirements pose challenges to human personnel. A draft EU regulation, for example, would have imposed a one-hour response time on companies, which Llansó said is unlikely to provide staff with enough time to digest an inquiry.
More and more companies are addressing this challenge by utilizing proactive content filtering measures. However, automated processes can be faulty, Llansó added: “It’s certainly not a perfect system as companies are using these sorts of tools on their own, but to have the use of them mandated in law really starts to compound the risks that automated tools will lead to over-enforcement of the law and overbroad removal of people’s lawful speech.”
An ongoing challenge in this global debate over safe harbor and intermediary liability is that views toward free speech differ around the world. Choudhary noted that these conversations play out differently in India, and that when looking at the global impact of technology regulations, one “cannot really in good conscience always make the blanket free speech argument” that dominates debate in the United States.
Efforts to enact solutions to this debate are also subject to our current understanding of relatively new technology, Llansó added. “Part of the dysfunction that we’re seeing is that we’re still grappling with [the question of], ‘How do you actually create a productive, positive, useful environment for free expression out of this internet thing?’” she said.
In India, policies responding to this broad, existential question are likely to solidify in the coming months, especially after newly elected officials are sworn into office in June 2019. Globally, the most effective solutions, Choudhary and Llansó noted, are likely to balance global norms, clarity, and transparency.
“We need more information from companies about what’s actually going on when they enforce their policies,” Llansó said. “But we also need more sophistication among regulators and other policymakers, and hopefully civil society and academia, in how to actually evaluate information that we might get from the companies about these issues.”
Nicole Burnett is the Summer 2019 Communications Junior Fellow.
The views and opinions expressed here are those of the speakers and do not necessarily reflect the official policy or position of the Pacific Council.