Designed for Censorship
To fully understand information control in China, we need to go beyond the familiar notions of content deletion and propaganda. A more subtle but equally powerful tool can be found in the architecture of social media platforms operating in China, which are systematically designed to guide and limit user interactions. The implications reach far beyond China and therefore merit scrutiny.
The video-conferencing software Zoom rapidly became the subject of a major political controversy after Chinese authorities asked the company to terminate sessions commemorating the Tiananmen Square crackdown. The controversy ultimately stems from the challenges cross-border platforms face in dealing with differing local censorship requirements. But Zoom is not the only platform walking the tightrope between Chinese and international interests: platforms such as LinkedIn have operated in both markets for years. Closer scrutiny of LinkedIn, too, reveals unsettling new ways of controlling “undesired” content – i.e. anything that endangers the stability of the CCP. Instead of relying wholly on censorship, the digital design of social platforms – features, visual design and user interface – has become a political tool to structurally constrain information flows.
On LinkedIn’s Chinese app, for example, some features available in the international version are strategically disabled for being potentially sensitive. One might attribute many of these choices not to political motivations but simply to the commercial interests of the platform. Yet closer scrutiny suggests they are not merely coincidental. Compared to the international version, LinkedIn China has disabled the “groups” function and the news feed, and does not allow users to upload videos or PDF and Word attachments. The nature of these features points to political reasons: the news feed is likely to contain political content, videos and attachments are technically more difficult to censor, and groups are the most effective tools for organizing collective action. Rather than attempt censorship and risk a backlash similar to Zoom’s, LinkedIn outright disabled these functions for Chinese users.
Dark motives behind digital design
This strategic digital design is also an approach favored by China’s most popular social media platform: WeChat. WeChat is inherently more difficult to censor than other social media in China, as most interaction happens through real-time private chats. This means that only automated systems can censor content quickly enough to be meaningful. While these algorithms exist, they are not always accurate, especially when users employ creative language to circumvent them – recent examples include “2020-1997=50” to refer to the Hong Kong National Security Law, or the use of the character 占 during the 2019 commemoration of the Tiananmen protests to symbolize tanks. When it comes to censorship of images and videos, algorithms generally have even greater difficulties – although such censorship does occur.
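The weakness of keyword-based filtering can be illustrated with a minimal sketch. The code below is a hypothetical, simplified blocklist filter of my own construction – not any platform’s actual system – showing how a verbatim phrase match catches a direct mention but misses the arithmetic euphemism cited above.

```python
# Illustrative sketch of a naive keyword blocklist filter (hypothetical,
# not WeChat's actual system). Verbatim matching catches direct mentions
# but misses creative circumventions like "2020-1997=50".

BLOCKLIST = {"national security law", "国安法"}

def is_blocked(message: str) -> bool:
    """Flag a message if it contains any blocklisted phrase verbatim."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKLIST)

# A direct mention is caught...
print(is_blocked("Thoughts on the National Security Law?"))  # True
# ...but the arithmetic euphemism slips through.
print(is_blocked("What does 2020-1997=50 mean for us?"))     # False
```

Real-world systems are more sophisticated, but the cat-and-mouse dynamic is the same: each new euphemism must be discovered and added to the list after the fact.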
Instead, WeChat is fundamentally designed to restrict the public broadcasting function of the network without needing heavy censorship or algorithms. Group chats are limited to 500 members, making it difficult to transmit information to large audiences. WeChat’s “Moments” – a Facebook-like news feed with updates posted by people in your circles – only displays content posted by one’s friends. In the comment boxes below, only replies by mutual friends are visible. Videos posted on Moments can only be around 10-15 seconds long; good for a quick “glimpse of life” but ill-suited for substantive content. All these design choices limit the transmission of possibly sensitive information to large groups, and thus the chances of large-scale protests being organized through the platform.
In cases where the design of functions has not proven sufficient, there is another option: disabling them outright. In 2012 the government forced WeChat to disable its comments function altogether in response to rumors about a coup. In 2018, when the CCP’s announcement of the removal of presidential term limits prompted Chinese netizens to change their profile pictures to an image of Winnie the Pooh being crowned, platforms such as WeChat and Weibo disabled the ability to change one’s profile picture. Just this June, Weibo was forced by the government to freeze its “hot topics” list.
Much more subtle than censorship
This approach is much more subtle than censorship while having immense potential for information control. And while these instances are specific to China, the use of these tools does not stop there. There is an acute need to take a closer look “under the hood” of social media platforms generally. During the COVID-19 epidemic, WhatsApp limited the forwarding of messages to stop the spread of fake news. Other platforms are also seeking ways to stop the spread of fake news and other potentially harmful content. These are generally positive or at least well-intended moves, but they can have severe inadvertent effects. YouTube’s algorithms recently removed anti-CCP comments from its comment feed, and Facebook blocked fact-based sources on COVID-19. Platforms worldwide are also under pressure from advertisers to sanitize their platforms of content that might not be “advertiser-friendly”, including “subjects related to war, political conflicts, natural disasters and tragedies”; one Chinese platform even censored information that could have harmed its stakeholders.
The examples from China teach us that digital design significantly shapes users’ ability to inform themselves and organize collective action. But it should also be clear that censorship is not limited to political or authoritarian contexts. Both in China and beyond, we must therefore scrutinize the workings under the hood of social media platforms to understand how they affect one of our most cherished rights: freedom of expression.
About the Author:
Vincent Brussee is pursuing an MA in Asian Studies at Leiden University, focusing on Chinese digital society and cyber-governance. He gained professional experience at the Consulate General of the Kingdom of the Netherlands in Shanghai. He holds a BA in International Studies with a specialization in East Asian politics and political economy from Leiden University. He was an intern at the Public Policy and Society team at MERICS from April to June 2020.
The views expressed in this article are those of the author and do not necessarily reflect those of the Mercator Institute for China Studies.