In 2019, Chinese researcher Li Bicheng drew attention to the potential of using AI to manipulate public opinion, proposing an army of AI-controlled fake online personae that could shape consensus on important issues. Because Li is a former researcher at the top information warfare research institute of the People's Liberation Army (PLA), his ideas should be regarded as a warning of an impending flood of Chinese influence operations using AI across the web.
Recent reports by Meta, the parent company of Facebook, revealed that pro-Beijing content from groups linked to the Chinese government has already inundated Western social media. One Chinese network, relying on click farms in Vietnam and Brazil, attracted more than half a million followers on Facebook. Although these operations still appear to be run by humans and to have limited real-world impact, the introduction of generative AI poses a significant threat.
Generative AI could revolutionize China's social media manipulation efforts, making them both more effective and cheaper. Traditional methods rely on content farms and human labor to create and promote content, whereas generative AI would allow believable content to be produced at scale for a largely fixed cost. A recent Microsoft report confirmed that China-affiliated actors have already begun using AI-generated images, underscoring the country's readiness to adopt the technology.
Building AI models for such manipulative purposes is already relatively inexpensive, and the cost is likely to fall further. An experiment by a researcher known as Nea Paw demonstrated that a fully autonomous account could be created with publicly available AI tools for just $400. Generative AI designed to mimic human behavior, rather than to act like a traditional bot account, gives the Chinese Communist Party (CCP), Russia, Iran, and other bad actors the power to shape global conversations in unprecedented ways.
Xi Jinping, the General Secretary of the CCP, has long emphasized the importance of leveraging technology to shape public opinion. In remarks at a CCP Politburo collective study session, Xi highlighted the need to create a favorable external public opinion environment for China. He expressed satisfaction with China's current influence on global public opinion but stressed that the CCP's ambitions go further.
The Chinese military has been researching synthetic information and its potential use in shaping narratives and running disinformation campaigns. The advent of generative AI offers the PLA an unprecedented opportunity to manipulate social media at scale with content of near-human quality, increasing its ability to influence foreign audiences and fuel political firestorms, as seen in the 2017 disinformation campaign regarding religious regulations in Taiwan.
Mitigating this threat will require collaboration between social media platforms and the U.S. government. Platforms should strengthen their defenses against inauthentic accounts that spread disinformation, making it harder for malign actors to create new accounts. The U.S. government, for its part, should consider revising export controls to cover the hardware needed to train the large language models at the core of generative AI.
Given the urgency of the issue, the U.S. government and social media platforms must work together closely, particularly in the run-up to the 2024 elections. While it may not be possible to fully regulate the use of generative AI, proactive measures can be taken to protect the integrity of online information and shield democratic processes from manipulation.
The threat is real, and failure to act swiftly could have far-reaching consequences for global democracies. It is imperative that we stay vigilant and resilient in the face of this emerging challenge.
