A Taiwan-based non-profit organization has issued an alert over suspicious social media accounts and fake videos it says may have been generated by artificial intelligence. The development comes ahead of Taiwan's presidential election to be held on January 13.
Doublethink Lab, which investigates disinformation, says it identified suspicious social media posts after it was revealed in November that Taiwan plans to sign an agreement to accept workers from India.
On the social media platform X, formerly Twitter, at least 130 comments about the agreement appeared in half a day. The posts included racist remarks about Indian people and criticism of the policies of the administration of President Tsai Ing-wen. They also expressed opposition to the acceptance of Indian workers.
NHK analyzed the posts with a tool for detecting fake information. It found that almost all of the accounts' profile pictures were fake, made with images taken from elsewhere online. Some of the images may have been created with generative AI.
Because the accounts were created only several months ago and much of the posted text is similar, the NPO concluded that the posts were published automatically by AI or other systems, with the aim of fueling social anxiety and division to influence the presidential election.
An audio file of roughly one minute, in which a candidate calls another candidate arrogant, and a video in which a candidate appears to praise other candidates are also spreading online. Analysis by NHK suggests these materials could also be fake and generated by AI.
Doublethink Lab CEO Wu Min-hsuan says fake images and news are being created at no cost. Wu says that to limit their spread, careful fact-checking is essential.
Similar Readings (5 items)
AI Adding to Threat of Election Disinformation Worldwide
China-connected spamouflage networks spread antisemitic disinformation
Journalist held in Taiwan over fake opinion polls
Summary: Viral fake videos prompt misunderstanding of bears
Tech giants sign accord to fight AI interference in elections
Summary
Taiwan non-profit Doublethink Lab has flagged suspicious social media accounts and fake videos ahead of the January 13 presidential election, suspecting artificial intelligence involvement. These posts, primarily on platform X, express racist sentiments against Indian workers, criticize the policies of the Tsai administration, and oppose accepting Indian workers.
Source: https://www3.nhk.or.jp/nhkworld/en/news/20231223_17/
Date: Dec. 23, 2023