China has released a new government policy designed to prevent the spread of fake news and misleading videos created using artificial intelligence, otherwise known as deepfakes. The new rule, reported earlier today by Reuters, bans the publishing of false information or deepfakes online without proper disclosure that the post in question was created with AI or VR technology. Failure to disclose this is now a criminal offense, the Chinese government says.
The rules go into effect on January 1st, 2020, and will be enforced by the Cyberspace Administration of China. “With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people’s interests, creating political risks and bringing a negative impact to national security and social stability,” the CAC said in a notice to online video hosting websites on Friday, according to the South China Morning Post.
China’s stance is a broad one, and the Chinese government appears to be reserving the right to prosecute both users and the image and video hosting services that fail to abide by the rules. But it does mirror similar legislation introduced in the US to combat deepfakes.
China’s policy is broad, but it mirrors similar legislation in the US
Last month, California became the first US state to criminalize the use of deepfakes in political campaign promotion and advertising. The law, called AB 730 and signed by Gov. Gavin Newsom, makes it a crime to publish audio, imagery, or video that gives a false, damaging impression of a politician’s words or actions. California’s law does not use the word deepfake, but it’s clear the AI-manufactured fakes are the primary culprit, along with videos misleadingly edited to frame someone in a negative light.
California’s approach does exclude news media, as well as parody and satire; its sole aim for now is to prevent the potential damage that deepfake attack ads could cause in the run-up to an election. The law applies to candidates within 60 days of an election, and it is set to expire in 2023 unless explicitly reenacted.
Congress is also in the process of analyzing the potential harm of deepfakes and how best to combat their influence in the upcoming 2020 presidential election. The House Intelligence Committee held a hearing on the subject after convening a panel of experts from universities and think tanks to come up with a deepfake strategy with regard to election integrity and security. There are also numerous pieces of legislation moving through Congress at the moment that would require special watermarks on or disclosures around fake or misleading media, and that would criminalize the creation and distribution of such videos.
On the US platform side, Facebook and Twitter are in the process of creating better tools for detecting deepfakes and reducing the spread of such videos and imagery across their respective platforms. Twitter said this month that it was drafting a deepfake policy after a number of high-profile incidents, including the viral spread of a misleadingly edited video of House Speaker Nancy Pelosi, highlighted how vulnerable the company’s platform is to misinformation of this variety.
Facebook, which also faced criticism for failing to stop the spread of the Pelosi video, has begun developing technology to detect deepfakes, but it has notably refused to remove such videos, in line with its policy on speech. Similarly, Facebook has come under fire for allowing politicians to knowingly lie in advertisements, opening up the future possibility of deepfake political ads in the absence of federal legislation. CEO Mark Zuckerberg has said his company does not want to regulate speech on the platform. Twitter took the opposite stance and announced an outright ban on all political advertising last month.