When AI Face-Swapping Becomes Easier, Does Public Trust Erode?
8 Jan.
Author: WU WENYU
Editor: LIU YITING
[Source: CCTV News, screenshot from Weibo video (edited)]
Recently, Chinese actress Wen Zhengrong posted a remark on Douyin that left many viewers uneasy. She wrote, “If you are Wen Zhengrong, then who am I?” The remark was prompted by the wide circulation on the platform of clips that appeared to show her endorsing various brands via livestream. Many users assumed she was personally promoting the products and paid out of pocket to buy them. It later emerged, however, that the so-called “Wen Zhengrong livestream sales” videos were AI face-swapped forgeries that used her likeness to mislead consumers. What makes the incident unsettling is not only that a celebrity was impersonated, but also that, as the barrier to AI face-swapping continues to fall, portrait and reputation rights are shifting from widely recognised legal protections into moral boundaries that can be tested and, in everyday online circulation, crossed.
Before AI tools became widely accessible, producing a convincing manipulated video typically required significant cost, technical know-how, and professional editing skills. Today, AI tools are everywhere. Their rapid proliferation has moved beyond simple question-and-answer functions and has put capabilities once reserved for specialists into ordinary hands. With only a few prompts, users can make a photo appear to “speak,” alter a person’s actions in a video, and even generate realistic content from a handful of images or a short clip. In fast-scrolling social media feeds, such content can pass unnoticed. When a capability shifts from being held by a few to being tried by many, the risks embedded in the technology do not disappear; they spread. Within this diffusion, a subtler structural issue is the shifting perception of responsibility: some users increasingly frame infringement as something the AI caused, rather than as the result of their own decision to use it. This perception lowers the perceived threshold for wrongdoing while making attribution and enforcement harder in practice, widening the gap between the ease of infringement and the difficulty of accountability.
Against this backdrop, when portrait and reputation rights can no longer be reliably protected within clear legal boundaries, public trust in information is forced to change. In the past, people generally assumed that voices and images carried a high degree of credibility. Today, as the barriers to using AI tools fall, identifying authentic information and verifying real identities has become harder. Audiences can no longer rely on what they “see with their own eyes”; instead, they must cross-check sources and repeatedly verify context to avoid being misled. When such self-protection becomes a precondition for even the most basic trust, it inevitably intensifies fatigue and uncertainty in the public information environment, placing trust itself under strain.
At the same time, what deserves serious attention is that this shift does not stop at isolated “celebrity face-swap” incidents. Scams and misinformation are increasingly wrapped in a video format that looks credible on the surface, which lowers audiences’ guard more effectively than text ever could. In Malaysia, for instance, a recent case involved a public figure, MCA president Datuk Seri Dr Wee Ka Siong, whose likeness was misused through AI face-swapping to fabricate a video luring the public into an investment scheme. The details come from Wee’s official Facebook post dated 23 January 2025, and the case was also reported by The Star.
[Source: The Star]
Even earlier, Malaysia’s Securities Commission (SC) had warned of a similar pattern. In an official media release dated 22 July 2024, the SC stated that it had detected multiple AI face-swapped videos circulating on Facebook, impersonating “well-known individuals” and “well-known companies” to promote investment fraud.
[Source: Securities Commission Malaysia]
These tactics exploit the public’s trust in authority figures and familiar faces, using AI-generated visuals to manufacture an instant illusion of legitimacy for fraudulent ends. What is most regrettable, however, is the time lag in correction. By the time a forged video is debunked, content that has already circulated as “fact” or “evidence” may have done its misleading work and deepened social division, and a correction cannot fully undo either. This brings us back to the central concern: as the boundaries of rights blur and the risks of technological misuse diffuse, the burden of identification placed on the public rises, even at the most basic level of trust.
The greater the challenges we face, the closer we may be to meaningful progress. The public can already see many examples of technology being embraced, such as AI-driven virtual worlds and AI presenters. These developments are controversial, yet they can also be valuable. Pointing out the risks and difficulties does not mean rejecting AI as a technology; rather, it raises the question of how we can advance technological development while safeguarding clear legal boundaries.
As the technology spreads to the general public, platforms, merchants, and the distribution chain should bear clearer responsibilities and higher compliance costs. Platforms need to strengthen identity verification and authorisation review for high-risk content such as livestream selling and investment promotions, and establish fast-track channels for victims of impersonation. They should apply tiered measures to suspected deepfakes, such as warning labels and reduced distribution, rather than letting content “go viral first, get deleted later.” At the same time, traceability should be improved through watermarking, content fingerprinting, and verifiable authorisation mechanisms, so that authenticity is no longer judged by personal intuition alone. More importantly, accountability must return to human decision-making. AI is a tool, not an excuse for exemption: publishers, merchants, and organised distributors should bear responsibility for the consequences of what they circulate. Only when rights can be effectively protected, platforms enforce corresponding rules, and all parties understand the weight of accountability can the public stop relying on perpetual self-defence to maintain the most basic trust.
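To make “content fingerprinting” concrete, here is a minimal Python sketch of the idea, assuming the Pillow imaging library; the file names and matching threshold are illustrative assumptions, not any platform’s actual pipeline. An exact cryptographic hash catches byte-identical re-uploads, while a simple perceptual “average hash” of a video frame tolerates re-encoding and resizing.

```python
# Minimal content-fingerprinting sketch (illustrative only).
# Assumes Pillow is installed: pip install Pillow
import hashlib
from PIL import Image

def exact_fingerprint(path: str) -> str:
    """SHA-256 of the raw bytes: catches exact re-uploads of the same file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def perceptual_hash(image_path: str) -> int:
    """64-bit average hash of a frame: similar images yield similar bits,
    so the fingerprint survives re-encoding, resizing, and mild edits."""
    img = Image.open(image_path).convert("L").resize((8, 8), Image.LANCZOS)
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Hypothetical usage: flag a newly uploaded frame that is near-identical
# to a frame from a video already reported as a face-swap forgery.
# known = perceptual_hash("reported_forgery_frame.png")
# new = perceptual_hash("new_upload_frame.png")
# if hamming_distance(known, new) <= 10:  # threshold is an assumption
#     print("Possible re-upload of reported content; route to review.")
```

Real platforms combine many such signals; the point of the sketch is only that “fingerprinting” means comparable, machine-checkable identifiers rather than human intuition.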
Returning to the initial question: when the barrier to AI face-swapping falls, will public trust disappear completely? The answer is never simply “yes” or “no.” Nor should the answer be to keep the public permanently on the defensive about the most basic trust in information. Rather, it is to make “authenticity” a shared understanding that is traceable, maintainable, and accountable. At the same time, as young people in a post-truth era, we need not only the ability to embrace new tools, but also sustained caution toward images and information, and the courage to question AI outputs. For when truth cannot be traced, questioning is no longer encouraged, and verification is impossible, what collapses first is not the technology itself, but the foundation of public trust.