
ByteDance’s Doubao AI Under Fire for Non-Consensual Deepfake Generation

3 min read · Verified by 3 sources

Key Takeaways

  • A grassroots investigation has revealed that ByteDance's Doubao AI is being systematically exploited to create non-consensual pornographic deepfakes of women.
  • The use of 'fenjue' coded prompts to bypass safety filters highlights a critical regulatory gap in China's rapidly expanding generative AI market.

Mentioned

ByteDance (company) · Doubao (product) · DeepSeek (product) · Free Nora (organization) · QuestMobile (company) · Baidu (company, BIDU) · Bilibili (company, BILI) · Weibo (company, WB) · Telegram (product) · Fenjue (technique)

Key Intelligence

Key Facts

  1. Doubao leads the Chinese AI market with 155 million weekly active users as of December 2025.
  2. DeepSeek, a primary competitor, trails with 81.6 million weekly active users.
  3. Users are using 'fenjue' (coded prompts) to bypass Doubao's safety filters for pornographic content.
  4. Free Nora volunteers infiltrated Telegram groups to document the systematic exploitation of the AI.
  5. The investigation labels the activity 'digital public shaming' targeting ordinary women.
Metric                 Doubao                            DeepSeek
Weekly Active Users    155 million                       81.6 million
Primary Issue          Weak moderation / safety bypass   Market competition
Parent Entity          ByteDance                         DeepSeek-AI
AI Safety & Regulatory Outlook

Analysis

The rapid proliferation of generative AI in China has hit a significant ethical and regulatory wall as ByteDance’s Doubao, the country’s most popular AI chatbot, faces allegations of facilitating 'digital public shaming.' A report by Free Nora, a feminist media collective, has exposed a sophisticated underground ecosystem in which users exploit Doubao’s large language model to generate non-consensual pornographic images of real women. This development underscores a growing crisis in AI safety, where the speed of product deployment has far outpaced the implementation of robust moderation guardrails.

At the heart of the controversy is the scale of Doubao’s reach. With 155 million weekly active users as of late December, Doubao is the dominant player in the Chinese AI market, significantly ahead of competitors like DeepSeek, which maintains 81.6 million weekly active users. The sheer volume of traffic on Doubao makes it a high-stakes environment for safety failures. According to the investigation, the platform’s safeguards are being systematically circumvented through a coded system known as 'fenjue.' This term, borrowed from Chinese fantasy literature to describe secret techniques, refers to a set of prompts specifically designed to trick AI safety barriers into generating prohibited content.
The investigation by Free Nora’s volunteers involved infiltrating anonymous Telegram groups, where users collectively fine-tuned these bypass methods. This highlights a broader trend in the AI sector: the emergence of 'shadow communities' that treat safety filters as puzzles to be solved rather than ethical boundaries. For startups and venture capitalists, this signal is clear: the era of 'growth-at-all-costs' in generative AI is meeting a hard regulatory reality. In China, where the Cyberspace Administration of China (CAC) has already established some of the world’s first generative AI regulations, the failure of a flagship product like Doubao to prevent such egregious misuse is likely to trigger a new wave of enforcement and mandatory safety audits.

What to Watch

The implications for the industry are twofold. First, there is the immediate 'safety tax' that AI companies must now pay. Resources that might have been directed toward improving model performance or expanding features must now be diverted to adversarial testing and more sophisticated moderation layers. Second, the controversy creates a reputational risk that could dampen investor enthusiasm for consumer-facing AI tools that lack ironclad safety protocols. While global platforms like X and Telegram have faced similar criticisms, the specific targeting of ordinary women through a mainstream tool like Doubao brings the issue of 'digital public shaming' into the domestic spotlight in a way that is difficult for regulators to ignore.

Looking forward, the industry should expect a shift toward more transparent safety reporting and potentially the introduction of mandatory watermarking for all AI-generated imagery. The 'fenjue' phenomenon demonstrates that simple keyword filtering is no longer sufficient; AI companies will need to develop more context-aware moderation systems that can detect intent rather than just specific words. For the venture capital community, the focus of due diligence is likely to shift from pure user growth metrics to the robustness of a startup’s ethical framework and its ability to defend against adversarial prompt engineering. The Doubao controversy serves as a stark reminder that in the race for AI dominance, the most successful companies will be those that can balance innovation with the fundamental safety of their user base.
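The gap between keyword filtering and intent detection described above can be illustrated with a minimal sketch. The blocklist terms and the coded substitution below are hypothetical examples for illustration only, not Doubao's actual moderation stack or real 'fenjue' prompts:

```python
# Minimal illustration of why a static blocklist fails against coded prompts.
# BLOCKLIST and the coded phrase are hypothetical, for demonstration only.
BLOCKLIST = {"explicit", "nude", "pornographic"}

def keyword_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked by simple keyword matching."""
    words = prompt.lower().split()
    return any(word in BLOCKLIST for word in words)

# A direct request trips the filter...
print(keyword_filter("generate an explicit image"))        # True: blocked

# ...but a coded substitution passes, even though the intent is identical.
print(keyword_filter("generate a moonlit portrait image"))  # False: allowed
```

Because the coded phrase shares no surface tokens with the blocklist, the filter passes it through; catching it requires a system that models the requester's intent, which is the shift toward context-aware moderation the article anticipates.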