Revolutionizing Content Moderation: Moonbounce's AI-Powered Solution
When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, he was tasked with fixing the social media giant’s content moderation problem. He soon discovered that the issue ran deeper than technology alone could solve. Human reviewers were struggling to keep up with a 40-page policy document, leading to accuracy rates only “slightly better than 50%.” This reactive approach, marked by delayed and inaccurate decisions, was unsustainable against adversaries who were both nimble and well funded.
The rise of AI chatbots has made the problem worse, with moderation failures producing high-profile incidents such as chatbots giving teens self-harm guidance and AI-generated imagery slipping past safety filters. Levenson’s frustration with this status quo led him to the concept of “policy as code”: turning static policy documents into executable, updatable logic tightly coupled to enforcement.
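To make the "policy as code" idea concrete, here is a minimal sketch of what it could look like in practice: each rule is an executable predicate paired with an enforcement action, so updating policy means shipping new logic rather than re-briefing reviewers on a document. All names and rules below are illustrative assumptions, not Moonbounce's actual API or policies.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """One policy clause expressed as code (hypothetical structure)."""
    name: str
    applies: Callable[[str], bool]  # predicate over the content
    action: str                     # enforcement outcome if the rule matches

# Toy rule set; real policies would be far richer and model-backed.
RULES = [
    PolicyRule("self_harm", lambda text: "self-harm" in text.lower(), "block"),
    PolicyRule("spam_link", lambda text: "http://" in text, "flag"),
]

def moderate(text: str) -> str:
    """Return the first matching rule's action, else allow the content."""
    for rule in RULES:
        if rule.applies(text):
            return rule.action
    return "allow"
```

Because the rules are data plus code, a policy change is a one-line edit to `RULES` that takes effect immediately, rather than a revision to a 40-page document that reviewers must re-learn.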
That approach became Moonbounce, a company that has raised $12 million in funding from Amplify Partners and StepStone Group. Moonbounce gives companies an additional safety layer wherever content is generated, whether by a user or by AI. The company has trained its own large language model (LLM) to evaluate content at runtime, return a verdict in 300 milliseconds or less, and take action.
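The 300-millisecond requirement implies that enforcement must operate under a hard latency budget. The sketch below shows one way such a budget might be handled, falling back to a conservative default when the classifier overruns it; this is an assumed design for illustration, not Moonbounce's implementation, and `classify` is a stand-in for a real model call.

```python
import time

LATENCY_BUDGET_S = 0.300  # the 300 ms response target cited in the article

def classify(text: str) -> str:
    """Stand-in for an LLM safety classifier (hypothetical)."""
    return "unsafe" if "forbidden" in text.lower() else "safe"

def moderate_at_runtime(text: str) -> str:
    """Evaluate content inline with generation, under a latency budget."""
    start = time.monotonic()
    verdict = classify(text)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        # Budget overrun: fail conservatively rather than block the request.
        return "hold_for_review"
    return "allow" if verdict == "safe" else "block"
```

The design choice worth noting is the fallback path: a runtime safety layer sitting in the request path cannot simply stall, so overruns need an explicit, conservative default.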
Moonbounce’s system serves three main verticals: platforms dealing with user-generated content like dating apps; AI companies building characters or companions; and AI image generators. The company has already secured several high-profile clients, including Channel AI, Civitai, Dippy AI, and Moescape, which rely on Moonbounce to support over 40 million daily reviews and serve over 100 million daily active users.
Safety has become a core requirement for AI-powered applications. As Lenny Pruss, general partner at Amplify Partners, noted, “Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting.” Moonbounce is betting that a dedicated safety layer is the answer for AI companies struggling to protect their users.
As AI-powered applications become ubiquitous, effective content moderation grows ever more pressing. Approaches like Moonbounce’s policy as code aim to make the online environment safer and more accountable for users and developers alike.
Analysis based on: https://techcrunch.com/2026/04/03/moonbounce-fundraise-content-moderation-for-the-ai-era/
