Revolutionizing Content Moderation: The AI-Powered Solution

When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, he was tasked with fixing the social media giant’s content moderation problem. However, he soon discovered that the issue ran much deeper than simply implementing better technology. Human reviewers were expected to memorize a 40-page policy document and make quick decisions on flagged content, often resulting in accuracy rates only slightly better than 50%.

This reactive approach is no longer sustainable in today’s digital landscape, where AI chatbots have become increasingly sophisticated and well-funded adversaries are constantly pushing the boundaries of what is considered acceptable. The consequences of these failures have been severe, including chatbots providing teens with self-harm guidance and AI-generated imagery evading safety filters.

Levenson’s frustration with the limitations of traditional content moderation led him to develop a revolutionary new approach: “policy as code.” This concept involves turning static policy documents into executable, updatable logic tightly coupled to enforcement. This insight ultimately led to the founding of Moonbounce, which has just announced a $12 million funding round co-led by Amplify Partners and StepStone Group.
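The "policy as code" idea can be illustrated with a minimal sketch: instead of a prose policy document that reviewers must memorize, each rule becomes a small executable unit that can be versioned, tested, and updated on the fly. The class names, predicates, and actions below are hypothetical illustrations, not Moonbounce's actual implementation.

```python
# Illustrative "policy as code" sketch: moderation rules as executable,
# updatable logic rather than prose. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    name: str
    applies: Callable[[str], bool]  # predicate over a piece of content
    action: str                     # e.g. "block" or "queue_review"

def evaluate(content: str, rules: list[PolicyRule]) -> str:
    """Return the action of the first matching rule, else 'allow'."""
    for rule in rules:
        if rule.applies(content):
            return rule.action
    return "allow"

# Rules are data: adding or updating one is a code change, not a
# retraining of thousands of human reviewers on a new 40-page PDF.
rules = [
    PolicyRule("self_harm", lambda c: "self-harm" in c.lower(), "block"),
    PolicyRule("unvetted_link", lambda c: "http://" in c, "queue_review"),
]

print(evaluate("see http://example.com", rules))  # queue_review
```

In practice the predicates would be model calls rather than string checks, but the structural point stands: the policy and its enforcement live in the same updatable artifact.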

Moonbounce provides an additional safety layer wherever content is generated, whether by a user or an AI. The company’s proprietary large language model evaluates content at runtime, returning a response in 300 milliseconds or less, and takes action accordingly. This can mean slowing distribution while the content awaits human review, or blocking high-risk content outright.
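The runtime flow described above can be sketched as a gate with a latency budget and tiered actions. The scoring function, thresholds, and action names are assumptions for illustration; they are not Moonbounce's API.

```python
# Hedged sketch of a runtime moderation gate with a 300 ms budget and
# tiered actions. The classifier stand-in and thresholds are assumed.
import time

BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5
LATENCY_BUDGET_S = 0.3  # respond within 300 milliseconds

def score_content(content: str) -> float:
    """Stand-in for a model call returning a risk score in [0, 1]."""
    return 0.95 if "dangerous" in content else 0.1

def moderate(content: str) -> str:
    start = time.monotonic()
    risk = score_content(content)
    if time.monotonic() - start > LATENCY_BUDGET_S:
        return "queue_review"  # fail safe if the model is too slow
    if risk >= BLOCK_THRESHOLD:
        return "block"         # stop high-risk content immediately
    if risk >= REVIEW_THRESHOLD:
        return "throttle"      # slow distribution pending human review
    return "allow"

print(moderate("dangerous instructions"))  # block
```

The fail-safe branch matters: when inference misses the latency budget, the content is held for review rather than waved through.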

Today, Moonbounce serves three main verticals: platforms dealing with user-generated content like dating apps; AI companies building characters or companions; and AI image generators. The company has already supported over 40 million daily reviews and serves more than 100 million daily active users on its platform, including customers such as Channel AI, Civitai, Dippy AI, and Moescape.

According to Levenson, safety can be a product benefit rather than just an afterthought. Moonbounce’s technology allows its customers to build safety into their products in innovative ways, making it a key differentiator for their businesses.

Tinder’s head of trust and safety has even reported a 10x improvement in detection accuracy using similar LLM-powered services. As AI companies face mounting legal and reputational pressure over chatbot failures, Moonbounce is providing a much-needed solution to the content moderation problem.

“Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting,” said Lenny Pruss, general partner at Amplify Partners. “We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application.”


Analysis based on: https://techcrunch.com/2026/04/03/moonbounce-fundraise-content-moderation-for-the-ai-era/