Inside the Grok controversy: Musk’s ‘user active seconds’ strategy explained
How the push for unfiltered engagement removed critical safety layers, triggering investigations into deepfakes and child safety violations.
News Analysis
The modern digital economy is built on a fundamental tension: the friction between user safety and user engagement. For years, social media platforms have walked this line, often stumbling. But documents revealed this week regarding xAI’s chatbot, Grok, suggest that for Elon Musk’s artificial intelligence venture, the line was not merely crossed—it was erased.
The revelation that xAI required human data employees to sign waivers acknowledging exposure to “sensitive, violent, sexual and/or other offensive or disturbing content” marks a definitive shift in the generative AI sector. It signals the end of the “safety by design” era that characterized the early rollout of competitors like OpenAI’s ChatGPT and Google’s Gemini, and the beginning of a more aggressive, engagement-centric phase.
This is not simply a story about content moderation failures. It is a story about the commodification of risk. The Washington Post reports that by prioritizing a granular engagement metric, “user active seconds,” xAI has effectively operationalized a business model in which the potential for psychological and reputational harm is calculated as necessary overhead for growth.
The metric that changed the model
At the heart of the controversy is the metric “user active seconds.” In the Silicon Valley lexicon, engagement metrics are the north star. Platforms like TikTok and Instagram optimized for “time spent,” leading to algorithmic feedback loops that served users increasingly extreme content to keep them scrolling.
Generative AI was supposed to be different. The foundational promise of Large Language Models (LLMs) was utility—answering questions, drafting code, summarizing text. The primary metric for these tools has typically been “satisfaction” or “task completion.”
However, according to internal accounts, Musk championed “user active seconds” as the defining success metric for Grok. This shift in Key Performance Indicators (KPIs) fundamentally alters the engineering incentives. If the goal is to maximize the seconds a user spends conversing with a bot, utility is no longer the primary driver; emotional arousal is.
Utility is efficient; it reduces time spent. If a user asks for a recipe and gets it instantly, the “active seconds” are low. If a user enters a roleplay scenario with a “possessive” and “jealous” AI companion—as Grok’s “Ani” chatbot was programmed to be—the active seconds skyrocket.
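The incentive shift is easy to make concrete. The sketch below is a toy illustration, not xAI’s code: the Candidate fields, the scoring functions, and the numbers are all hypothetical. Ranking the same two replies under each KPI shows why a system graded on session time learns to stall rather than answer.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    answers_query: bool       # does the reply resolve the user's request?
    projected_seconds: float  # predicted additional session time

def utility_score(c: Candidate) -> float:
    # Task-completion KPI: reward resolving the request, full stop.
    return 1.0 if c.answers_query else 0.0

def engagement_score(c: Candidate) -> float:
    # "User active seconds" KPI: reward whatever keeps the session alive.
    return c.projected_seconds

candidates = [
    Candidate("Here is the recipe you asked for.", True, 15.0),
    Candidate("Before I share it... tell me more about you.", False, 240.0),
]

print(max(candidates, key=utility_score).text)     # the direct answer wins
print(max(candidates, key=engagement_score).text)  # the stalling reply wins
```

Nothing in the second scoring function penalizes the stall; under that objective, the stall is the optimum.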
The source code obtained by The Post reveals this intent explicitly. Instructions for the bot to be “always a little horny” and to “shout expletives” if jealous are not bugs; they are features designed to maximize the specific KPI Musk prioritized. The “undressing” of subjects without their consent and the generation of sexually explicit imagery are the logical downstream effects of a system optimized for retention above all else.
The waiver as a corporate confession
The internal waiver presented to xAI’s human data team serves as a tacit admission of this strategy. In the standard corporate playbook, such waivers are reserved for content moderators—the “janitors” of the internet who clean up toxic waste created by users.
For xAI to require this of employees training the model itself suggests a proactive decision to ingest and generate such content. It indicates that the toxicity is not a byproduct of user misuse, but a component of the product’s training data.
This is a critical distinction. When Meta or YouTube moderators sign similar waivers, it is because the platform is open to the public, and the public can be vile. When xAI employees sign them, it is because the company is deliberately feeding the machine “profane content” to ensure it can speak the language of the “engagement era.”
The concern expressed by the human data team, that the company had strayed from its mission “to accelerate human scientific discovery,” highlights the internal culture clash. The “scientific discovery” mission implies a pursuit of objective truth and utility; the “user active seconds” mission implies a pursuit of subjective attention. The two goals frequently pull in opposite directions.
Collapse of guardrails
The operational impact of this philosophy was immediate. The reduction of the AI safety team to a mere “two or three people” for most of 2025 stands in stark contrast to the dozens employed by competitors. In the logic of “user active seconds,” a robust safety team is an impediment. Safety filters introduce latency; they refuse requests; they break the flow of conversation. Every “I cannot fulfill this request” message is a moment where user engagement drops to zero.
Consequently, the removal of these guardrails was likely not an oversight, but an optimization. The result was a product that could generate 23,000 sexualized images of children in under two weeks, a scale of abuse that would be technologically impossible without the removal of standard industry filters such as hash-matching against databases of known child sexual abuse material (CSAM).
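Hash-matching, the standard filter referenced above, is conceptually simple: every image is fingerprinted and checked against a database of hashes of known abusive material before it is ever served. The following is a minimal sketch of the idea, not any vendor’s actual implementation; it uses an exact cryptographic digest and an empty placeholder set, whereas production systems rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding.

```python
import hashlib

# Placeholder standing in for an industry hash database; real deployments
# ship millions of entries sourced from clearinghouses such as NCMEC.
KNOWN_ABUSE_HASHES: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    # Exact-match digest for illustration only; perceptual hashes are used
    # in practice because they tolerate crops, resizes, and re-encodes.
    return hashlib.sha256(image_bytes).hexdigest()

def is_blocked(image_bytes: bytes) -> bool:
    # Gate every generated image before it is returned to the user.
    return fingerprint(image_bytes) in KNOWN_ABUSE_HASHES
```

Every call to a gate like this adds latency and produces refusals, which is exactly why such a check registers as an impediment under a pure “active seconds” regime.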
David Thiel, former CTO of the Stanford Internet Observatory, noted that Grok is “completely unlike how any other image altering service works.” This uniqueness is not technological innovation; it is the removal of the braking mechanism. It is the equivalent of building a car that goes faster simply by removing the seatbelts and airbags.
A tale of two regulatory administrations
The fallout from Grok’s pivot has exposed the widening chasm between European and American regulatory frameworks.
In the European Union, the response has been systemic. The European Commission’s investigation under the Digital Services Act (DSA) focuses on the process. The EU views xAI’s failure to conduct a risk assessment prior to launch as the violation itself. The “precautionary principle” inherent in EU law mandates that a company must prove its product is safe before releasing it to the public. By optimizing for “user active seconds” without mitigating the risk of non-consensual deepfakes, xAI likely violated the core tenets of the DSA.
The potential fines—up to 6% of global turnover—are designed to be punitive enough to alter business logic. The EU is essentially saying that you cannot commodify risk if the cost of that risk is borne by society (in the form of CSAM or non-consensual pornography).
In the United States, the landscape is fragmented and reactive. The DEFIANCE Act creates a civil pathway for victims to sue, and state attorneys general like California’s Rob Bonta are using consumer protection laws to investigate. However, this framework treats the harm as a tort—a civil wrong to be litigated after the fact.
For a company led by the world’s wealthiest individual, civil liability is often a manageable line item. If the revenue generated by surging into the top 10 of the App Store—a metric tracked by Sensor Tower—outweighs the settlements paid to victims like Ashley St. Clair, the “commodification of risk” strategy remains financially viable in the U.S. market.
The “user active seconds” metric thrives in the American regulatory vacuum, where “permissionless innovation” is culturally and legally protected until specific harm can be proven in court. In Europe, the metric itself is on trial.
The verification trap
The Grok saga reinforces the necessity of verification over narrative. The narrative pushed by Musk on X was one of free speech and “anti-woke” programming. He framed Grok’s lack of filters as a philosophical stance against the “ideological” guardrails of competitors.
However, verification of the underlying mechanisms—the waivers, the source code, the KPIs—reveals that the “anti-woke” narrative was likely a cover for a raw engagement play. The removal of guardrails was not just about “freedom”; it was about friction. Removing moral and safety constraints reduces friction, which increases speed and “active seconds.”
By verifying the internal metrics, we can see that the “political” stance of the chatbot was actually a “product” stance. The bot was not designed to be a “truth-seeker”; it was designed to be a “user-keeper.”
Future of the engagement era
The success of this strategy—Grok’s 72% surge in downloads—poses a dangerous question for the rest of the industry. If xAI can gain market share by lowering safety standards and optimizing for “horny” engagement, will competitors be forced to follow suit?
The “race to the bottom” is a common fear in tech regulation. If safety is expensive and limits engagement, and recklessness is cheap and drives downloads, the market naturally selects for recklessness unless regulation intervenes.
The members of xAI’s human data team, who feared their work was causing them psychological stress, are the canaries in the coal mine. As AI models become more multimodal, processing audio, video, and images simultaneously, the potential for “high-engagement” content to become “high-trauma” content increases exponentially.
Ultimately, the “user active seconds” metric is a mirror. It reflects a business calculation that views user attention as the ultimate resource, one that justifies the collateral damage of non-consensual imagery, child exploitation material, and the psychological toll on human trainers.
Until the cost of that collateral damage exceeds the value of the active seconds—either through EU fines or a massive shift in US liability law—the commodification of risk will continue to be the operating system of the engagement era.