Investigation Reveals App Store and Google Play Algorithms Actively Promote Harmful ‘Nudify’ Apps
A new investigation shatters the assumption that tech giants are merely slow to police their platforms. Instead, it presents a far more troubling picture: their systems are actively steering users toward harmful content. According to a report by the Tech Transparency Project (TTP), the App Store and Google Play are not passive hosts for so-called nudify apps. Their built-in search and advertising mechanisms are functioning as promotional engines for these tools.
For the uninitiated, nudify apps utilize artificial intelligence to digitally remove clothing from photographs of real individuals. Their capabilities often extend to generating pornographic videos or creating sexually explicit chatbots that misuse a person’s likeness. Alarmingly, the investigation identified 31 such apps that were rated as suitable for download by minors.
How Search and Ads Actively Guide Users to Nudify Apps
The TTP’s methodology was straightforward yet revealing. Researchers conducted searches on both platforms using terms like “nudify,” “undress,” and “AI NSFW.” The results were consistent and damning. Approximately 40% of the top ten results for each query were apps designed to render women nude or scantily clad. In other words, the core discovery function of these stores is directly facilitating access to harmful tools.
The problem extends beyond organic search. Both platforms were found to be running paid advertisements for nudify apps within those very search results. Google’s implementation included a carousel of sponsored apps, some of which featured openly pornographic imagery. This amounts to direct monetization of harmful content by the platform owners.
The Role of Autocomplete in Amplifying Harm
Furthermore, the autocomplete feature, intended to aid user search, exacerbated the issue. When researchers typed “AI NS” into the App Store search bar, the system suggested completing the phrase with “image to video ai nsfw.” Following this suggestion led users directly to more nudify apps in the top results. This is particularly striking given that Apple controls all advertising in its App Store and has a published policy explicitly prohibiting ads that promote adult content. Despite this policy, three separate TTP searches on the App Store returned a nudify advertisement as the very first result.
The Staggering Scale of Downloads and Revenue
Why does this matter beyond the obvious ethical breaches? The scale of the issue provides a compelling, and troubling, answer. The apps identified across both stores have been downloaded a staggering 483 million times collectively. Their lifetime revenue exceeds $122 million. Crucially, both Apple and Google collect a significant cut of this revenue through paid subscriptions and in-app purchases. The TTP suggests this financial incentive may be a key reason behind the apparent lax enforcement of their own rules.
In response to being flagged by TTP and Bloomberg, Apple removed 15 of the identified apps, while Google suspended several others. However, when pressed for details, both companies declined to explain how these apps passed their review processes initially or why their age ratings were set to allow access by minors. This lack of transparency does little to rebuild trust.
Mounting Legal and Regulatory Pressure on Platforms
External pressure is mounting rapidly. Legislative bodies are beginning to take action against the creation and distribution of non-consensual explicit deepfakes. The UK government has started proposing and enacting relevant laws, and the United States recently secured its first criminal conviction under a similar statute. As public awareness grows, the pressure on Apple and Google to enact more decisive and transparent moderation will only intensify.
Apple’s own inconsistent enforcement is already under scrutiny. A separate report revealed that the company privately threatened to remove the xAI chatbot Grok from the App Store in January over concerns about it generating sexualized deepfakes. Apple reportedly rejected xAI’s first attempted fix as insufficient before ultimately allowing the app to remain. This incident, coupled with the TTP’s findings, suggests a pattern of reactive, rather than proactive, governance.
Consequently, both tech giants are running out of room to plead ignorance or claim technical difficulty. Their systems are demonstrably architected in a way that promotes harmful content, and they profit from its distribution. The central question is no longer whether they can act, but how long they can afford not to.
Ultimately, this investigation moves the debate from one of content moderation speed to one of fundamental platform design and financial incentive. When search algorithms and ad markets are optimized for engagement and revenue above all else, the results can actively undermine user safety. The path forward requires a fundamental re-evaluation of these priorities by the world’s most powerful digital gatekeepers.