Tech giants Apple and Google are actively steering users toward AI apps that create non-consensual nude images, including tools rated suitable for minors, while pocketing millions in revenue despite policies explicitly banning such content.
Story Snapshot
- Apple and Google app stores promote AI “nudify” apps through search results and ads, earning over $36 million from apps generating $122 million in revenue
- 31 deepfake apps capable of creating non-consensual nude images are rated suitable for minors despite rising school scandals
- Apple removed only 15 apps after media inquiry, leaving dozens still available; Google has taken no reported action
- Investigation reveals platforms profit from policy violations while enforcement remains opaque and reactive
Tech Giants Profit from Policy Violations
The Tech Transparency Project revealed that searches for terms like “nudify,” “undress,” and “deepnude” in Apple’s App Store and Google’s Play Store surface apps designed to digitally strip clothes from real people’s photos using AI. These platforms not only host the apps but actively promote them through autocomplete suggestions, search rankings, and advertising. The investigation found 18 such apps in Apple’s App Store and 20 in Google’s Play Store as of April 15, 2026. Both companies collect 30 percent commissions on in-app purchases and subscriptions, generating an estimated $36 million from apps that have collectively earned over $122 million while violating their own content policies.
Minors Exposed to Harmful Technology
The investigation uncovered a disturbing reality: 31 apps capable of generating non-consensual nude images carry age ratings suitable for minors, despite both stores’ explicit policies against sexual content. The finding comes amid escalating school deepfake scandals in which students have been victimized by AI-generated explicit images. Apps like FaceTool and FaceSwap Video by DuoFace present themselves as generic photo editors or face-swapping tools, allowing them to slip through app store review. Their combined 483 million downloads demonstrate the massive scale of potential harm, particularly as these tools become accessible to tech-savvy teenagers looking to exploit classmates.
Reactive Enforcement Exposes Double Standards
Apple removed 15 apps, but only after Bloomberg media inquiries prompted by the TTP report, and claimed that some of the identified apps did not violate its policies despite their deepfake capabilities. Google has taken no reported action and did not respond to questions about search promotion, minor-rated apps, or revenue. Dr. Anne Helmond of Utrecht University’s App Studies Initiative characterized the enforcement as “uneven and opaque,” noting that generic-looking apps easily pass review despite their potential for serious misuse. This reactive approach reveals a troubling pattern: platforms move to protect their reputations only when public pressure mounts, not to genuinely safeguard users from predatory technology.
The enforcement gap reflects a deeper problem with Big Tech gatekeeping. Apple and Google control approximately 99 percent of the mobile app market, giving them near-total power over what reaches consumers. Yet this investigation demonstrates they’re using that power to funnel users toward harmful apps while collecting substantial revenue. The platforms claim to investigate violations when reported, but their search algorithms and advertising systems actively promote the very content their policies prohibit. This undermines any assertion that they’re serious about protecting users, especially children, from AI-generated exploitation.
Broader Implications for Privacy and Consent
The proliferation of these apps accelerates the normalization of non-consensual content creation, eroding fundamental privacy rights and consent principles. Women, students, and celebrities become targets for digital exploitation without their knowledge or permission. The technology builds on earlier deepfake tools like DeepNude, which was shut down in 2019 after massive backlash, but has resurged with advances in generative AI between 2023 and 2024. Recent cases underscore the danger: a 37-year-old Ohio man downloaded more than 24 apps to create AI nudes and deepfakes of minors, and xAI’s Grok platform generated explicit images of children, triggering investigations.
The economic incentives driving this problem are clear. App developers monetize AI misuse, platforms profit from their cut, and enforcement remains deliberately opaque to avoid accountability. This scandal pressures Apple and Google to overhaul how they review AI-capable apps and manage search algorithms, but the question remains whether they’ll prioritize user protection over profit margins. The episode reinforces growing concerns across the political spectrum that powerful tech companies operate with impunity, putting revenue ahead of basic decency and the safety of the American people they claim to serve.
Sources:
Apple and Google Are Steering Users to Nudify Apps – Tech Transparency Project
Apple, Google offer ‘nudify’ apps despite policies against them – The Straits Times
Apple and Google help users find ‘nudify’ apps – The Chosun Daily
Apple and Google accused of promoting AI ‘undressing’ apps – CyberNews