AI TEXT-TO-IMAGE COMMUNITY ART PLATFORMS: THE PERILS (first published on Medium)
- Pam Saxby

- Oct 31
- 3 min read
Updated: Nov 14

Hi!
I live in South Africa, where I write about policy and legislative developments of interest to the legal fraternity. I also run social media accounts for one of my clients.
Last year, I joined a popular international AI text-to-image community ‘art’ platform to generate images of the quality I need to draw attention to the alerts I post on my client’s behalf. At the time, free online text-to-image apps struggled to generate images combining the South African flag with a banner describing whatever policy or legislative development I had in mind.
The community I joined has a set of standards that, among other things, describe the type of images members should mark as adult content or ‘not-safe-for-work’ (NSFW). These include images portraying women with exaggerated body parts in suggestive poses. According to those very standards, the platform is used by unsupervised children — which is one reason members are required to mark any adult content as ‘NSFW’.
Each member of the community has an account with settings that include a safe browsing option. Theoretically, that should protect any member choosing the safe option from seeing NSFW content when browsing images published on the platform’s website.
The rules for daily competitions clearly state that NSFW entries aren't allowed. And entering competitions is the best way to attract ‘followers’.
Unfortunately, neither the community standards nor the competition rules are adequately enforced. As a result, images of voluptuous, often provocatively clad young women in suggestive poses find their way into competitions and appear regularly in various website searches (where they should be labelled NSFW but rarely are). Members generating such images simply have to tag them with ‘girl’, ‘woman’, ‘female’, ‘fantasy’ or ‘beauty’ (or a combination of two or more of those words) to have them featured among the results of a website search for those categories of ‘art’.
Innocent images of pre-adolescent girls engaged in the harmless activities one would normally associate with children of that age also feature among the results of searches for ‘girl’ images — sending rather worrying signals in the context of child safety. Yet when I alerted the platform to this, I was rebuffed. Earlier queries about NSFW images entered in competitions whose rules prohibit them were met with similar responses.
There are widespread concerns internationally about the extent to which AI text-to-image models could entrench perceptions of women as objects and commodities.
One example is ‘Perpetuating misogyny with generative AI: how model personalisation normalises gendered harm’ (Wagner & Cetinik, University of Zurich). Another is ‘The AI art movement has an objectification problem’ (NFT NOW).
As far back as January 2021, Jaimee Swift and Hannah Gould penned a warning for UNICEF USA that “hypersexualized models of femininity in the media affect the mental, emotional and physical health of girls and women on a global scale”.
Most members of the community platform to which I belong live in the US, where data is apparently being collected in anticipation of drafting legislation to address this issue, among many others related to AI. Meanwhile, the ‘Take It Down Act’ passed in May 2025 now criminalises the non-consensual publication of intimate images, including AI-generated deepfakes, material whose creation may well entail scraping the web for AI text-to-image work.
I’m hoping this article will attract the attention of an investigative journalist writing for US mainstream media so that more people are made aware of what’s happening. Meanwhile, perhaps publishing it here will prick a few consciences ...