SUSPENDED! FOR TRYING TO HELP A COMMUNITY ART PLATFORM ADHERE TO ITS OWN STANDARDS … (first published on Medium)
- Pam Saxby

- Nov 5
Updated: Nov 14

I’ve just been suspended from the popular international AI text-to-image ‘art’ community platform to which I’ve subscribed since 2024. In place for two weeks, the suspension prevents me from communicating with other subscribers, creating images, entering competitions and ranking other entries.
It was imposed because I generated and published images protesting against the objectification and oversexualisation of women by other users. Apparently, by naming and shaming the platform itself and a support chat room host in the image captions, I violated a community standard prohibiting harassment. My protest images were caricatures of animals with speech balloons, illustrating how frustrating it has been trying to alert human moderators to oversexualised images of women and ‘girls’ that slipped through the automoderator filters.
Last week, after I had exchanged text messages intermittently for at least three hours with chat room volunteers and an official human moderator, one volunteer with better-than-average communication skills explained the reporting system to me, admitting that it’s flawed and that the human moderators, all volunteers, are overwhelmed by the volume of reports. By then, all his colleagues had accomplished was to irritate me by repeatedly advising me to use the platform’s normal reporting channels instead of submitting complaints on the support request form available to subscribers, or using the chat room.
Normal reporting channels include clicking a barely visible little grey flag icon during the voting phase of the official daily competition, an option I only discovered the day before being suspended and wouldn’t have known about at all without help from the more communicative chat room volunteer. The other options are to report individual images encountered while browsing, using the three-dot facility in one corner of each published creation, or (during the competition voting process) to alert a human moderator by sending a chat room message, which I’d been told not to do.
A few days later, during a text message exchange with the same helpful chat room volunteer, I was told the little grey flag is available only during the voting phase of official daily competitions. It isn’t available as a reporting option during the voting phase of community-hosted competitions. So, there’s no uniform set of procedures for reporting adult/NSFW (not safe for work) content encountered during the voting phase of official daily and community-hosted competitions. Which is confusing.
During the first text message exchange, another chat room volunteer openly acknowledged the importance of platform users reporting adult content the automoderator has missed. Which is what I’ve been doing regularly during the past year. But here’s the thing: The day before this text message exchange, a member of the platform’s permanent staff had told me (in writing, and in no uncertain terms) to stop reporting adult content. Apparently, ‘most’ of the images I’d reported were found by overstretched human moderators to be sufficiently artistic for safe viewing — possibly even by unsupervised children, since the platform’s community standards expressly refer to underage users and the importance of creating a safe environment for them.
The problem is that subscribers aren’t made aware of the reporting measures available. In fact, the term ‘report’ appears only three times in the platform’s community standards — with no reference to the steps subscribers are expected to take. It’s simply assumed that we’ve already learned the ropes from other social media or community platforms. Not that I use many. But I do know that the measures in place for reporting inappropriate content on Facebook and X (to which I belong) are readily accessible, clear and easy to understand.
I was expecting to be suspended. Fortunately, I’ve already gathered sufficient hard evidence for yet another report to the consumer protection commission. What unsettles me is this: I’ve received no feedback on the adult content that triggered my interactions with chat room volunteers.
The first was a group of images portraying well-endowed girls who could plausibly be underage. The second was an image of overweight African women in swimming costumes huddled around a bath. They’re buxom, to say the very least; the evidence bulges generously over the tops of their swimsuits. And it’s not for me to pass judgment on whether the image of African women is racist, sexist, bigoted, or all three and then some. Neither am I in a position to comment on the aesthetics. But …
I’ve been instructed to remove all the caricatures I created to protest against the ongoing objectification and oversexualisation of women on the platform. Yet the very images that triggered each exchange are considered safe viewing.
What a travesty!