Pinterest has decided to let its users limit the amount of AI-generated content on its platform. But how can it identify the AI creators, asks Tim Green, Programme Director for MEF ID and Data?
On my golf club WhatsApp group, there was a very heated discussion about slow play. Members were shouting at each other about how to solve the problem. Then someone chimed in with a very reasonable but quite bland statement about courtesy and respect.
“AI has been working hard today,” came the cynical response from the group.
The writer was outraged: “That wasn’t AI,” he said, “that was me!”
But was it? His comments bore all the hallmarks of ChatGPT. I assume most readers will know the signs – the bullet points, the clichés, the business speak.
Anyway, it just shows that AI-generated content is everywhere, as is the suspicion that AI-generated content is everywhere.
So what can we do about it?
A few days ago, the design and decor platform Pinterest made an announcement. It said it would let users select “the right balance of human and AI-generated content… with new controls that allow users to decide exactly how much GenAI content appears in their feed.”
Following the tweak, users can dial down Pins in categories that are highly prone to AI modification such as beauty, art, fashion and home decor. They can manage these preferences in settings by visiting “refine your recommendations.”
The update follows another move Pinterest made last May, when it introduced GenAI labels – tags on image Pins showing when an image may have been generated or modified using GenAI.
You can see from these strategies that AI ‘slop’ is becoming a huge issue for online platforms that make it easy for users to upload content. This is especially the case for a company like Pinterest, whose brand is built on homespun, crafty material – the very opposite of the stuff circulated by cynical automated bot farms.
Users mostly don’t like AI content. It overwhelms the site, spoils the ambience and wastes time. They complain that the clothing, accessory or furniture products they see in a well-disguised AI image don’t exist. This is “inspiration” they can’t buy in a shop.
Two things make the situation more complicated. The first is that some human users like to use AI as a tool to make their ideas more exciting. In fact, last year, Pinterest itself rolled out a slate of AI-powered products (like personalised background generation) to help creators come up with ideas and save time.
Should their efforts be branded as slop?
The second, more relevant to the MEF community, is exactly how Pinterest thinks it can spot AI content in the first place.
The company hasn’t explained how its newly launched detection works. One technique may be to look for incriminating metadata in the content – but culprits now find it easy to strip this stuff out.
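To make the metadata point concrete, here is a minimal sketch of what such a check might look like. It scans an image file’s raw bytes for two markers that some GenAI tools really do embed – the IPTC digital source type “trainedAlgorithmicMedia” and a C2PA content-credentials manifest – and then shows how trivially the signal can be removed. This is purely illustrative: Pinterest has not disclosed its method, and the function names here are hypothetical.

```python
# Hypothetical sketch of metadata-based GenAI detection.
# Markers below are real conventions (IPTC digital source type, C2PA),
# but this naive byte-scan is an illustration, not a production method.

GENAI_MARKERS = (
    b"trainedAlgorithmicMedia",  # IPTC digital source type for GenAI imagery
    b"c2pa",                     # C2PA content-credentials manifest tag
)

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw file bytes contain a known GenAI marker."""
    return any(marker in image_bytes for marker in GENAI_MARKERS)

def strip_markers(image_bytes: bytes) -> bytes:
    """Illustrates why the check is fragile: deleting the marker bytes
    defeats detection without visibly changing the image itself."""
    out = image_bytes
    for marker in GENAI_MARKERS:
        out = out.replace(marker, b"")
    return out
```

The asymmetry is the whole problem: the detector has to scan everything, while the uploader only has to run one `replace` before posting.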
Identifying AI content looks set to become one of the big challenges of the age. Already there are multiple tools for doing so. But it’s the old whack-a-mole problem. The incentives (ad clicks, phishing, mischief) to keep pumping it out are just too compelling.
Ultimately the internet needs a way to identify humans, assign IDs to bots and provide an immutable link between a human and their AI agent. It’s a huge challenge, not least because the internet was built before anyone even dreamed of this stuff.
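One way to picture that “immutable link” is as a signed delegation: a verified human vouches for an agent’s ID, and the agent carries that proof on everything it posts. The toy below fakes signatures with HMAC over a shared secret purely for illustration – a real scheme would use public-key signatures and a trusted identity issuer, and every name here is hypothetical.

```python
# Toy delegation chain: human -> agent -> content. Illustrative only;
# HMAC stands in for real public-key signatures, and HUMAN_SECRET
# stands in for a private key held by a verified human.
import hashlib
import hmac

def sign(secret: bytes, message: bytes) -> str:
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

HUMAN_SECRET = b"human-verified-by-identity-issuer"  # hypothetical

# 1. The human delegates to a named agent by signing its ID.
agent_id = "agent:assistant-001"
delegation = sign(HUMAN_SECRET, agent_id.encode())

# 2. The agent attaches its ID and the delegation proof to its content.
post = {
    "author": agent_id,
    "delegation": delegation,
    "text": "A reasonable point about courtesy and respect.",
}

# 3. A verifier holding the verification material checks the link.
def verify(post: dict) -> bool:
    expected = sign(HUMAN_SECRET, post["author"].encode())
    return hmac.compare_digest(post["delegation"], expected)
```

In this picture, my golf-club correspondent could have settled the argument instantly: either the post carries a valid delegation from a bot, or it doesn’t.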
Find out more about the themes discussed – Join the MEF ID & Data Interest Group.


