AI slop is the evolution of spam, in a way.
Like spam, slop is low-quality content, but thanks to artificial intelligence (AI) tools like ChatGPT and Midjourney, it's even easier to produce. Like spam, slop can grow like a weed if left unchecked, overwhelming social media feeds and leaving users unsure of what's real and what's not. Like spam, slop comes in many forms — posts on social media, of course, but also books on Amazon, music on Spotify, articles from less-than-reliable news outlets (and, unfortunately, some reliable outlets) and even, occasionally, papers in peer-reviewed scientific journals.
Snopes has fact-checked spam content since the early days of the internet. It's only natural that we've regularly covered AI-generated content since 2023.
For instance, we have repeatedly checked claims about celebrities supposedly doing good deeds that originated with YouTube videos or Facebook pages that post slop. Former NFL quarterback Peyton Manning was a frequent focus of such stories in June and July 2025.
Animals also frequently appear in slop content. For example, we've reviewed viral videos of rabbits and raccoons jumping on trampolines that were (sadly) fake.
(No, these bouncing bunnies aren't real — TikTok user @rachelthecatlovers)
Finally, of course, Snopes has checked a litany of claims about politicians. While real photos exist of U.S. President Donald Trump and deceased sex offender Jeffrey Epstein, some are AI-generated. We've also disproved an AI-generated speech attributed to Trump and confirmed several instances in which his administration posted AI-generated content, some of which could be considered slop.
The growth of AI slop feels like an inevitable side effect of "enshittification," a word coined by writer Cory Doctorow in 2022 to describe how online platforms like Amazon and Facebook have worsened over time. "First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die," he wrote.
Slop is not good for users or businesses
AI slop isn't designed for humans, according to an article from 404 Media. Instead, it directly targets the algorithms that decide what content to show users. That article likened AI slop's underlying strategy to a "brute force attack," the simplest of hacking strategies: trying every possible combination, one at a time, until something works. What a brute force attack lacks in efficiency, it makes up for in efficacy. Slop games the system by flooding algorithms with AI-generated content until something goes viral.
This is likely why posts made Peyton Manning seem saintly for two months: one post about Manning supposedly doing a good deed went viral, and the people who make AI slop tried to capitalize on the trend.
(Manning's supposed saintliness shown above — Facebook page Magic Clement / Snopes Illustration)
That's right — slop is a business. But slop isn't good for businesses, either.
A 2024 article in New York Magazine documented several individuals who promoted AI-generated content as a side hustle. It said "sloppers" can sell books on Amazon and use music on Spotify to receive royalties, while news articles can be hosted on cheap WordPress blogs filled with advertising links. The money can even come from social media platforms themselves — most have programs that offer creators money based on a post's engagement. Or, as the New York Magazine article described it, "a slop subsidy."
According to a separate 404 Media article, the payments generally aren't substantial, "hundreds of dollars" at most. However, that money can go a lot further in countries like India, Vietnam or the Philippines. Claims that Snopes has fact-checked and found to have originated as AI slop often have a tie to such countries — AI slop "news stories" often link to websites based in Vietnam, for instance.
(This image accompanied a false story about a drifter named Ronald McDonald murdering children across the U.S. Midwest in 1892, which supposedly inspired the McDonald's fast-food chain mascot — @buried__truths/TikTok)
But AI slop makes real people and businesses more difficult to find, because AI tools can generate fake posts much faster than a human can create real ones. As slop crowds feeds and organic reach shrinks, advertising space becomes more valuable, and platforms like Facebook can raise the price of paid advertisements in response.
The end goal for a company like Meta, according to the first 404 Media article, is to "move toward a world where a never-ending feed of hyper niche content can be delivered directly to the people who are into that type of content." That requires a massive amount of content and data collection.
AI slop's real-world impact
As AI slop increases, platforms have placed the onus of figuring out whether something is real or not on the user. Snopes has a page containing a few tips and tricks for identifying AI-generated images, but with how quickly those tools adapt, results might vary. The decision to delegate that job to users in the first place can have genuine negative consequences.
Introducing uncertainty in the form of fake AI slop can cause people to discredit legitimate information or, worse, tune out entirely. As one Forbes writer described it, "When people feel they can no longer trust what they see, they may stop trying altogether. It's easier to not care than to expend the mental energy required to verify every image or story." An op-ed in The Guardian called the effect "profound disorientation."
(One story falsely claimed former NFL star Tom Brady, pictured above in an AI image, donated millions of dollars to victims of the July 2025 Texas floods — Gridiron Master / Facebook)
AI slop has also had an impact in at least one natural disaster, according to a segment on comedian John Oliver's "Last Week Tonight." When Hurricane Helene devastated the Southeast U.S. in late September 2024, fake AI images supposedly showing the storm's aftermath spread widely online. An ABC newscaster featured in the "Last Week Tonight" piece noted that first responders use social media to determine which areas need what assistance. "[Slop] was creating a lot of noise, and it was making it more difficult for them to act quickly," she said.
Republicans also used slop images to criticize then-President Joe Biden's response to the disaster, even when told the images were AI-generated. One Republican National Committee member, informed that a viral image was created by AI tools, responded that "it doesn't matter" where the photo came from.
(This image shared on X didn't actually show a real girl crying and holding a puppy on a boat in the aftermath of Hurricane Helene.)
As previously mentioned, Trump's administration has repeatedly posted AI-generated content, some of which could be classified as slop. For instance, Snopes fact-checked one AI-generated video from July 2025 supposedly showing former President Barack Obama being arrested. In February 2025, we also fact-checked a video posted by Trump supposedly showing the Gaza Strip, currently occupied by Israel, redeveloped as a beachfront resort.
In a parody of Trump's unique style of posting, the X account of California Democratic Gov. Gavin Newsom's press office has also posted content that could be described as slop.
