The Quiet Spread Of AI-Generated 'Brainrot' Across Social Media
Authored by Jacob Burg via The Epoch Times,
Elephants drop-kicking crocodiles while breaking the laws of physics, bewildering deepfakes of politicians and deceased public figures, and seemingly animated children’s videos of Jesus fighting the Grinch: generative artificial intelligence (AI) is sweeping across online video platforms and may now account for a sizable portion of YouTube’s short-form video feed, recent research shows.
Generative AI tools, which were accused last year of driving users into psychiatric wards and allegedly helping multiple depressed teenagers take their own lives, are also inspiring new genres of online content.
AI-generated images and clips were found in 21 percent of the 500 short-form videos screened in a study released last November by the video editing software company Kapwing, with some of the channels analyzed amassing millions of subscribers and billions of views.
Some, such as India-based channel Bandar Apna Dost, were estimated to generate millions of dollars in YouTube ad revenue annually. These channels are found worldwide, with those based in Spain and South Korea garnering the “most devoted viewerships,” according to the study.
“Generative AI tools have dramatically lowered the barrier to entry for video production,” Rohini Lakshané, an interdisciplinary technology researcher, told The Epoch Times.
“So, the channel can churn out massive amounts of content and maintain a high frequency of posting. Channels using these methods can flood recommendation feeds simply by volume, irrespective of intrinsic quality.”
Here’s what we know about “brainrot” and “AI slop,” what’s at stake for viewers and content creators, and why you might want to pay closer attention when browsing social media.
‘Brainrot’ and ‘AI Slop’
Kapwing determined that 33 percent of the videos it screened after creating a new account on YouTube appeared to have the hallmarks of “brainrot” content, which Oxford defines as “trivial or unchallenging” content considered to deteriorate a person’s “mental or intellectual state.”
Existing long before the advent of generative AI, “brainrot” includes memes, humor, nonsensical skits, videos of children or animals engaging in “silly” actions or behaviors, and other forms of content that minimally engage users intellectually or convey little or no meaning beyond randomness or absurdity.
Combining generative AI with “brainrot” characteristics gives rise to the emerging genre many refer to as “AI slop,” which Kapwing defines as “careless, low-quality content” generated with AI tools that is intended to “farm views and subscriptions or sway political content.”
By that definition, what counts as “brainrot” or “low-quality” content can vary from person to person. For example, one person might describe all short-form “comedy” videos as “brainrot,” while another might find them genuinely entertaining and choose a different label.
The same may be said about “AI slop,” as some content creators, such as Montreat College language professor T. Michael Halcomb, use generative AI tools as an extension of their own academic work.
Halcomb, who also parodies “AI slop” and “brainrot” with his student-led comedy club, told The Epoch Times that he uses AI tools to make short-form videos based on posts he writes on his blog, deploying the technology to create video clips, clone his voice for narration purposes, and generate text on the screen.
There is a lot of overlap between users who maintain a human element while taking advantage of AI tools and those merely using AI to create what others would call “slop”: mass-produced content aimed at farming views, he said.
“I do think the human element isn’t completely gone. It just allows humans to speed up things,” Halcomb said, adding that even some of the so-called “AI slop” channels such as Spain’s “Imperio de jesus,” which features AI-generated animations of Jesus fighting Satan and the Grinch, play into “shock humor” and absurdism—driving curiosity among viewers.
There’s also a “lore” element to many of these videos, as the channel above has repeated story tropes that build on previous videos, which Halcomb compared to “inside jokes” in comedy, allowing one video to lead to another, and so on.
On the Bandar Apna Dost channel, which, according to its creators, features a “realistic monkey in hilarious, dramatic, and heart-touching human-style situations,” the videos use AI for everything from visuals to background audio.
The videos are popular, Lakshané says, because they mimic scenes from popular Indian films and display “the trope of a hypermasculine male protagonist who commits illegal or abusive acts or commits superhuman feats, and, at times, has an outlandish amount of social or political power.”
“The videos in the channel are disjointed and do not follow a storyline or narrative. No prerequisite knowledge or context is required to watch the short videos. There are characters, such as one with a likeness of the Incredible Hulk—named Hulku—which gives the videos an appeal and broad demographic reach,” she said.
Other channels use AI less overtly, or in ways that are harder for some viewers to detect, such as a video found by The Epoch Times that “looks” like real safari footage of one elephant protecting another from a crocodile.
But once viewers see the second elephant drop-kick the crocodile more than 30 feet in a way that defies the laws of physics, it becomes much clearer that the video was made with AI, even though its creator apparently went to great lengths to ensure that the watermark of Sora, OpenAI’s video generation tool, appears in only a single frame, three seconds into the eight-second video.
A screen displays examples of AI prompt-created videos, made with xAI’s Grok app, in London on Jan. 12, 2026. Leon Neal/Getty Images
Risks of AI-Generated Videos
As with the example above, many AI videos are intentionally generated to look as lifelike as possible, which increases the risk of deception and misinformation online, some organizations say.
AARP, a nonprofit and advocacy group for Americans aged 50 and older, warned last month that “AI slop” videos are making it increasingly difficult for some users to “detect what is real.”
The organization noted ChatGPT creator OpenAI’s decision in October 2025 to block “disrespectful” AI-generated “deepfake” videos depicting the likeness of Rev. Dr. Martin Luther King Jr. in its Sora 2 video creation app.
Quickly generated “AI slop” deepfake videos also permeated online platforms throughout the 2024 presidential election, and the Brennan Center for Justice warned last March that AI videos could have serious impacts on future voting cycles.
Science researchers are worried this phenomenon may creep into medical information and educational videos, where there are “specific hazards to learning from purportedly educational videos made by AI without the use of human discretion,” according to a study released by the National Library of Medicine in November 2025.
That study screened 1,082 online videos in the “preclinical biomedical sciences” educational category and found that 5.3 percent appeared to be “AI-generated and low quality,” suggesting that adoption of the technology in online medical content remains limited but may be gradually increasing.
Even in the absence of misinformation, the “AI slop” videos that are sweeping across YouTube and TikTok have psychological impacts on users, Jeff Burningham, a tech industry venture capitalist and author of “The Last Book Written by a Human: Becoming Wise in the Age of AI,” told The Epoch Times.
“I think it’s pretty probably self-evident as to why it’s becoming popular, and it’s not something that I or we as a collective society should be proud of,” he said.
“It preys on kind of our most base desires. And I think it’s an indication of dopamine over discernment … [and] engagement over insight.”
Burningham says in his book that the real danger with AI isn’t necessarily the technology itself, but the “atrophy of human attention and awareness.”
A woman holds a phone displaying the YouTube app, in this file photo, on Aug. 11, 2024. Oleksii Pydsosonnii/The Epoch Times
Results of Experiment
The Epoch Times created a new YouTube account using a new email address on a private web browser to prevent previous browser cookies from impacting the type or genre of videos first seen.
We then analyzed the first 300 short-form videos shown on YouTube after initially logging into the account and found that the vast majority—88 percent—had the characteristics of “brainrot,” with little or no meaning beyond the absurd, random, or attention-grabbing.
However, some of these videos fell into gray areas, particularly within the amorphous “comedy” genre, making it difficult to pin down exactly how many would fit the “brainrot” category, which Halcomb says is largely subjective.
In our analysis, only 8 percent of the first 300 videos shown to the new YouTube account appeared to be AI-generated. Some used AI-generated images, while others, such as the elephant video and another featuring a woman hiding in her car’s hatchback to escape a horde of attacking wolves, appeared to be fully AI-generated video clips.
We did not see videos from any of the channels mentioned in Kapwing’s study under its “Most Subscribed AI Slop YouTube Channels,” which may come down to location or other variables, particularly if previous browser cookies can influence which videos a new account sees on the platform.
Another possibility is that AI slop, even though it is growing in popularity, is simply not yet outpacing the other forms of so-called “brainrot” that The Epoch Times did see in its experiment: meme videos, people performing skits in front of their cameras, and bizarre attempts at comedy that are otherwise filmed and edited by real people.
Even if AI-generated content explodes in prevalence as some predict, its rise may not be apocalyptic, so long as humanity treats this moment of existential reckoning as an opportunity to evolve, Burningham said, describing AI technology as a “cosmic mirror to humanity.”
“A reflection can be a powerful thing, because you see yourself a little more clearly, and with that, you know additional clarity—you’re able to pivot or change. Now, will humans do this? I don’t know,” he said.
“It’s hard to be optimistic, but this is the opportunity that I think that AI allows us. These things thrive because right now, attention is cheap and it’s fragmented in a million different ways, and we’re exhausted. But my fear, obviously, and I think the danger of AI slop is when attention collapses, so does wisdom, so does memory, and so does meaning, and that’s a scary place for humanity.”