Reddit, known as “the most human place on the internet,” faces challenges due to the rise of artificial intelligence-generated content. Moderators from various popular subreddits are beginning to recognize both the potential benefits and the significant drawbacks of incorporating AI-generated material into their discussions. Many moderators worry that the influx of such content may erode the authenticity and social value cherished by the community.
According to Travis Lloyd, a doctoral student in information science, moderators worry about three primary issues. "They were concerned about it on three levels: decreasing content quality, disrupting social dynamics, and being difficult to govern," Lloyd said. He is the lead author of the paper "There Has To Be a Lot That We're Missing: Moderating AI-Generated Content on Reddit," presented at the ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing, held October 18-22 in Bergen, Norway, where it received an honorable mention for best paper.
The study drew on interviews with moderators overseeing more than 100 subreddits, ranging in membership from a handful of users to over 32 million. Initial findings showed that most moderators perceive AI-generated content negatively. While some acknowledged its utility—for instance, as a translation tool—others were vehemently opposed. A moderator from the "Ask Historians" subreddit noted the potential value of AI for translating answers from non-English speakers, while a moderator from r/WritingPrompts declared, "Let's be absolutely clear: you are not allowed to use AI in this subreddit, you will be banned."
Quality of AI-generated content emerged as the foremost concern among moderators. One moderator pointed out, “AI content tries to meet the substance and depth of a typical post; however, there are frequent glaring errors in both style and content.” Issues of style, accuracy, and relevance to the topic were cited as common problems. Additionally, fears arose regarding how AI might undermine meaningful interactions, potentially leading to less personal engagement and strained community relationships.
Moderators also expressed anxiety that their already challenging responsibilities would be exacerbated by the growing prevalence of AI-generated content. "I would rate it as the most threatening concern … It's often hard to detect, and we do see it as very disruptive to the actual running of the site," a moderator from r/explainlikeimfive commented.
Mor Naaman, a senior author of the study and professor at Cornell Tech, emphasized the volunteers' role in maintaining the humanity of the platform. "It remains a huge question of how they will achieve that goal," he noted. "A lot of it will inevitably go to the moderators, who are in limited supply and are overburdened. Reddit, the research community, and other platforms need to tackle this challenge or these online communities will fail under the pressure of AI."
Lloyd concluded by highlighting that there is still a strong desire for human interaction among users. "This study showed us there is an appetite for human interaction, too. And as long as there is that desire, which I don't see going away, I think people will try to create these human-only spaces," he stated. This research was partially supported by funding from the National Science Foundation.
