YouTube’s artificial intelligence moderation system has begun removing videos that provide guidance on installing Windows 11 with local accounts or on unsupported hardware, labeling them as potentially dangerous.
On October 26, tech content creator Rich White, known as CyberCPU Tech, was among the first to highlight the issue. He reported that his instructional video on setting up Windows 11 25H2 with a local account had been taken down by YouTube. White expressed disbelief at the platform’s assertion that creating a local account could result in serious harm or even death, a claim he considers exaggerated.
After appealing the decision, White was met with a swift denial from YouTube within 10 to 20 minutes, leading him to suspect that a human reviewer was not involved in the process. The following day, another of his videos, which detailed installing Windows 11 25H2 on unsupported hardware, was also removed shortly after being uploaded, again under similar safety concerns.
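For context, the workarounds covered in videos like these are widely documented and rely on built-in Windows tools rather than third-party software. A rough sketch of the commonly described steps follows; exact commands vary by build, and Microsoft has blocked some of them (including the `BYPASSNRO` script) in recent Insider builds:

```
rem During Windows 11 Setup, press Shift+F10 to open a command prompt.

rem Local account on older builds (script removed in recent Insider builds):
OOBE\BYPASSNRO

rem Local account on newer builds (opens the local-account creation dialog):
start ms-cxh:localonly

rem Unsupported hardware: relax Setup's compatibility checks via registry:
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1
```

Nothing here modifies Windows itself beyond its own setup options, which is why creators argue the content is neither illegal nor dangerous.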
In a subsequent video, White stated, “The appeal was denied at 11:55, a full one minute after submitting it.” He raised concerns about the possibility of an automated system making these decisions without human oversight.
Other creators, including Britec09 and Hrutkay Mods, have reported similar experiences, saying their videos on Windows workarounds were also removed. They echoed White’s sentiment that AI moderation is suppressing legitimate technical instruction, even though these methods are neither illegal nor inherently dangerous.
White said he had not spoken to any human representative from Google or YouTube since the issues began. “It’s been all automated,” he noted. He speculated that Microsoft might have pressured YouTube to take down the videos, particularly since Microsoft recently closed the local account loophole in a new Insider build of Windows 11.
While acknowledging the timing, White does not firmly believe that Microsoft orchestrated the takedowns, stating that his comments stemmed from confusion and frustration rather than concrete evidence. He attributed the removals to the AI”s inability to accurately assess content and YouTube”s lack of resources for processing appeals.
The broader implications of this situation concern many content creators, who fear that such automated moderation could stifle free expression on the platform. White explained that creators are increasingly hesitant to publish more advanced tutorials, worrying that such content could lead to additional strikes against their channels.
“My fear is this could lead to many creators fearing to cover more moderate to advanced tutorials,” he stated, noting a decline in engagement as creators opt for safer, less controversial content.
Ultimately, the affected YouTubers are seeking clarity from YouTube regarding its moderation policies. White concluded, “We would just like YouTube to tell us what the issue is. If it’s just a mistake then fine, restore our videos and we’ll move on. If it’s a new policy on YouTube, then tell us where the line is and hopefully we can move forward. Operating blind isn’t going to work.”
This incident highlights the challenges of AI-driven moderation in online platforms, raising concerns about its impact on content creators and the potential chilling effect on free speech.
