Beverly Hills middle school expels 5 students after deepfake nude photos incident


The eighth graders were accused of using generative AI to create fake images of 16 classmates.
Beverly Vista Middle School, in Beverly Hills, Calif. (Jason Armond / Los Angeles Times via Getty Images)

The Beverly Hills Unified School District voted this week to confirm the expulsion of five middle school students who were accused last month of using generative AI to create and share fake nude images of their classmates, according to the Los Angeles Times and the school board’s meeting minutes.

The case became national news days after Beverly Vista Middle School officials began investigating the incident in February and the Beverly Hills Police Department launched its own criminal investigation, which is ongoing. No arrests have been made and no charges have been brought. 

The five students and their victims were in the eighth grade, according to the school district. Sixteen students were targeted, Superintendent Michael Bregy said in an email to the district community, which was obtained by NBC News. 

“This incident has spurred crucial discussions on the ethical use of technology, including AI, underscoring the importance of vigilant and informed engagement within digital environments,” Bregy wrote. “Furthermore, we recognize that kids are still learning and growing, and mistakes are part of this process. However, accountability is essential, and appropriate measures have been taken.”

The expulsions, which the school district reportedly approved on Wednesday, mark a turning point in how schools have publicly handled deepfake cases. The expelled students and their parents did not contest the district’s decision and will not be identified, according to the Los Angeles Times.

The Beverly Hills case followed a string of incidents around the world over the past year involving AI-generated fake nude images of school-age children. The number of cases has exploded as AI technology has reached mainstream audiences, and apps and programs that are specifically designed and advertised to “undress” photos and “swap” victims’ faces into sexually explicit content have proliferated. False and misleading AI-generated images, videos and audio clips are often referred to as “deepfakes.”

Today, it is faster, cheaper, and easier than ever to create sophisticated fake material. 

The same week that the Beverly Hills case became public, NBC News identified ads running on Facebook and Instagram throughout February for a deepfake app that “undressed” a photo of an underage teen celebrity. It is already illegal to produce, distribute, receive or possess computer-generated sexually explicit content that features the faces of identifiable children, but that hasn’t stopped such material from being posted online for decades.

Fake nude images and fake pornographic videos overwhelmingly victimize women and girls, and such material is easily searchable on major social media platforms and search engines.

