Abstract

This paper examines the growing legal and constitutional challenges posed by AI-generated child sexual abuse material (CSAM) in the United States. Tracing the evolution of federal child pornography laws from the Protection of Children Against Sexual Exploitation Act of 1977 through the PROTECT Act and modern reporting statutes, the paper argues that existing legal frameworks were developed for an era preceding generative artificial intelligence and are ill-equipped to address fully synthetic yet hyper-realistic depictions of minors. Through analysis of key Supreme Court decisions, including Ashcroft v. Free Speech Coalition, New York v. Ferber, and Miller v. California, the paper explores the tension between First Amendment protections and the government’s interest in preventing child exploitation. It argues that although fully synthetic CSAM may not involve the direct abuse of a real child, its creation and distribution nonetheless perpetuate grooming behaviors, normalize exploitation, and create substantial societal harms. The paper further examines statutory gaps surrounding AI-generated imagery, the limited obligations currently imposed on AI developers and platforms, and the urgent need for clearer regulatory standards, mandatory safeguards, reporting obligations, and dataset oversight. Ultimately, the paper contends that meaningful reform must balance constitutional protections with proactive technological regulation to prevent the expansion of AI-driven child exploitation while ensuring that legal doctrine keeps pace with rapidly advancing artificial intelligence.
