Georgia lawmakers are seeking to criminalize AI-generated child sexual abuse material and close gaps in state law.
Senate Bill 9 aims to criminalize the distribution of sexually explicit, AI-generated content involving children. The bill, sponsored by state Sen. John Albers, R-Roswell — who chaired a Senate study committee on artificial intelligence — would punish anyone found possessing or distributing such deepfakes with up to 15 years in prison. The children depicted would not have to exist in real life.
“Ever since the release of AI models that can generate images, we’ve seen an explosion in the generation of sexually explicit images, and a not-insignificant portion of those are images depicting children,” said Kate Ruane, the director of the left-leaning policy nonprofit Center for Democracy & Technology’s Free Expression Project.
Deepfakes — AI-generated videos, images or audio that realistically alter a person’s appearance or voice — have been used to fabricate convincing depictions of real people. In some instances, bad actors have used them to create and circulate false sexually explicit content online with the intention of damaging someone’s reputation, which is illegal in Georgia.
But what isn’t clear in state law is whether that AI-generated child sexual abuse material is illegal if the depicted people only appear to be children.
Albers told The Atlanta Journal-Constitution the law would provide a clear legal definition of AI and enhance the penalties for crimes committed using the technology. Under the legislation, anyone using AI to help commit other misdemeanors or felonies would also face additional punishment.
SB 9 comes as a growing number of states grapple with how to regulate the emerging technology, producing a patchwork of state laws. California passed a similar measure last year banning sexually explicit deepfakes of children, with the bill drawing broad bipartisan support.
Ruane said people using AI for nefarious purposes will often go on social media to find images of children, load the child’s likeness into an AI generator to create an image in a sexually explicit context and then circulate it across the internet.
These fabricated images can have a significant impact on the child’s well-being and reputation, Ruane said.
However, some of these AI uses are protected by the First Amendment and cannot be outlawed, which has opened the door to other legal questions, Ruane said. In Ashcroft v. Free Speech Coalition (2002), the U.S. Supreme Court struck down a federal ban on computer-generated sexual images that appear to depict minors but were made without real children, holding that the prohibition violated the First Amendment.
“I think a lot of states wrote their child sexual abuse material laws to fit into what the Supreme Court said they could regulate versus what they couldn’t,” Ruane said. “And now, we are experiencing a change in technology that is enabling the creation of child sexual abuse images at a tremendous scale.”
For Ruane, the next question at the federal level is, “To what degree does the First Amendment apply to images depicting real children engaged in sexual conduct, even if the conduct they’re engaged in isn’t real?”
This isn’t the first time state legislators have tried to outlaw deepfakes.
Last year, Georgia lawmakers sought to limit the use of AI-generated deepfakes in political campaign ads by making it a felony to broadcast or publish deceptive information intended to influence a candidate’s chances of being elected. Supporters of the bill said it would protect voters from deceptive ads, while opponents said it could limit free speech.
The measure died in the Senate.
The Georgia Senate Study Committee on Artificial Intelligence made recommendations ahead of the legislative session after hearing from experts about the rapidly advancing technology. Among the recommendations were suggestions to adopt data privacy and deepfake laws.
Another sponsor of SB 9, Sen. Sheikh Rahman, D-Lawrenceville, said that guardrails are essential to protect children from dangerous uses of the emerging technology.
Albers said he intends to file several other AI-related measures this session.