As Congress lags, states have taken the lead in regulating the emerging AI industry

MARY LOUISE KELLY, HOST:

Congress has been slow to regulate the emerging industry of artificial intelligence, but turns out states have been plowing ahead. State lawmakers worry about things like the spread of fake AI images of people without their consent. They are proposing and passing laws dealing with AI in elections, in health care and beyond. NPR's Ryland Barton covers state trends and has been keeping an eye on what is going on across the country. He joins me now from Louisville. Hey there.

RYLAND BARTON, BYLINE: Hey, Mary Louise.

KELLY: So this is fascinating, actually, that it's at the state level, not Congress, that we see lawmakers taking the first cracks at trying to regulate AI. Why?

BARTON: Yeah. So the federal government is definitely interested in AI. Congress has held committee hearings, and a bipartisan group of lawmakers created a kind of wish list last year of what they want an AI framework to look like. But Congress is slow to act on just about anything these days. That's where the states come in, and they can sometimes take action a little bit faster. One of the biggest areas of interest is these nonconsensual deepfakes of people, which raised even more alarms after AI-generated nude images of Taylor Swift spread throughout the internet last month. These so-called nudeified (ph) images are happening to noncelebrities, too, and advocates say many current laws don't adequately protect people. Here's Courtney Curtis with the Indiana Prosecuting Attorneys Council, speaking to a legislative committee earlier this year about weaknesses in the law.

(SOUNDBITE OF ARCHIVED RECORDING)

COURTNEY CURTIS: So because of the way it's currently written, if you just take someone's image off of their social media and create either a nudeified or deepfake image, we're just not currently covering it. And obviously this is a problem that is of increasing relevance.

BARTON: So some states have already taken a swing at it. California and Illinois passed laws last year allowing people to sue those who create images using their likenesses. Texas and Minnesota make it a crime punishable by fines and prison time. States are starting to get ideas from each other, but at this point, there's still this patchwork of how they're all dealing with it.

KELLY: Tell me more about how this works for people in the public eye, like how AI is being used right now to manipulate voices, images of political figures. How are states thinking about that?

BARTON: Yeah. So this is the area that's already had the most movement in legislatures. A lot of that is because the presidential election is looming this year, and there are a lot of worries about how these politically charged deepfakes are spreading throughout the country. Last year, California, Texas and Washington all passed laws either totally banning artificially manipulated sound or images in campaign materials or requiring that their use be disclosed. But they do it in different ways. California's law doesn't explicitly mention AI. Instead, it restricts deceptive media within 60 days of an election and allows those harmed to seek civil damages. Texas went a step further and created a criminal penalty, though it's just a misdemeanor, and that applies to deepfakes targeting candidates.

KELLY: Well, and I'm thinking this has been in the headlines just in recent days because of that robocall that went out. This was mimicking President Biden's voice during the New Hampshire primary and had him, allegedly him, discouraging voters from coming to the polls.

BARTON: Right. And so it's still unclear if that voice was generated by AI, but that incident prompted at least a dozen more states to start considering bans on using AI-generated sound and images in political ads. Just last week, the feds did take some action. It wasn't in Congress, but the Federal Communications Commission can now fine companies that use AI-generated voices in robocalls. The ACLU, though, has expressed concern about regulating AI-generated political content. They say the rules run the risk of infringing on satire, parody and other content protected as free speech.

KELLY: Such an interesting balance that we are trying to strike with all this. I mean, when we're talking about AI, just big picture, it's not just deepfakes and that type of thing that's the concern. What else is out there that states are working on?

BARTON: So one big worry is what happens when AI is used to make big, important decisions, like who a bank gives a loan to, who gets priority for medical care, who can get insurance. AI can be incredibly helpful in speeding up those decisions and crunching the numbers, but there's also evidence that algorithms discriminate against people of color in home lending, hiring and the criminal justice system. So Illinois, California, Vermont and Virginia have all proposed bills targeting AI-based discrimination, and we expect other states to follow as they look for strategies to regulate the emerging technology.

KELLY: NPR's Ryland Barton reporting from Louisville. Thank you so much.

BARTON: Thanks, Mary Louise.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
