An Arizona law regulating AI was crafted with the use of AI — which may be a first

When state Rep. Alexander Kolodin (R-Scottsdale) began to draft a bill to regulate artificial intelligence, he consulted what he calls a “subject matter expert” — ChatGPT.

According to MultiState, a government relations company that tracks AI-related legislation, this is a first: a bill to regulate AI was drafted with the use of AI and signed into law.

Specifically, Kolodin’s bill, HB 2394, deals with deepfakes — digital impersonations. 

He said he used ChatGPT to write the definition of a deepfake and emphasized the language was reviewed by several human lawyers.

“It’s not actually that different than the status quo with most lawmakers relying on leg counsel to draft their bills. And that does not usually end well,” Kolodin said on social media.

The bill gives candidates for public office who are targeted by deepfakes the right to seek injunctive relief in court.

Kolodin says it’s a way for candidates to make it clear that an AI deepfake isn’t really them.

“It's gotten to the point where it creates … a genuine inability of folks to tell whether that's really the candidate speaking,” he said.

Kolodin said the legislation is necessary to protect against what he calls “serious social destabilization.”

He pointed to a poor-quality deepfake, released in April, of President Joe Biden issuing a call to arms.

“Joe Biden is behind the podium and he says, ‘due to the situation in the Ukraine, I am invoking the provisions of the Selective Service Act,’ which means I'm reinstituting the draft,” Kolodin said. 

According to Newsweek, the fake video was shared roughly 2.5 million times as of mid-April.

Kolodin says it’s inevitable that a more convincing imitation is coming.

“Even though it was such a poor quality deepfake, at least some people online took it seriously. Had the quality of that deepfake been just slightly better, there would have been blood in the streets. I would have grabbed my rifle, and I think a lot of people would have,” Kolodin said.

In one case, AI-generated audio was good enough to fool Kolodin.

He says he was listening to an episode of the Economist podcast “Checks and Balances” before the session started. After telling a story, the host announced that the voice listeners had heard wasn’t his, but was artificially generated.

“I just sort of sat both upright and went, ‘Oh my God, the technology is that good,’ because I've been listening to this podcast forever. I know what the guy's voice sounds like, and you could not tell the difference,” Kolodin said.

Arizona isn’t the only state where lawmakers are experimenting with using and regulating AI. 

According to MultiState’s tracking, roughly 600 AI-related bills were introduced across the country this year as the technology has improved.

Roughly half of them focused on deepfakes.

Kolodin is not the first lawmaker to use AI, but he may be the first to get a bill signed into law that was both written with the help of AI and aims to regulate it.

A bill introduced in the Massachusetts Legislature last year was also drafted with the help of ChatGPT and aimed to regulate AI, but the bill hasn’t been signed into law.

California legislators adopted a resolution drafted with AI last August, but it does not regulate AI and serves essentially as a statement.

Texas lawmakers used AI to write a report in May, and in New York, a bill introduced last year (which didn’t pass) includes a disclosure that AI helped draft it.

Bill Kramer, vice president and counsel for MultiState, launched the company’s policy-tracking project on AI. He said most of the legislation introduced so far that was crafted with AI has been a “gimmick.”

“It's mainly more for, at this point, visibility and a couple of friendly press reports, I think, from the state legislative perspective,” Kramer said.

Kolodin’s bill could have a real impact on the use of AI, though it doesn’t include any civil or criminal penalties for someone who makes a deepfake of a candidate running for office.

Under the legislation, a deepfake of a candidate is still lawful if it discloses that the impersonation is artificially generated.

As for whether it’s wise to use AI to help write a bill regulating AI, Democratic Gov. Katie Hobbs wouldn’t say.

But a spokesperson for Hobbs said the governor didn’t know AI wrote any part of Kolodin’s bill when she signed it.

Arizona State University professor Andrew Maynard says it isn’t necessarily a good or bad thing.

Maynard is a member of the school’s Faculty Ethics Committee on AI Technology. 

In terms of ethics, he says it all comes down to the user’s intent and whether they take responsibility for their AI use.

“It feels quite ironic apart from the fact that we’re looking at very different aspects of artificial intelligence here,” Maynard said. “And one of the challenges is that AI is general purpose technology, which can be used in many many different ways, so this is not one-size-fits-all.”

According to Kramer and Maynard, it’s likely that AI is already commonly used to write legislation.

There are more than 7,500 state lawmakers across the country, and they are often responsible for drafting their own bill language.

“We’re absolutely seeing this in every quarter; different AI tools being used to make everyday tasks faster and easier. So, of course in policy circles, when it comes to both sort of drafting out policy or even sort of brainstorming the original ideas, it’s almost inconceivable that people won't be using AI, whether they claim to be using it or not,” Maynard said.

Whether any other Arizona state policy is being crafted with the help of AI is unknown.

Camryn Sanchez is a field correspondent at KJZZ covering everything to do with state politics.