With the rapid expansion of AI, the technology is popping up across a wide range of industries, from health care to publishing.
One of the technology’s most significant applications is in law enforcement. More and more police departments in the U.S. — including in Phoenix, Gilbert, Scottsdale and Tempe — are using an AI tool called Draft One, created by Scottsdale’s own Axon, to draft incident reports.
Officers feed body cam footage into Draft One, which then uses a customized ChatGPT bot to analyze the video and generate a report.
Axon markets the tool as a way for departments to run more efficiently, but a recent Mother Jones investigation revealed significant issues with the way Draft One is being used.
Tekendra Parmar wrote the piece for Mother Jones, and he joined The Show to talk more about it.
Full conversation
TEKENDRA PARMAR: One of the major concerns about this piece of software is that ChatGPT is known to do something called hallucination. This is when the chatbot makes up something that doesn't exist. But it is also known to be biased against certain communities, creating stereotypical images and characterizing certain communities through negative stereotypes, all of which the ACLU and the Electronic Frontier Foundation worry can seep into these ChatGPT-generated police reports.
SAM DINGMAN: Axon has built in some fairly complicated safeguards to try to force a human police officer to review the report generated by Draft One, to prevent it from just being sent down the line with these errors baked in. What are those safeguards?
PARMAR: So there are a couple of things. One of them is a minimal level of editing that officers are required to do before a police report is shipped off into the cloud and becomes a finished report. They have another system where they intentionally insert obvious errors. That's to make sure that officers are actually looking through.
It's like, "and then an octopus appeared out of nowhere." Something fantastical and silly that an officer would have to edit out of their police report, thus ensuring that they're actually reading the police reports.
And the last thing that Axon says it does is include a header and footer acknowledging that the police report was written by AI.
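Axon has not published how this canary-error safeguard actually works under the hood. As a purely illustrative sketch of the mechanism Parmar describes — with all function names and sentences here hypothetical — a check like this might look something like the following in Python:

    import secrets

    # Purely illustrative: Axon's real implementation is not public,
    # and these names (insert_canary, canary_removed) are invented.
    CANARY_SENTENCES = [
        "And then an octopus appeared out of nowhere.",
        "A unicorn was seen directing traffic at the scene.",
    ]

    def insert_canary(draft: str) -> tuple[str, str]:
        # Append an obviously false sentence the officer must delete.
        canary = secrets.choice(CANARY_SENTENCES)
        return draft + " " + canary, canary

    def canary_removed(edited: str, canary: str) -> bool:
        # Submission is blocked until the canary sentence is gone,
        # which at least proves the officer touched that passage.
        return canary not in edited

Note that deleting the canary proves only that one sentence was found and removed, which is part of why, as the conversation goes on to explore, critics question how much these safeguards accomplish.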
DINGMAN: That first safeguard that you referenced, the idea that the report basically cannot be forwarded up the chain of command without some kind of human review, how does Draft One verify that a human reviewed the AI-generated report?
PARMAR: The easiest example of this is in something like Google Docs. You can see when something has been changed, right? You can log a change to the system. So if Draft One generates a report and you rewrite a segment of it, that's a minimal level of editing that would then allow that report to be shipped off, reviewed and documented as a piece of evidence.
DINGMAN: So the mere fact that somebody made some edits to a document, that's not the same thing as somebody reviewing the document, right? Like, if I'm understanding correctly, it would be very easy for me as an officer to go into an AI-generated police report, delete one word, or a sentence or two, without reading the whole thing, and then indicate that I have, quote unquote, "reviewed" the document and send it along.
PARMAR: Right. I mean, that's definitely a concern. And the way that Axon has it in their system is that their minimum change threshold varies from 10% to 40%, in 10% increments. So 10, 20, 30, 40. But that doesn't change what you just said.
One of the technologists I talked to described this type of user interface, this sort of lip service to ensuring human intervention in a police report, as ethics washing. So that might be a framework to think about it through.
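Axon has not said how that percentage is actually measured. As a minimal sketch, assuming the threshold compares the officer's edited text against the AI draft by textual similarity — an assumption, not the company's documented metric — the check might look like:

    import difflib

    # Minimal sketch, assuming the 10%-40% threshold is measured as the
    # share of text that differs between the AI draft and the edited
    # report. Axon has not published its actual metric; this is a guess.
    def percent_changed(draft: str, edited: str) -> float:
        ratio = difflib.SequenceMatcher(None, draft, edited).ratio()
        return (1.0 - ratio) * 100.0

    def meets_threshold(draft: str, edited: str, threshold_pct: int = 10) -> bool:
        # threshold_pct is configurable in 10-point steps (10, 20, 30, 40),
        # per Parmar's description of the Draft One setting.
        return percent_changed(draft, edited) >= threshold_pct

As Dingman's question above suggests, clearing a threshold like this shows only that text changed, not that anyone actually read the report.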
DINGMAN: Yeah, well, I mean it's making me think about, you know, when I was a kid and wanted to look at the website of like a beer company and I wasn't 21 yet and a little screen popped up when I went to the website and it said, are you 21? And I was like, I sure am. And then I could look at the website.
PARMAR: It's exactly that.
DINGMAN: And can I ask you, when it comes to the automatic insertion of errors into the report, that octopus example you gave, is that the type of error that it would insert?
PARMAR: Very similar. In the Axon promotional material, they had examples like that: fantastical animals showing up and doing silly things. I can see from a user design perspective why having something so far out would make sense. If you're trying to ensure an officer deletes a certain bit of text while not deleting others, you want to make sure it stands out. But again, the question of utility, I think it's a good one.
DINGMAN: Well, speaking of utility and usage, that brings us to the next very interesting part of your reporting on this, which is that you found that a number of the police departments that are using Draft One in their police work have turned off these safeguards.
PARMAR: Yes, I sent FOIA requests to 20-something departments that I knew were using this software. I got back about seven, from various parts of the country. What I noticed was that two departments had wholesale turned off the ability to check whether a report was generated by AI or not. Those were Fort Collins, Colorado, and Lafayette, Indiana.
In the case of Lafayette, Indiana, what I also found was that the captain of that department had told Axon representatives that reports generated by Draft One were being used in plea deals.
When I then asked for those plea deals, the department couldn't find the reports themselves, because they did not have the rider saying this report was generated by Axon Draft One.
DINGMAN: Oh, that is unreal. Let me just make sure I'm understanding that particular pretzel that you just described.
So in Lafayette, Indiana, there are AI-generated police reports that were used in plea deals that weren't run through the security filters that are built into Draft One. One of those security filters is a header and footer that indicates that the police report was generated with AI. Which means that, because they had turned that feature off, when you asked them to show you the plea deals, they couldn't find them, because they themselves can't tell which police reports were generated with AI.
PARMAR: You're absolutely right. That's exactly what I found out. But let's look at this from two different perspectives. The first perspective is the person interacting with the criminal justice system, taking a plea deal. If I were that person, I would want to know whether or not a report that was pushing me to take a deal like that was generated with AI.
But from the officer's perspective, one of the things that a defense attorney told me is that if Axon itself is admitting that its skinned version of ChatGPT is prone to bias and hallucinations, and these officers are signing off on these AI-generated reports, then if an instance of hallucination, misinformation or missing context makes it into one of those reports that is then taken to trial, it would be very hard for an officer to stand by that report without perjuring themselves.
DINGMAN: It'll be very interesting to keep an eye on.