Earlier this month, a Phoenix courtroom heard from the victim of a road rage shooting. It happened during the sentencing hearing for the man who shot the 37-year-old. But the statement was not actually from the victim, who had died in the incident; it came from an AI version of him. Chris Pelkey’s sister says she wrote what she thought her brother would have said.
Arizona Republic editorial page editor Elvia Díaz and longtime columnist EJ Montini, who wrote about this, joined The Show to discuss.
Full conversation
MARK BRODIE: EJ, let me start with you. You’re not a big fan of this. Why do you think it was not a great precedent?
EJ MONTINI: I think it could be a lovely thing for people to do for a memorial service or a funeral or something like that. But I think when we’re talking about a court of law, what has to be taken for testimony in a court of law is something that someone actually said. And we know in this instance that it was his sister who wrote what this AI avatar of him said.
And as close as they may have been, and as well as she may know what he might have said, there’s always that word in there: might. And I think from that point of view, it really shouldn’t be in a court of law.
MARK BRODIE: Is there a difference or maybe would you draw a distinction if, for example, in his hospital room he had said these things and someone in the family had just written them down and then created this AI version as opposed to, as you say, sort of assuming what he might have said in this situation?
MONTINI: I suppose there’s a difference there, yeah. I would imagine that there will be people now — there probably already are people now — for instance, who are leaving what amounts to sort of like a last will and testament, a long message to whomever they leave behind, and may in fact, want to have their images in an avatar say these things to these very people. I can see that.
But I think that even in a situation where it’s a courtroom, the avatar isn’t the person. So that’s another thing that isn’t real. And I think in a courtroom, things that are being discussed, things that are being presented as evidence, things that are being talked about in terms of a sentencing, they really should be real, genuine, what the person actually says.
So I think that, from that point of view, both the words and the images are wrong for a courtroom.
BRODIE: Elvia, I’m curious to get your take on this. Do you agree with EJ here that if somebody is going to address the court, it should actually be them in their own words?
ELVIA DÍAZ: Yes, absolutely. And on the question you asked earlier: if the sister had taken down verbatim what he said, then that sister should be the one appearing in court and saying those things on behalf of her brother. Again, this is being used for sentencing in an actual trial, so there’s a huge difference between doing that on a personal level and doing it in court.
You know, the manipulation of it is what worries me the most. I think it was an ASU professor who was quoted everywhere, including here and in the column, talking about the potential for dishonesty and manipulation in the future.
The person is not alive. Most unfortunate. And yes, the deceased is a victim, but the family members in this case are also victims. And they have the responsibility and are entitled to speak in court. But they should do it themselves, not through a fake person. Because let’s call it what it is: he is not back to life. It should not have been done.
BRODIE: Elvia, when you talk about manipulation, are you mostly describing how a family member, even perhaps inadvertently, could, as EJ was referencing, assume what the deceased might have said? I’m also wondering if there’s a potential manipulation of a jury in a case like this, a sympathy factor in hearing from, you know, an AI-created version of somebody who’s no longer alive.
DÍAZ: You’re playing to the feelings of the jury, right? If it is a person to begin with. And again, there’s no way for a family member to know exactly what he would say now. Obviously if you are a relative, you are closer to that person. You know what he would have said, for instance.
But then again, just saying, “This is what I think he would have been feeling right now.” But yes, it could be strategic manipulation to appeal to the feelings of the jury and who knows what else.
BRODIE: So EJ, with the stipulation that none of the three of us are actually attorneys here, I’m curious to get your sense of whether this is something that you think the bar association or courts need to take up to try to set some ground rules for how AI can be used in these kinds of settings, or not?
MONTINI: Well, absolutely. And the Arizona Supreme Court has a committee on AI, which is clearly studying how it can be used in court and how it can best be regulated to make sure that everything is fair and honest. And I think that’s going to be something we’re going to have to deal with in the future, for sure.
It’s going to get more and more sophisticated, and it’s going to be more and more difficult for future generations to recognize the difference between the real and the manufactured. And I think that’s something you see in everything from video games to movies now, and you think, “Well, that’s just sort of entertaining.”
But when you recognize that it could also have very strong real-world consequences in a place like a courtroom, it’s not the easiest thing to try to sort out. And I think they definitely will, in the not-too-distant future, probably come up with some rules.
BRODIE: In a situation like this, though, I wonder what kind of effect you think having this AI-created character actually had. Because everyone in the courtroom knew that it wasn’t Mr. Pelkey, that he wasn’t alive, that he was not able to make this statement, and that somebody else had to have essentially put these words into the AI. Do you think it really had some kind of impact on how people felt about him or about the case?
MONTINI: Well, that we can’t know. But what you just said is really important. It’s a courtroom, and what was allowed to be offered up as testimony was something that someone did not say. That’s important. That’s significant. That should not occur in a courtroom, from my point of view.
And that’s what I think is the more important aspect of this. And we don’t know. It may not have had any effect on this particular case. But I think, as Elvia pointed out, you never can tell in the future: as this stuff gets better, and depending on what point in the trial something like this is introduced, it could have a very significant impact.
And if it leads to a result that is not quite judicious, that would be really unfortunate.
DÍAZ: You mentioned that at the beginning, everyone in the court knew that this was not real, that this was AI. Then why do it? Why go through the whole process of having the avatar if everyone knew this was not real? What was the point of it from the family’s perspective and also from the court?
I mean, I just don’t see it if everyone knew that it was fake.