
For the First Time Ever, a Murdered Man Spoke at His Killer's Sentencing



A man just made a victim impact statement before a judge in an Arizona court, possibly affecting the sentence of the defendant who shot and killed someone in a road rage incident in 2021. All pretty standard, nothing to see here. Except the man who made that impact statement was very, very dead. He was, in fact, the man who was shot and killed in 2021. An AI-generated likeness of him, created by the victim's sister, was given a script and put on the stand. It was apparently so effective that the judge in the case called it "moving." "I loved that AI," he said. And it may have moved him so much that he tacked extra time onto the sentence he handed out. This isn't even the only case of AI roaming around the courtroom; it's been popping up here and there lately. So in this video I'm going to look at what happened in Arizona, the questions it raises, and the effects it could have on us and on society.


I'm Kevin Lankes, but good luck telling whether I'm even alive. Maybe you're not alive. Maybe we're all dead. Maybe we're all cows.


Earlier this month, in a Maricopa County courtroom, the AI avatar of a deceased man named Christopher Pelkey gave an impact statement before Judge Todd Lang. This happened at the sentencing of Pelkey's killer, Gabriel Paul Horcasitas, who shot Pelkey while the two were stopped at a red light in Chandler, Arizona in 2021. The AI-generated digital Pelkey started off by explaining that it wasn't real, but even so, it went on to firmly establish itself as an honest representation of Pelkey, using his words and voicing his beliefs, and it was designed to speak in a very human, very endearing way. That's an issue because of how people respond to this kind of thing. Studies show we can't reliably tell the difference between AI-generated people and real flesh-and-blood humans: a University of Waterloo study from last year found that only 61% of participants could tell the difference, well below what the researchers expected. And that number's only going to keep falling as the technology gets better. Studies also find that people do feel empathy for robots and AI, and that the empathy grows the more realistic or human-like something is. Seeing a robot like R2-D2 get zapped and fall over makes people feel kinda bad for it, but watching the AI avatar of a murder victim give an impact statement is on a whole different level.


Pelkey's sister, Stacey Wales, has spoken to media outlets about how she created the AI avatar. She and her husband Tim work in tech, and they've previously created AI video avatars of former CEOs and founders and had those digital recreations speak at company conferences. Together with their business partner, they put the Pelkey bot together in just a few days, using a combination of tools along with clips and images of Pelkey, melding it all into a roughly representative digital homunculus. I've seen both a genuine video of Pelkey and the video of the avatar from the courtroom, and all I'll say is that the avatar appears a bit more intentionally curated, a bit more deliberately polished, than the man himself.


Wales told NPR, "I knew what he stood for and it was just very clear to me what he would say." Of course, if you're playing along at home, you may have spotted the flaw in that logic: by definition, there is no way to know what someone would say about their own murder, or about the person who murdered them. That's one of the main concerns surrounding this new trend, which some people are calling digital necromancy. Others group it with related tools under the label PCTs, or posthumous communication technologies, and I've seen at least one study refer to them as deathbots.


But choosing to resurrect someone digitally could go wrong in a number of ways. For one, you could be giving away access to their personal information, because all someone would need to do is chat with the avatar and ask it questions to unlock the details of that person's life. So identity theft is a real concern here. But so is misrepresenting someone's views and beliefs. Making an AI avatar and giving it a script is quite literally putting words in someone else's mouth.


Ultimately, ten people gave impact statements at the sentencing, and Robo-Pelkey was the last to speak. Judge Lang expressed his love for the AI as he handed down ten and a half years to Pelkey's killer, even though the prosecution had only asked for nine and a half. And the defense was apparently given no notice that AI would be used at all. The lawyer for Pelkey's killer said, "it appears that the judge gave some weight to the AI video and that is an issue that will likely be pursued on appeal." Yeah, no shit. I'm not a lawyer, but it does seem like the defense was just handed a pretty big reason to get this case reexamined.


One of the post-hoc reasons Wales cites for making the avatar is that it helped the family through the grieving process. She said it brought them closer to Pelkey and closer to each other, and allowed them to process their loss. That kind of justification reminds me of an area of ethics called consequentialism: the idea that the ends justify the means, or that an act should be judged by its results rather than on its own merits. And when we look at the research, the data shows that only a small percentage of people believe digital necromancy could bring any positive benefit to the grieving process. Most survey respondents believe AI avatars of deceased loved ones would only make grieving worse, and could serve to continually re-inflict trauma. Only 3% of people say they would pay for an AI avatar of a loved one if one were available and affordable.


One study from 2021 that looked at VR avatars of the deceased warned that the technology could even help bring on Prolonged Grief Disorder, a clinical diagnosis with a whole tangle of complicated symptoms, for situations where grief just doesn't go away.


Now, there is some research suggesting that VR and AI avatars can help people process grief. But they need to be administered in a regulated, clinical setting by a professional. You can't just rig something up in your backyard, summon the deceased, and expect your campfire techno-seance to turn out well for you.


One thing about the media coverage that's driving me crazy is that every outlet is going on and on about how the Pelkey bot was awash in a message of forgiveness. The judge talked about that forgiveness, and how he felt it, and everyone keeps picking that up and keeping the narrative alive.


But there are major logical flaws in that reporting. One is that the AI avatar never, ever says the words "I forgive you." It does not forgive Pelkey's killer. The closest it gets is saying that it believes in forgiveness, and in God who forgives. The other massive thing to point out, amid all this talk about the message of forgiveness the family apparently wanted the AI to embody, is that the family asked for the maximum sentence. In light of that, the cynic in me finds it very hard to believe the AI avatar was anything more than a tactic to secure that outcome.


Of course, I'm not claiming that's the case, and it's hard to say with any certainty what anyone involved really thinks, least of all Pelkey himself. One thing we can talk about, though, is how the media needs to be far more critical in general, but especially in these kinds of human interest pieces. The angle of approach here was the fluffy, feel-good lifestyle piece, and when you write those, a lot of your critical-thinking brain cells just turn off. I've written those pieces; I get it. You want to highlight the good, the cathartic, the hope in a dark void of awfulness. But when one of these stories intersects with a situation that could have massive implications for the future of not only the legal system but many other corners of society, I think we need to turn those brain cells back on and do some genuine analysis. That's all I'm going to say about that. A lot of people probably already hate me for saying some of these things, so I'm going to stop while I'm behind.


What do you think? Is AI in the courtroom justice served, or big tech gone too far? Let me know in the comments what you think about AI in the courtroom, about the avatar used in the sentencing of Pelkey's killer, or about any of the weird, new, morally ambiguous uses of AI outside the legal system. I'm going to cover more of that soon, so tell me what you'd be interested in learning about.


Also please like and subscribe so that I can bring you more of these in the future. All my relevant details are down in the digeridoodles, so find me online to argue with me about things or to buy me a coffee on Patreon. I’ll see you in the next one. In the meantime, let’s do some f*cking good wherever we personally can in the world.




Episode sources: