Robots have been a part of our fiction and pop culture for decades. But as technology advances at a dramatic pace, what once seemed impossible, or at least decades away, is already here. So as robots take on a bigger role in our daily lives, and in some cases start to look like us, how do we keep all of this in perspective and maintain our own humanity?
STEVE GOLDSTEIN: Ed Finn is director of ASU's Center for Science and the Imagination, and is with me to talk about robots. So, Ed, why have robots appealed to us for so long and played such a part in fiction and pop culture and now increasingly in daily activities?
ED FINN: I think there are a few different taproots to our endless fascination with robots, and one of the simple ones is that we always want to make new friends, right? We want to create creatures, beings that can hang out with us. But of course, we want to be in charge. So there's always this interesting dynamic of parents and children or masters and slaves. I'm fascinated by how many of our robot stories are about creating these perfect servants or slaves that are going to do things for us and create a life of luxury. And we're always looking for ourselves in these other creatures. It's interesting how many of our robots are supposed to look like us, and sometimes we get worried because they look too much like us. That's because we're always trying to hold a mirror up to ourselves when we're talking about creating these other beings; we want to learn more about who we are. And maybe a final reason, I think, is that we always want to play God. You know, there's something really exciting and tantalizing about this idea that if only we could create a robot, we'd do so much better than we do in real life, so much better than we do with actual humans.
GOLDSTEIN: So one of the things that often comes up with the concept of robots, or any creation where, as you said, we're playing God, is how much of the actual human element of who we are goes into a robot: whether a robot can actually have feelings or develop closeness, that sort of thing. You mentioned making new friends. How does that play into how we define ourselves as human, and into whether a robot that we create or that becomes close to us can actually adopt more of those human qualities?
FINN: One of the funny things to me when we talk about this whole question of emotion in robots is how little we understand about our own emotions and cognition. The Romantic poet Samuel Taylor Coleridge said, "I cannot think without feeling or feel without thinking." And we're still arguing about that. We still don't know whether there is some sort of rational, objective consciousness that's divorced from our sense of feeling, or whether those things are bound up together. And so when we create robots, we project all these emotions onto them, because that's part of how our brains work. We know that we have these mirror neurons, and we're very interested in how other people feel about us and how people are getting along together. And so we anthropomorphize everything; I project feelings onto the toaster sometimes. Of course we do that with robots, but there's no reason we have to, right? We could be thinking about robots in a different way. We could come up with a different metaphor, a different frame for them, and maybe we would all get along better if that were the case. You know, if you think about your electric car not as a sentient being but more like a horse or a dog, something that has some kind of agency and some kind of intelligence but isn't going to have complex emotions, everybody might get along better. I think we are always looking for that emotional connection because we want to get those feelings back. We want reciprocated love. We want a friend who will like us back. And I think that in a lot of ways our obsession with robots is about the challenges of loneliness, and the fact that, as connected as we are to other people, you can never really bridge the gap between two minds. So this idea of creating a mind that we would understand from the ground up is really appealing.
GOLDSTEIN: Ed, how much does physical appearance play into this? Whether a robot is designed to look more human, or whether, frankly, it just looks like parts put together and isn't really supposed to look like much of anything.
FINN: I think we read a lot into physical appearance. So often, we're judge-a-book-by-its-cover people. If you put some cute eyes on a robot or a goofy smile, we'll automatically be motivated to like it more. If you make a robot that looks like a big fluffy seal that you can hug, we're going to like that robot more. And it's an interesting question, this idea of robots performing different emotions or performing a kind of cuteness. One of my favorite moments in this whole conversation was when people asked Alan Turing, the famous computer scientist, in the mid-20th century, how we would know if machines, if robots, become intelligent. His answer was the Turing test, which was really just a party game. He said, in effect, we don't know what intelligence is, so the way you can tell is if a robot can trick you into thinking that it's intelligent. You have a robot behind one screen and a human behind another screen, and you pass notes back and forth. If the robot successfully passes as a human because it writes things that seem human-like, that's intelligence, right? So intelligence is always about performing; it's always about pretending for one another. And so we like to see robots that perform and pretend in their physical appearance, too. I don't know if you saw that Boston Dynamics video of their robots dancing recently.
GOLDSTEIN: Oh, yeah, yeah.
FINN: Right? And it's so evocative. It pushes all of our emotional buttons to see those robots; they seem joyful, right? They're having fun. But all of that is coming from us. We're putting all of that emotional language onto the table, and maybe that's where we should keep it. Instead of trying to teach robots how to feel things, we should just accept that we need them to perform certain roles, and be more thoughtful about what roles we have them perform.
GOLDSTEIN: Ed, where is the dividing line between the robots people think are fun, like the dancing ones you just mentioned, and the ones people fear will be killer robots, the kind that won't just take our jobs but might take our lives?
FINN: I think one of the big problems we're facing with robotics, AI and algorithms in general is that we're really not good at understanding what's going on inside the black box. If you paint a cute face on the black box and say this is your little helper robot, or this is a smart speaker that's going to sit in your home so you can ask it what the weather's like outside instead of looking out the window for yourself, that's a different story from what the thing is actually doing. A lot of these systems really are doing various kinds of surveillance, and they're interested in marketing and selling you stuff. And I think it still takes a lot of human imagination and ingenuity to think outside the box and understand: sure, maybe you designed this system to solve problem X, but there are some people who are going to try to use it for Y, Z and, you know, giraffe, and you really need to plan for that, too.
GOLDSTEIN: Ed Finn is director of the Center for Science and the Imagination at ASU, among many other things. Ed, always good to talk with you. Thank you.
FINN: Thank you, Steve. This was fun.