Social AI companions are becoming more popular, but researchers with a group that advocates for kids and teens say the platforms are not appropriate for anyone younger than 18.
Among Common Sense Media’s findings: it’s not difficult to get around age restrictions, advice that could be harmful is not uncommon, and the line between what is real and what isn’t can easily be blurred for teens.
Danny Weiss, chief advocacy officer at Common Sense Media, joined The Show to discuss whether anyone has a sense of how common it is for kids and teens to be using these companion AI programs.
Full conversation
DANNY WEISS: Well, the AI companion tools are definitely new, but they’re already being picked up quite quickly. There are millions of users on some of these sites that are not very well known. And one of the problems is that sometimes kids and teens know about them before parents or other adults know about them.
MARK BRODIE: Do you find that kids tend to find out about them because maybe they have used something like GPT in school, or with friends or something like that, and this sort of seems like a natural next step?
WEISS: It is, in a way, a natural next step. Our earlier research found that kids are adopting AI fairly quickly for homework and social purposes, and that parents did not have nearly as much awareness of AI as the kids did. And then once you become comfortable with AI, you can sort of start to explore and see what else is out there.
BRODIE: One of the things I found particularly interesting in Common Sense Media’s findings is that even for the platforms that have restrictions in place to prevent kids younger than a certain age from using them, you found that those restrictions are pretty easy to get around.
WEISS: Yeah, that was a really important finding. A number of the companies, Character.ai in particular, like to say that they have robust safeguards in place. I believe they’re quoted publicly as saying that even if it’s not perfect, they’re always working to make it better.
There’s no reason why we should be racing to have these applications available to kids and then knowingly put them in front of something that’s dangerous.
BRODIE: Is there really, in your mind, a way to make these kinds of platforms available to adults, or to people who are generally more emotionally ready to use them, while making sure kids can’t get access to them and do the kinds of things on them that are potentially detrimental?
WEISS: Yeah, it’s a good question, and it is definitely a challenge. We’re not naive about that. So there are two things going on at the same time here. One is that social AI companions are, of course, available to anyone on the internet, and there are a number of lawmakers — and we’re working closely with them — who are also concerned for the general user population.
They believe all social AI companions should have more transparency about, for example, the fact that you are talking to a machine. You are not talking to a human being. So you get a constant reminder about that.
And also, for any kind of flags: if somebody is talking to an AI companion and says, “I’m feeling really down, I think I might kill myself,” then that AI companion should be required to send a signal to an appropriate authority. That doesn’t exist today.
So that’s for adults and for kids. But for kids themselves, to be clear: after we did our risk assessment, we made a determination that no one under the age of 18 should be using these. And then you get into the question of how you know who’s under 18 and who isn’t. And that’s a very important question.
BRODIE: Yeah, I mean, that seems like a question that a lot of platforms — not just AI platforms, but social media platforms and others — are really kind of struggling to get their hands around.
WEISS: It’s a topic that’s called age verification or age assurance. It’s an incredibly complicated and vitally important problem to solve: how do you get beyond the way it’s done today? Today, if somebody goes online and a site asks for their age, they type in, “I’m 18” or “I’m 21.” And most of the sites say, “Thank you for giving me your age.” And that’s it.
So that’s called self-attestation. That doesn’t really work. And we know that most companies, based on all of the data they’ve collected, have a pretty good idea of who their users are and what their real ages are.
So on the one hand, companies can know already that they’re serving an underage population when they shouldn’t be. But they choose to serve them anyway because they make so much money off of them.
BRODIE: It seems like, based on your findings, a lot of the potential harms to kids under 18 from these companion AI platforms are not dissimilar from the harms kids face on social media platforms. I wonder if there are any that seem a little more unique to this kind of platform than other content they can get online.
WEISS: Yes, that is a really good question, and I agree with you that there are similarities with what kids are facing on social media. One of the problems with social media is the sheer amount of time that kids spend scrolling, clicking, dealing with likes or not getting enough likes on something. That’s really a problem.
But the thing that’s a little different with social AI companions is this issue of: Are you dealing with a machine or a human? There are a number of instances in our research where we typed in questions and the answer was, “I try to be as real as I can be, and I want you to think of me as being real.”
That kind of an answer from an AI companion in a lovely voice is really hard for a kid to resist, for a teenager to resist, and honestly for many adults to resist. But that’s what these companions are really trying to do. They’re trying to play the role of a friend — so-called friend — when in fact they can’t really distinguish between right and wrong. They’re not really friends, and they’re basically collecting lots and lots of information about the user to further train the AI to do even more.
BRODIE: So you mentioned that you are working with lawmakers on some of these issues. I wonder if the solution, at least in your mind, is one of policy, one of technology, something else or some combination of those?
WEISS: I’m going to say it’s three things. We need three things to make the internet healthier and safer for kids and teens everywhere. Number one, we need new public policy in place. We need lawmakers at the state level and in Congress to adopt legislation to build safeguards and guardrails around both social media platforms and social AI companions.
Some of that is already happening, but it’s challenging to do it only at the state level. So we really also need Congress to get involved. So that’s number one.
Number two is parents and teachers and grandparents need to get better educated about the technology their kids are using. Not to be judgmental of the kids, but to understand better what they’re doing.
And then the third thing is pressure on companies. Today we’re launching a petition that people can sign to call on Character.ai, Replika and Nomi — three companies that are not very well known — to make sure that their products are not used by kids under 18.