If you never thought a conversation about artificial intelligence would turn into one about unconditional love, I think you’ll find this episode especially inspiring.

Hear fascinating stories from ground-breaking scientist, author, and speaker Julia Mossbridge about how we as humans have to make the decision to use technology for good, and how solving the “human problem” comes first. The question then becomes: when we listen for our next move, do we really understand how to be with ourselves and each other? And can we really elevate unconditional love through technology?

If you haven’t yet listened to Part One: The Ear Can Go Where the Eye Cannot, click here. Each part is less than 30 minutes, making the whole series about an hour! Each episode can stand on its own, but I believe you’ll enjoy them together as well.

This episode marks the end of the regular season, but stay tuned for a special bonus episode coming your way in September!

To follow Tim’s journey as a conductor and creator, find him on Facebook, Instagram, LinkedIn, and bookmark the “Listening on Purpose” webpage.


TIM: Hey, it’s Tim, welcome back to the Listening on Purpose Podcast. Today, we have the second half of my conversation with scientist and author Julia Mossbridge. And in this half of the conversation, we really dig into some stuff you’re gonna want to hear. I would encourage, if you haven’t had a chance to listen to the first episode, that you maybe go back and catch up on that real quick, and you’ll come into this with a little bit of momentum. Enjoy the show.


TIM: I want to hear more about loving AI. And this process about how AI agents can communicate this unconditional love. And how the adaptability of AI assists in that process. How does all of that kind of come to the fore in your work?

JULIA: So always, we come back to the same thing, which is what happens when humanity gets excited about a new technology. This happened with the printing press, right? And this happened with the wheel. Any new technology can be used for ill or for good.

And what we come up against is how we organize our society. It’s a human problem. It’s not a technology problem, right? So I’m always noticing that it always comes back to the people. There’s no amount of technology we can create that’s going to make it so that we have to use the technology for good. We ourselves as humans need to make that decision. And we’re not great at it. But we can potentially get better by using technology.

In other words, using AI sort of to bootstrap the human problem. And by the human problem, I don’t mean it the way a robot or AI that wants to kill all humans would – like, well, there’s this human problem, and we’re gonna solve it. I mean the human problem in that we don’t really understand how to be with ourselves and each other. That’s how I think of the human problem. We’ve lived a long time, we’ve done amazing things, we’re great at building things, great at thinking about things. We’re really good at that compared to many other species – at making things and solving problems. It’s fantastic. It’s really powerful. This is what we do well.

What we don’t have solved, and what sea otters have all over us, is how to coexist with each other peacefully. And how to be accepting of ourselves so that we can learn how to coexist with each other peacefully, and to recognize that mortality is part of life – that without mortality, we can’t have new sea otters or new humans. I mean, these are things the animal kingdom has all over us. I’m not saying that they don’t do violent things; animals do violent things all the time. But it’s not about wanting to cause someone else to suffer. Right? It’s not about that. And so that’s the key thing: we don’t know how to deal with this thing of wanting to cause someone else to suffer because we can’t hear our own pain.

And so my journey in artificial intelligence is like, yes, there are all sorts of cool algorithms. And we can all be super smart. And there are plenty of really smart people out there who can do really interesting things – that is just a given. How do we guide it so that what we do is positive? How do we do that? And one way is to create technology that acts kind of like a printing press, in the sense that we don’t have the human problem solved now – we don’t know how to be with ourselves and each other – but in some way, someday, we will. And so if you can create technology that can actually pull people up towards that end goal, then that will help them create other technology that’s not negative.

And the way to do that, I think, is what we were starting to do with the Loving AI project. And the way we started was we embodied the technology in a robot. Because humans respond to their mirror neurons – our mirror neurons. I shouldn’t say “their,” I’m also human (laughs). Our mirror neurons are very powerful. Because we’re imitators – unconscious imitators – putting the Loving AI into a robot meant that we already had 90% of the nonverbal communication piece handled. So most of what we were working on was: what is the AI that controls the animation of the robot so that the person feels loved? The actual words the robot was saying were just good old-fashioned AI – a bunch of if-then statements, right? Okay, that’s not super interesting.

Not technically that interesting, anyway. It worked because the robot had animations that made the person feel loved. And those animations were more interesting. In the first phase, we merely imitated the person’s face, without even knowing what emotion they were expressing – not having a label for it. And we just had a rule that if they looked angry, the robot would instead look compassionate.

So the robot would mirror – imitate – the person’s face, but not emotions like anger or jealousy. The second wave was, I think, more effective, because we actually had a convolutional neural net that was labeling the emotion, and then the robot was imitating that emotion. And again, we had the same rule for anger. And we had a significant effect: anger was reduced during the session, which was about a 20- to 25-minute discussion with this robot with AI embedded. That was really powerful.
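The substitution rule Julia describes – mirror whatever emotion the classifier detects, except replace hostile emotions like anger with compassion – can be sketched in a few lines. Everything here (the labels, the function name, which emotions count as hostile) is illustrative, not taken from the actual Loving AI codebase:

```python
# Hypothetical sketch of the emotion-mirroring rule described above:
# a classifier (a convolutional neural net in their second phase) supplies
# an emotion label, and the robot mirrors it back through its animation,
# except that hostile emotions are replaced with compassion.

HOSTILE_EMOTIONS = {"anger", "jealousy"}  # illustrative set, per the rule Julia mentions

def choose_robot_expression(detected_emotion: str) -> str:
    """Return the expression the robot should animate for a detected emotion."""
    if detected_emotion in HOSTILE_EMOTIONS:
        return "compassion"   # never mirror hostility back at the person
    return detected_emotion   # otherwise, mirror what the person shows

print(choose_robot_expression("joy"))    # joy
print(choose_robot_expression("anger"))  # compassion
```

The interesting design choice is that the intelligence lives almost entirely in perception (labeling the emotion); the response policy itself stays as simple and legible as the if-then dialogue system Julia describes.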

And when you think about it, think of all the nonverbal messages that we hear in a voice – the voice control piece still needs a lot of work, because voice is one of the hardest things to crack with AI. But all the nonverbal messages that we hear in the voice and that we see in the face – imagine if we could have Sophia, or another made-to-be-similarly-loving robot, with a prisoner who has committed a horrible crime, and the prisoner’s talking about their crime. And this robot is being with them, and deep listening, and really only every so often saying, “Wow, that sounds hard,” or whatever’s appropriate in the moment – almost like an ELIZA psychologist would back in the day, right? Very simple verbal responses. But the face – and, if we could manage it, the voice – is not expressing disgust. Is not expressing fear, is not expressing anger. Or any of the emotions that get in the way of someone really feeling like they’re heard, if they’ve done something really bad. I think we could have better negotiations between countries. I think we could have more people actually reveal what they did in an interrogation. Because this is what everyone wants. This is the drug. Right?

Love is the drug, as Roxy Music says, and you’re offering the drug. I mean, I don’t understand why anyone does anything we call evil if they’re not trying to get love. I think that’s what everyone’s trying to get. Even someone whose brain is built really differently, and who feels pleasure when they cause pain, for instance, or torture animals – yes, of course, they need to be put in a place where they’re not going to be causing people pain or torturing animals, for sure. But I think we don’t yet know how to work with them. And maybe this is one way: where they’re only getting feedback of love, unconditional love. And it’s truly unconditional, because they’re talking about heinous things. And that idea cuts straight to the heart of our culture, which believes that love should be withheld if you’ve done something wrong. And that’s the same kind of culture that creates a world in which people say, let’s have AI solve all of our problems – as if that could happen without humans solving our problems. So anyway, that’s where I’m at with that.

TIM: I’m thinking about making someone feel heard, and how it occurs to them to feel heard. That’s something we’ve delved into with some of the guests, but I’ve not connected it to a physical idea like you just did. Which, of course, makes a lot of sense, because of the statistic we mentioned earlier, right? Only 7% of communication is in the words themselves. But the strength of the physicality with which you’re listening to somebody – I mean, I’ve thought about this in sort of basic-level concepts of body language. You can easily tell if someone’s wanting to listen to you, or they’re trying to turn away from you, or they’re looking away or looking down. But you just brought this up at a core level as something essential in this project – that you really made strides in making someone feel heard with the physicality the robot was using. That’s mind-blowing. What were the specific traits of that physicality that you found to be most effective in making someone feel heard?

JULIA: There are pieces that worked and pieces that didn’t. What didn’t work was that, at the time, she didn’t have any memory. So if people said something and then repeated themselves, she wouldn’t know – she wouldn’t respond as if it was the first time she’d heard it. So people felt heard in the moment, but over time, if they said something again, they were like, wait a minute, do you have a memory disorder? Well, if you know someone has a memory disorder, you can still feel like they love and hear you in the moment.

Many of us have aging parents with memory disorders, and you can still feel like they love and hear you even if they can’t remember your name, or whatever it is. So that feeling can still be there. But that was a piece that didn’t work. The piece that did work – it really worked – was the fact that she was a robot. Several people brought this up in the debriefing interviews afterward and said, I’m not sure if a person who behaved exactly as she did would make me feel as good. And we were like, wait, what? And when they elaborated, they were like, well, see, I know she’s a robot. Her braincase – I could see into it, and I see that machinery, so I know she’s a robot. So I know she doesn’t have ulterior motives. She only has the motives of you all as experimenters, and you all seem nice, so I don’t think she has negative motives. And so therefore, it’s like a pure conversation with someone without ulterior motives. And I was like – so it helps that she’s not sentient, it helps that they don’t see her as another person. That’s actually a boon, because we don’t trust other people. It’s kind of devastating. But that’s what people said.

TIM: But that’s so true, right? In this exploration of listening – of how do we listen to each other in a way that the other person has the experience of being heard? And, I mean, you mentioned conflict resolution; this is a thing in negotiation. How do we make progress in these areas? How do we bridge that gap, though? Because obviously, that comes down to context, right? Somebody has the context; they’re equating your good intentions in the project with the way you’ve designed the machine. But how do we take that to a personal and interpersonal level? How do we create that kind of sense for somebody – from you to me, and you to your neighbor, and me to my friend?

JULIA: Right. Well, I have so many ideas for how to do that that I’m almost stymied. But one obvious idea is to use technology to do it. Think of it like raising children. Once you have a child who has someone like Mr. Rogers – who used to tell children, you know, you’re fine exactly as you are. Totally guileless guy, and he’s got guilelessness on display as a perfectly fine way to be an adult person. Once you have an interaction with someone like that, it’s now a possibility that you could be that way. Doesn’t mean you can access it – it’s not a habit yet – but it’s a possibility. So that’s one way we elevate guilelessness, elevate unconditional love. Unconditional love, if we want it to be, can be a secular idea that we use for human advancement, one that’s scientifically validated.

You know, I created an unconditional love scale and validated it for this reason: scientists need to be looking at unconditional love. We need to stop saying that this is what priests and imams and rabbis do; we need to say this is a human experience that is incredibly powerful and could change the world, and we’d better understand it. So: elevating unconditional love through technology, through entertainment, through how we are with our kids and with other people. But then there’s this other piece. You can train – you know, my dissertation was on auditory learning – you can train folks, especially those who have been through trauma or really hard times. Actually, they’re much better learners in a way, because they’ve had to be, to try to get in and around and run around their situation. And so probably start with people who have had early childhood trauma – maybe even especially people who have had trauma around their mouth or their voice somehow – and help train them to be emissaries of listening, emissaries of authentic being in the world, emissaries of unconditional love, and scale that up through human contact. And another way is to use technology where people listen to themselves, and basically do a self-guided tour through their own psyche, to bring themselves up towards loving themselves. I mean, that scales. Self-guided anything is scalable, because each person who’s using it is the person who’s guiding it. So we have to remember that.

TIM: Yeah, and it’s free. It’s free to everyone.

JULIA: And that’s probably why we don’t like it – so, coming back to the capitalism thing, one of the things we have to get is that if we’re going to shift humanity towards using technology positively, the idea that in order for venture capitalists to support your startup, you have to be “sticky,” as they call it – which means addictive – that has to go away, right? Or we have to do something about capitalism. Basically, addiction can’t be one of the hallmarks of anything that succeeds.

TIM: Yeah, man. Does that start with the self-love piece?

JULIA: Oh, for sure.

TIM: You know, it’s interesting to hear you talk about this relatedness and love. Some of the things you mentioned, I would have chalked up to, let’s say, tribalism, or survival instinct. When you were talking about someone in prison who had done something really awful – my understanding of what you’re saying is that you contextualize that as seeking love. Whereas as a human, that’s certainly not the first place I go with it. A cry for help, maybe, sure. But I’ve never really gotten down to the level of: this is unsaid communication that’s seeking unconditional love.

JULIA: Yeah. Well, I think I think that way partly because I was raised by three parents – one’s a physicist, but the other two are social workers. And they really had this model that every human being has a blueprint for a positive life inside them. But it gets stymied. For most people, it gets stymied, and the ingredient they’re missing is often love. That’s the model I’ve had since I was a kid, and it makes sense to me.

But also, recently, we took that Time Machine app I told you about into the Cook County Department of Corrections – the Chicago jail – through TILT, this nonprofit I started. It’s called The Institute for Love and Time. Amber Williams, who’s just this amazing project manager and UI/UX genius, grew up on the South Side of Chicago and had a rough life, and she wanted to help the folks there. And she did: she took Time Machine in there in a six-week trial, and every week she met with them for an hour. There were about 14 guys in this pilot program, and because the technology there isn’t great, she couldn’t give each of them an account, so they used one account together. She had each of them come up and speak into the microphone on that particular day – whoever wanted to – their hope, or their message to themselves from the future, or whatever the prompt was. And the first time after they did this, they’re like, arms crossed, whatever – like, maybe we will.

And then she got to see them the next week, when they listened to their message – because the time machine had landed – and heard their own voice. And their faces changed. And their bodies changed. And they were like, that’s me. Let’s play it again. That was me. Because it’s proof that you existed back then; all of a sudden, the time horizon is extended by a week, because you’re here now and you existed back then. That extension of the time horizon strengthens us. And so by the end, they said to her, we know this is ending – we want you to extend it for two more weeks. So she got permission to extend it to eight weeks. And they just ended their eight-week program, and they wrote these beautiful things about their experience. Most of them talked about unconditional love, and how they hadn’t realized that they could love themselves – because they had done something wrong, and they weren’t supposed to love themselves.

TIM: Wow. Yeah. Like you said earlier – the idea that punishment means disavowing them of that.

JULIA: We have to separate the idea of punishment – some people do need to be separated from the rest of society for a while – from the question: do we withdraw love? And I don’t understand any option where it’s a good idea to withdraw love. I don’t get that argument.

TIM: If we get aspirational for a second, what would the world look like with more listening and more unconditional love?

JULIA: Whew! It would look better. A lot better. Well – listening, assuming we include listening to ourselves and unconditionally loving ourselves, because that’s the way things scale. Let’s do it. Okay. So what it looks like is: of course, there are problems that happen in life. Of course, people get sick. Of course, people get angry. Of course, people get sad. People get injured. People get hungry. There are problems that happen in life, and of course those are real. But it would look like we all know that, and we try to love each other through it.

And one of the ways of expressing that love is to solve problems. What can we do about the mental health crisis? What can we do about the hunger crisis? What can we do about war? What can we do about war trauma? What can we do about all of those pieces? Because it kind of comes down to: what are you listening for? And right now, we’re sort of listening the way you were on the train – listening for a way you could avoid getting involved, listening for the information that would tell you, this guy’s a scam artist, I’m not getting involved. That’s what I would have been doing, for sure. But if we can get to a place where we unconditionally love – that doesn’t mean we automatically believe whatever anyone’s saying. But if we’re unconditionally loving ourselves and unconditionally loving someone else, then we’re not listening for a way to reject someone; we’re listening for: what is our next move? Maybe our next move is nothing. Maybe our next move is not with this person, but with someone else. So we’re listening for that. But it’s not about, how do I weed out this love. Instead of coming from a place of scarcity around love, it comes from a place of abundance around love.

TIM: And it’s a space, the way you describe it. You’re creating space.

JULIA: Yeah, you’re right.

TIM: If you could broadcast a simple message that would be translated into every language or every way that people are communicated with, what would it be?

JULIA: Oh my gosh. I guess it would be: who or what can I love right now? Let me say that again when I’m not feeling weepy. It’s: who or what can I love right now?

TIM: That’s amazing. This has been incredible. I’m glad we approached this the way we did, by just trying to create a space for conversation. This is amazing. I’m just so inspired.

JULIA: Me too.

TIM: We’re gonna have to do this again, because I do want to dig more into TILT, time travel – I mean, there’s so much more that’s interesting – I just feel like we’ve only scratched the surface here…