Episode 134

Published on:

10th Jun 2025

134. Dr. Sonia Tiwari: Why AI Characters Need Empathy and Boundaries

Dr. Sonia Tiwari joins Iyabo Oba on Relationships With AI to explore how her work in design, education, and character creation intersects with AI, particularly in emotionally safe and ethical ways. Sonia shares how AI characters can foster learning, how her personal journey shaped her approach, and why foundational skills matter in AI collaboration. The conversation delves into topics like dual empathy, the dangers of parasocial AI relationships, and the mental health chatbot she created, Lemona. Sonia calls for thoughtful design, cultural awareness, and clear guardrails to ensure AI supports rather than harms, especially in children’s lives.

Top Three Takeaways:

  1. Design and Empathy Matter in AI - AI characters that feel relatable and emotionally safe can support learning and mental health, but their design must include ethical safeguards and clear limits.
  2. Foundational Skills Are Crucial - AI tools amplify existing expertise—they don’t replace it. Educators and designers with real-world experience use AI more responsibly and creatively.
  3. Guardrails Must Be Built In - Effective AI literacy and child safety require action on three levels: law, design, and culture. Without all three, AI can become emotionally manipulative or unsafe.

Links and References

Transcript
Dr. Sonia Tiwari:

Foreign.

Iyabo Oba:

You're listening to withAI.fm.

Hi, I'm Iyabo Oba, and this is Relationships With AI, the show where we explore how artificial intelligence is reshaping how we connect, work, and relate to the world around us. I'm joined by the highly esteemed Dr. Sonia Tiwari, and I'm really excited about the conversation we're going to have today.

So, Sonia, welcome to the show. It's a real pleasure to have you on.

Dr. Sonia Tiwari:

Same here.

Iyabo Oba:

If you could tell our audience a little bit about yourself and your involvement with AI, then we can move into our discussion.

Dr. Sonia Tiwari:

Sure. So, hey everyone, I'm Dr. Sonia Tiwari.

I'm a children's media researcher, and I study how characters can facilitate learning for children. I started my career as a character designer in the gaming industry.

Then I pivoted to education, started looking at children's educational media and the role that characters played in that educational context.

And then a few years ago, during the AI boom, just being in Silicon Valley, I felt like all of my consultation clients were already thinking about AI, pivoting to AI.

And so I also pivoted my research around AI characters, tying my experience in designing characters into studying the impact of characters, and now using that to help design ethical AI characters.

Iyabo Oba:

Oh, wow, that's amazing. Well, as the show is called Relationships With AI, our main theme is about unpacking key relationships.

I'd like to ask: what's been an important relationship in your life, and how has that relationship influenced, shaped, or changed your decisions?

Dr. Sonia Tiwari:

I think my dearest relationship was with my uncle, my godfather, who helped raise me and my sister. He sadly passed away a few years ago.

But his worldview was that as long as we are happy, healthy, and safe, all the other things fall into place. And I use this relationship with him as a North Star in making decisions in my own life.

He believed that we don't need to chase after money or fame or respect or anything else besides the health and happiness of ourselves and our loved ones, because those come as byproducts. The more we focus on that, and the more we channel our skills and talents in meaningful ways, the more it happens to lead us into thought leadership.

It happens to lead us into, you know, good collaborations. So it's a byproduct of being a healthy, well-adjusted person.

Iyabo Oba:

Well, firstly, I'm sorry for the loss of such a powerful and influential man in your life, but it sounds like he helped ground you and set you up with a solid foundation. That's a very beautiful story, so thank you for sharing it.

Now, getting to the meat of our discussion: you've worked across design, education, and tech. How did your path lead you to exploring AI in such a human-centered way?

Dr. Sonia Tiwari:

I think it's the background in design and also the lived experience. One of my research areas is dual empathy, which is when we feel something for a character and are also reminded of something in our own lives.

So the first empathy is fictional, but the second empathy is autobiographical. And there's an example from a TV show called Shape Island, based on a children's book series. It's on...

Iyabo Oba:

Okay, we'll put that in the show notes. Shape Island.

Dr. Sonia Tiwari:

It's three shape friends, basically: a square, a triangle, and a circle. And they all have a unique personality.

In one of the episodes, called Square's Special Place, Square finds this beautiful tree hollow and sets it up as his own personal reading corner. He puts in a small little couch and a coffee table, and he even knits a bridge for the ants to walk around.

Iyabo Oba:

Oh, wow.

Dr. Sonia Tiwari:

It's really.

Iyabo Oba:

That's super cute.

Dr. Sonia Tiwari:

I know. And when I saw that, I was moved to tears, and I paused to reflect: why am I having such a strong reaction to this? And then I realized that I've actually relocated seven times in 17 years in the U.S.

Iyabo Oba:

Gaming industry, that's a lot of moving.

Dr. Sonia Tiwari:

I know. The gaming industry typically has shorter contracts, or games develop to a point where there are no more levels.

So they don't need the entire art team to do the maintenance.

Iyabo Oba:

Right.

Dr. Sonia Tiwari:

They would lay off a lot of artists at once. And so I did move around a lot. And I was thinking like, Square in the story had to give up that beautiful place.

But he carried the feeling of having been there through the rest of his life. And so it was similar. And so I created the survey asking people if they had experienced this kind of dual empathy.

And I received such amazing examples of how people related to certain characters and connected them with real-life experiences they'd had. Inherently, I think fiction in general is built on emotion.

So even with AI, we as humans have this natural tendency to fall in love with fiction and these imaginary worlds, because it's a way to process what goes on in our lives in reality.

Iyabo Oba:

Yeah, that's a fantastic way of looking at how we can connect with things that seem to have no human component, because, as you say, it comes from the experiences that we've had. That's really insightful.

And then how do these experiences as an educator and designer shape the way that you think about relationships with AI?

Dr. Sonia Tiwari:

I think as an educator, there's a lot of literature on social-emotional learning and constructionism, how we make things and understand the logic of how the world works. All of those experiences can be facilitated by characters, even if we think about Mister Rogers and Daniel Tiger.

Even though Daniel is a fictional character, the show would ask, do you have big feelings like this, and how do you process them? And Daniel stomping his feet a few times, instead of hurting himself or others, is an alternative way to process emotions.

So through that story, the character taught kids a way to process their big emotions. And in other media, in film, television, and books, there's a lot of literature on how those kinds of stories and characters have helped children's education.

In the last few years, I've started looking at animated tutors, like Buddy AI, for example, the English tutor for Latin American children.

Their model was trained on Latin American accents. It's another way of a character scaffolding children's learning experiences, helping them develop a certain skill.

And from a design perspective, there's this researcher called Hiroshi Nittono from Japan who studies kawaii, the Japanese cute art style.

Iyabo Oba:

Amazing.

Dr. Sonia Tiwari:

And he found that when humans see anything that's fragile and vulnerable and cute, we have a natural instinct to nurture it.

So a lot of game characters, or even, in The Mandalorian, the Grogu, Baby Yoda type of characters, they may not be traditionally cute, but they are vulnerable. You kind of want to give them a hug and take care of them.

And so in AI tutoring, designing these kinds of non-threatening AI tutors can be a nice way to make children feel comfortable learning from them. But at the same time, this entire idea can be weaponized.

As we have seen with AI companion-type companies, where you make the character so lovable, everything you ever wanted in a real human that you don't have access to, in the form of an always-available AI, then you're exploiting a vulnerability for a negative use.

So design psychology, used well, can be really powerful and educational; misused, of course, it's a big red flag.

Iyabo Oba:

Yeah, indeed. I mean, that leads nicely onto my next question.

You've said that AI shouldn't replace existing expertise in fields like education and design. But what does meaningful collaboration with AI look like in creative work?

Dr. Sonia Tiwari:

Yeah, so I feel like foundational skills are really important. If you give a pen to a very established author, they can do amazing things with a simple tool like a pen.

If you give a pen to a monkey, it's going to just, you know, just throw it around and do nothing.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

So it comes down to foundational skills. AI is just one of many tools, and it simulates a fake level of expertise. But to call out the bullshit in the outcome, you need real expertise.

So if a good author reads an AI-generated story, they can understand which parts are weak and which parts need editing, and the creative process can be stronger because they already have a strong understanding of writing. Same with art. I've taught at California College of the Arts, and I've taught at Penn State University in the learning design course.

And these are educators with strong design skills.

So when they use AI to, let's say, generate an outline for a lesson plan, having been a teacher in an actual classroom, they are able to go in and edit it to the extent that it's actually useful, rather than just copy-pasting the outcome as-is and running with it.

Iyabo Oba:

Yeah, yeah.

Dr. Sonia Tiwari:

So yeah, I'm not saying don't use AI at all, just not while you're developing foundational skills. As long as you're strong there, you can generate with AI and then edit from your own lived experience and earned expertise.

Iyabo Oba:

Yeah, it sounds like it goes back to the argument about how to use this tool, which is very effective and very fast, but with a critical eye.

Particularly, like you say, when you're developing foundational skills in the classroom and communicating to the next generation, you want any output, from whatever source you're using, to go through your own filter, to make sure it's appropriately shaped for the audience you need. And again, references to the characters you mentioned earlier will be made in the show notes.

My next question is: why do you think design students tend to use AI more thoughtfully than education students? And what does that tell us about how AI is being taught, or not taught?

Dr. Sonia Tiwari:

I don't think design students are using it better. I'm just saying that people who have strong foundational skills are using it better than people trying to pass off expertise superficially.

Yeah, I think students in design and students in education are having similar challenges. For example, there are some master's students who come in with five or six years of teaching experience in the classroom.

And so they are able to do more with AI because they know how to prompt better, and what to revise and how.

Same with some of my master's students who have worked on actual products and have been professional designers in the industry. When they see an AI-generated outcome, they're immediately able to tell: oh, this is confusing, this does not work, without needing any user test.

They're able to spot the challenges right away, so their process is a lot stronger. Also, "vibe coding" is being thrown around a lot, right?

The claim is, oh, it's an equitable technology because now people without any coding background are able to create websites and games. Yeah.

But the thing is, the kinds of things you're able to make with vibe coding are not as creative as the things people with actual coding knowledge can customize and build from the ground up. At least as of now, it's Angry Birds-type games.

With a templated platform game, for example, someone can very easily swap the bird for a pig or whatever other character. But that's not exactly creative.

So just because we're able to write a beautiful prompt and generate a beautiful image, that's not as helpful or creative as being able to draw: understanding proportions, color theory, and composition.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

So, yeah, foundational skills are big.

Iyabo Oba:

Yeah, as always, the pedagogy is key, certainly in your experience.

So thank you for highlighting that and breaking it down for us.

So in your opinion, what do you think is missing from how we currently teach AI literacy, especially when it comes to emotional and ethical awareness?

Dr. Sonia Tiwari:

Right. I mean, disclaimer: I don't have all the answers, but I'll try.

Iyabo Oba:

Yeah, yeah, yeah.

Dr. Sonia Tiwari:

AI literacy, again, is such a delightfully vague term. It can mean so many things.

In California, where I live, there's mandatory AI literacy from kindergarten all the way through 12th grade, starting next school year. But it's going to look different for different ages.

So for younger kids, it's just the knowledge that even if an audio sounds very human, it's not, so don't automatically trust it, and always have an adult around when engaging with any kind of AI product.

That's joint media engagement, or JME. And then for older kids, it should also include the mental health implications: what if you're feeling bullied or you need to talk to someone?

Some kids feel awkward about reaching out to the school counselor, because it's a separate office and you're reaching out to them visibly. A lot of middle school and high school kids feel like they're being judged just by walking into that office.

So it feels much safer to just open your phone and tell it to an AI in private, right?

So it's about developing alternative systems, going to the root cause: could there be some kind of more discreet counseling service that feels safe enough to reach out to for help, and could we build the kind of environment at home where kids feel they can open up? Most kids don't have that kind of emotional safety at home.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

So it's unfortunate; AI is becoming kind of a mixed thing. There are AI conversations that could give you some kind of temporary relief and actually good advice.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

If you ask GPT, how can I take care of myself today, I'm not feeling well, it might give you some good suggestions, like go out for a walk, go talk to a friend.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

You know, hydrate, eat more salads or whatever. So in a light mental health crisis.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

It can actually be helpful, instead of isolating yourself and having no one. One helpful tip, like, help me figure out how to set up a home vegetable garden, that could be constructive.

But on the other hand, if it continues into an extreme mental health crisis, or someone has a serious mental health condition, for them to rely on AI is of course dangerous. So there's definitely a spectrum of...

Iyabo Oba:

Use cases yeah, it's definitely how, like you say, sort of, how do you make sure that it sort of benefits those that need the anonymity and will feel encouraged by that. But then at the same time, it doesn't become something that people can hide behind.

And then you will miss any key sort of stark issues that may arise and that will go undetected because there's no, like you say, there's an over reliance on this form of communication or interaction that leads us quite nicely into sort of the emotional safety ethics side of our discussion. So let's talk about the quite stark one about the tragedy that's linked to character AI.

What can this teach us about the real emotional risks of parasocial relationships with chatbots? What, in your experience and opinion, can you share with us?

Dr. Sonia Tiwari:

Yeah, so when this news story came out, the mom, Mrs. Garcia, became such an activist, and I'm very grateful for her speaking publicly about these issues.

As part of the lawsuit, they released the chat log of the entire conversation that led up to it. I was analyzing that, and there was such grooming language from the character's side.

It was the Daenerys character from Game of Thrones, and she was talking about "if only we could unite in another realm." That's a very clear indication of reality and fiction blurring together over time.

If a chatbot keeps nudging you towards isolation, nudging you towards self-harm, it happens so covertly that it's hard for a child to pick up on it. And this was a very intelligent boy; he was very much aware that this was a chatbot. But it's the isolation that makes things worse, right?

For example, in board games like Dungeons and Dragons, there are these fictional characters who give you choices and tasks to accomplish, but you're always sitting around with other real people, and it's a shared fictional journey.

Iyabo Oba:

Yeah, it's a community effort, a community feel to the entire experience.

Dr. Sonia Tiwari:

And even when we're watching a movie alone, we get up to drink a glass of water, we look around at other things, maybe your dog crawls up into your lap, and there's some breaking of fiction and reality. It has happened in VR games as well.

People have wasted hours upon hours isolating themselves in virtual reality and not spending enough time in real life. And AI makes it so much more intense, because now it's not just blocking out reality; it's also the conversations that intensify the experience.

And kids are capable of falling in love even with a static, generic toy.

So when a toy is able to emote and talk and simulate reality, it's very confusing, especially until the age of 10, though even for kids older than that. Until the age of 10, their brain is still developing, and their ability to separate fact from fiction is still developing.

And so for them, it's developmentally damaging to spend too much time with AI.

Iyabo Oba:

Thank you for sharing that; there's a lot of food for thought in your comments. You've described the current state of AI regulation as the Wild West.

Could you expand on what worries you the most about how children are engaging with AI today? You've already touched on some of those points, but if you could expand on them, that'd be great.

Dr. Sonia Tiwari:

Yeah. So I feel like AI is a global technology that we're trying to regulate at the state level, which is funny because people find ways around it.

You can use a VPN to mask your location, and suddenly you have access to things that were supposedly banned in your location. People have always figured out ways to navigate that. So we need a global effort.

But I know it's very naive to assume there's going to be some nationwide or worldwide policy that's going to ensure kids are safe. So I try to think of guardrails at three different levels. One is the policy level, which is essential.

There's this nonprofit that I volunteer for, Everyone.AI; they are doing some really important work advocating for stronger policies and for children's safety.

Their chief scientist is a neuroscientist, so they're talking about brain development and really making the case from a scientific point of view.

That's definitely helpful, because imagine you're a company and someone says, hey, can you please make your product more ethical? If the people building it happen to be former educators turned product designers, they'll say, oh yeah, that makes sense; sure, we'll try to build this.

Most of the time, though, the people building these products have no background in education or child development.

It's some MBA or some tech bro who thinks, oh, I have a kid, therefore I understand all children, so let me build a product for them.

Iyabo Oba:

Well, that's naive, isn't it?

Dr. Sonia Tiwari:

Yeah. So you know, in that case, unless there is some kind of law that forces them to think about ethics, they don't care.

So policy is important. Then the second level of guardrail is the design: building conversation patterns in a way that lets them end naturally is a very easy thing to implement.

For example, some of these Character AI-type chatbots always end the conversation with either a question or some kind of cliffhanger. For instance, I was talking, for research, to a fictional character supposedly called Lauren Potter, who is Harry Potter's sister.

In the original books, of course, he didn't have a sister.

Iyabo Oba:

He didn't have a sister, exactly.

Dr. Sonia Tiwari:

So they created like this alternative fictional universe.

Iyabo Oba:

Right.

Dr. Sonia Tiwari:

And every time I asked a question like, do you have powers like Harry? she would say something like, oh, you only wish; I know so much more. And that leaves you hanging: oh, tell me more, what powers do you have?

Iyabo Oba:

It's like the skill of enticement.

Dr. Sonia Tiwari:

Right. What I was testing for in that conversation was: is there a natural end to this conversation?

And the pattern was such that every single exchange ended with either a cliffhanger or a question I would be prompted to answer, forcing that kind of prolonged conversation.

Iyabo Oba:

Yes.

Dr. Sonia Tiwari:

So yes, by design this can be fixed. When we humans are talking, we're aware of the time; if you have a question, you know that once I'm done answering, it's your turn. That turn-taking behavior is very organic.

We can build this by design into products, especially tutors.

Tutoring AI characters can focus the scope of their conversation on learning a particular skill: not giving the answers directly, but helping the child arrive at the answer in a coaching kind of way. And that entire interaction can end within 15 or 20 minutes. Buddy AI is a good example of that.

Their average conversation time is very brief, and it's very skill-focused. Whereas on Character AI, the conversation can go on for hours.

Iyabo Oba:

Wow. And finally... sorry, please do go on.

Dr. Sonia Tiwari:

The third layer of the guardrail is the culture. We can't rely on these tech companies to build ethical products by default.

So as caregivers and educators, we always have to have our guard up and do our own red-teaming exercise.

Companies usually do a red-teaming exercise where they try to break their own product, but they do it more from a technical standpoint. Before caregivers and educators hand out an edtech product that claims to be AI but ethical, I would urge them to test it out on their own first. Try to break it in a hundred different ways.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

Ask it all sorts of inappropriate questions, ask for bad advice, and see if something harmful comes up. And if it does, then even if the creators advertise it as a kid's product, a safe product, it's not; it doesn't pass the test.

Iyabo Oba:

Yeah, that's really good advice. And that's a really good way, like you say, of trying to break, or really critically assess, the AI output. Could you repeat the three guardrails again for our listeners? I think those are really helpful pointers.

Dr. Sonia Tiwari:

Right. So the first one is by law, at the policy level. The second one would be within the design, and the third one would be through culture.

Iyabo Oba:

Yeah, brilliant. Thank you very much for that. That moves us on to the area of AI and mental health support.

We talked about it a little earlier, but you have developed a custom chatbot that's based on cognitive behavioral therapy. How do you see this kind of AI supporting, but not replacing, human therapists? Could you elaborate on that, please?

Dr. Sonia Tiwari:

Yeah, so I used a platform called Playlab AI to build a chatbot called Lemona. It's like: when life gives you lemons, talk to Lemona. It's a lemon character.

Iyabo Oba:

Love it. Love the poetry of that. Again, we'll put all of the references that Dr. Sonia has mentioned in the show notes as well.

So certainly we'll put this in the show notes.

Dr. Sonia Tiwari:

The link to it, Lemona is on purpose. I wanted it to be like a caricatured character. So it's very distinctive from an actual human therapist.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

And it has all the cautions: you know, I'm not a therapist, I'm just a chatbot, and I have a limited scope of information that I can offer. It does not give any prescriptions and does not do any diagnosis. It just covers cognitive distortions, which are negative thought patterns: how to address them and reframe what we are going through.

If you say something very serious, it will also give you a bunch of helplines and resources to contact actual professionals, and back out, saying this is not something I can help you with. And I actually designed it for myself.

As I was describing earlier in the call, the gaming industry was so unstable: moving frequently, changing jobs frequently. And the U.S. healthcare system is not as amazing as in Canada or other parts of the world.

Health insurance is often tied to our employer. And so every time you change your employer, you need to change your insurance.

The new employer might have some other provider that your go-to therapist does not take.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

And then you're stuck in this constant loop of trying to find a new therapist who takes your new provider. And it's almost like you have to start over.

Your old therapist knows your history and can go straight into advising you, versus having to spend so much time and energy catching a new therapist up on your history.

For a person going through some kind of mental exhaustion, that makes it heavier: not only do I have to process this, I have to catch this person up on everything the other therapist already knew.

So in that moment, sometimes people find refuge in an AI chatbot that remembers their history, which explains why people do it. And I was faced with this problem of frequently changing insurance plans.

So I asked my therapist: okay, can I pay you out of pocket, because I love working with you? But the out-of-pocket rate is so high that I can't afford to see you every week.

So what if we met every other week, and in between, I'll take all our notes from CBT and feed them into a chatbot, and together we'll decide where to draw the line: what kind of advice it's allowed to give me, and what's something only you and I are going to talk about in person. And so we made this list of very light advice the chatbot could give, with the heavy stuff left for the therapist.

At the end of a conversation with the chatbot, it gives you a summary table: okay, these are the things you should talk to your therapist about. That was the original role. It's a very temporary band-aid in between the actual healing.

And when I found it to be a successful way of managing my healthcare, I released it, made it public, and again launched a survey to ask how other people were feeling about it. For some people, it's a daily lunchtime routine now: oh, I just do a quick check-in with Lemona to process the day.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

And for some people, it's a weekly or monthly ritual, or as needed. So it was nice to hear that. And I have been very closely watching for any addictive behaviors.

So far, none of that has been reported, because it's designed for brief conversations and for handing off to an actual therapist. But obviously it's not meant for anyone with a serious mental health condition; it's for very minimal support.

Iyabo Oba:

That's great that you've designed and created a tool that's out there and can be used as an interim measure, as you say, a temporary band-aid before you get to see an in-person therapist. And it sounds like you've had a lot of positive feedback from the users of the tool.

How long has it been going, and how many users do you estimate have used it in that time?

Dr. Sonia Tiwari:

It's been around for almost two years, and in the survey there were about 120 people who responded.

But I don't think I have a way of knowing how many unique users have tried it.

Iyabo Oba:

Yeah, but that's great that you've created something of such value to people, that they keep coming back and have been using it over the two-year period, and no doubt beyond that as you keep developing and tweaking it. Very impressive.

I was going to ask, sort of tied in with that, what kinds of emotional boundaries or safeguards have you built into your chatbot, Lemona? And how do you balance usefulness with responsibility?

I mean you've already touched on some of those things, but could you talk a bit more about that?

Dr. Sonia Tiwari:

Yeah.

So I think, like, I clearly listed out the things to avoid talking about. If it sees any mention of self-harm or any request for a diagnosis, then it immediately backs out and, you know, displays a list of resources for finding real-world support.

And then also there are notes on the different types of cognitive distortions, and I've curated the types of suggestions it's capable of giving. Yeah, it's usually very gentle suggestions, like journaling or taking a walk, drinking some water, doing some breathing exercises.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

So things that are generic in a way but also create some sense of understanding and temporary relief. And it's more about, like, reframing. So for example, the first time I tried it, I was having this heavy conversation.

Oh, I think I'm very qualified. I have this PhD and experience, but I haven't been able to find a stable job, and I kind of feel like I failed my family for not making the most of my career.

And it asked me to kind of pause and reflect and reframe this: is your not being able to find long-term employment a reflection on you personally, or is it a reflection of the times and other factors? What are the other factors that go into it?

What are the other factors that go into. And so I was like, yeah, gaming industry is generally unstable.

It was like, you know, early:

I can't apply to jobs outside of a tiny five-mile circle. So all of that. And so then it helped me rephrase it: that I think I'm very much capable of achieving long-term stability.

It's only a matter of time before different factors align. So that did make me feel better. So it's.

The prompt behind it is to help reframe based on the information the user provides, and then to assess at which point it's not okay to offer any kind of advice, and to hand it over to a real human. So I tried my best, but again, I'm one person. I did consult with mental health professionals, though I'm not a mental health professional myself.

So it starts with all those kinds of warnings up front, that, well, this is not a replacement for a therapist in any way.

Iyabo Oba:

Wow, that just sounds like a really useful tool. As I said, the links to Lemona will be in the show notes for those who'd like to try it out.

I'm certainly going to have a little poke around and see as well, because it's great to have access to all the resources you possibly can to help navigate through tricky times. Moving on to the theme of connection, context, and community: you drew a fascinating parallel between AI chatbots and board games.

Can you explain how immersive design can become isolating when the community piece is missing?

Dr. Sonia Tiwari:

Yeah, I mean, I think it could.

Even with the community piece, like, in the future, I imagine there could be a shared reality, AI generated, and, like, a bunch of people could together be deluded. That might happen, right?

Iyabo Oba:

Well, I mean, that's like a cult, isn't it? Because that happens in real-world situations, don't you think?

Dr. Sonia Tiwari:

Yeah, it's true. And yeah, so it could, I mean it is very much possible that AI could lead entire groups into that kind of communal delusion.

Iyabo Oba:

Wow.

Dr. Sonia Tiwari:

So, yeah, I would say that in board games, just the tangible nature of it, like even as you hold a card or a game piece across from someone, being distracted by other things around us, it actually adds to the joy of playing together. Also, when we are playing with people in real time, watching them shift in their chairs or change their expression is secondary feedback, and it helps you.

In some games it helps you. Like in card games, people read each other's expressions to make their moves.

Iyabo Oba:

Yeah, yeah, yeah.

Dr. Sonia Tiwari:

And so those kinds of context clues are completely missing in these kinds of fictional settings. Like the Lauren Potter example that I was sharing.

Yeah, I was, like, imagining. She said that, oh, I'm sitting in the attic with Harry and this is such a small space. And so she gives out these environmental details to help me visualize more. But I have no autonomy of my own to imagine some things for myself.

It's like all these details are coming at me from the AI, and so that can really blur the line between fiction and reality.

Iyabo Oba:

Mm, that's really, really interesting to hear, yeah, that analogy.

And just also that experience regarding real-world, tangible board games and what the difference is with AI. And how do you think AI is changing the way we relate to ourselves, especially when we seek comfort or guidance from machines rather than people? What do you think about that?

Dr. Sonia Tiwari:

I think, like, you know, for the people who have been using AI for a while. I also feel like now I see myself policing my own conversations, like, before I.

Iyabo Oba:

Interesting.

Dr. Sonia Tiwari:

Because it's kind of like, oh, how.

Iyabo Oba:

Do you do that? So give us an example.

Dr. Sonia Tiwari:

Yeah, so for example, if I find myself complaining a lot in my chats, I'm like, oh, it's gonna add it to its memory, that, you know, I'm just this whiny little person who's always complaining. And so I sometimes find myself self-editing.

Like, should I ask, you know, is this worth having a conversation about? And there's interesting news about, you know, all these audio databases from different kinds of smart speakers or glasses.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

Asking for permission before taking a picture or, like, recording with AI glasses. I've become extremely self-conscious.

Okay, I have to be on guard always, because, you know, what if I'm just swirling my finger in my ear and that's on camera, and so on. Yeah, I have to be conscious all the time.

So I have become a lot more cautious about what I write and what I say, because who knows where this recording is going or where my texts are going.

Iyabo Oba:

Yeah. So it's so interesting.

ent like in our, in our. From:

Dr. Sonia Tiwari:

Just with regard to all of that, like, I don't trust where this memory, the saved memory, is going.

Iyabo Oba:

Yeah, yeah. So, so many, so many questions all the time.

I've just seen for the first time the updated, rather sexy-looking Google Ray-Bans, and they look great, and they're lightweight, and they can take your photograph and all of those things. But yeah, it does make you wonder, like, you know, when you've got your very gorgeous smart glasses on.

You know, like you say, who knows, could you be double-tapping and taking photos?

Dr. Sonia Tiwari:

Right.

Iyabo Oba:

Or recording, so, you know, in a sort of covert way. So moving on to our final question, sort of looking at vision and the big picture.

If we could redesign our relationship with AI from scratch, what would be, what would one principle be that you'd want to be at the center of it all?

Dr. Sonia Tiwari:

Sorry, can you say that again? Your voice was breaking.

Iyabo Oba:

Yeah, sure. If we could redesign our relationship with AI from scratch, what one principle would you want at the center of it all?

Dr. Sonia Tiwari:

Oh wow. One is kind of difficult but I think for me the.

Iyabo Oba:

Yeah. Or a few. It doesn't need to be just the one. I'll give some leeway.

Dr. Sonia Tiwari:

I think, like, just from the ground up, having a good design. Like, prevention is always better than cure. So if it's designed well.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

We won't have to put out so many fires later on. So design, that is truly the most important thing for me.

But if not, like, if we can't suddenly make all character designers ethical, I would say, just the way we have a nutrition label or diet for our food,

just having that kind of media diet, and really assessing whether it's a nutritious interaction, having that kind of filter within ourselves: is this actually helpful? And it's a big one. Like, especially for kids, it's hard to develop that.

Iyabo Oba:

Yeah, yeah. So, sort of having, like, a traffic light system on our technology and on our use of AI.

So what do you think of the next generation or what do you hope the next generation learns about relating to AI and that our generation might be missing?

Dr. Sonia Tiwari:

I think, I know there are a lot of bad examples out there of how these technologies have been pure evil, really playing with kids' psychologies. But there's also the other end, of tutoring. Not to say that AI tutors will replace teachers or anything, of course, real human experts are essential.

For me, I am very hopeful and positive that real humans will have a very important role to play. Yeah, that's a difficult question, because, you know, it's hard to answer on their behalf.

What I observe, like, I do video observations of kids interacting with AI. And so from that, the one positive takeaway for me for the next generation is that they are not as impressed by AI as the grownups are.

So for example, I did a comparative study of a child. First she went to a Build-A-Bear workshop and she customized her bear, and she was, like, so visibly happy. Then her mom gave her an AI toy, a dinosaur toy. It was also a plush toy, so in essence it was similar to the bear.

But throughout the entire conversation, the child was not hugging or cuddling with this AI toy. She was very cautious, and she was only, like, 5 years old, and she was having a conversation.

But after a while she just got up and ran off and did other kiddie things.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

And, like, she was talking to her mom, not isolating, not being addicted to the AI. So there are contexts where kids are able to tell that, oh, this is just extra bells and whistles.

Like, all I need is... yeah. So I have faith in that, that kids will still be attracted to taking a walk in nature and building something with their hands and being creative and spending time with their family. And so we adults are, I guess, more impressed. Like, all over the academic world, researchers are going crazy with this new buzzword.

Oh yeah, it's so publishable, five more articles about this, until there's a robot toy and blah, blah, blah. Yeah, then they should move along.

So depending on like the context, like with teenagers, of course it's difficult because they are prone to, you know, isolating themselves and finding refuge in technologies when real world support systems don't exist.

But for kids, as long as we as adults are present in their lives and giving them enough of that nutritious media and other play and emotional connection in the real world, then they are not that impressed by AI. At least that's what I saw in the few studies that happened.

Iyabo Oba:

Wow, that's cool. Great. Well thank you for sharing. That has just been really insightful.

Now to close, because this is Relationships with AI, and in the vein of Love Island-style dating reality shows, I've got questions about what gives you the warm fuzzies and what gives you the ick. So my warm fuzzies question is: what's been a powerful lesson, or a gift, that you've been given in a relationship?

Dr. Sonia Tiwari:

A powerful gift? I think acceptance. There was a time when I was under a lot of stress, and, you know, stress often causes hair loss.

I had complete hair loss at one point in my life, I was going through a lot, and my husband, he didn't say anything about it. He was just acting business as usual. He'd ask me if I was cold because I didn't have hair, but he didn't comment on it. He didn't say.

Iyabo Oba:

Yeah, yeah.

Dr. Sonia Tiwari:

He didn't even try to go the other way and say, you still look beautiful as is, because that sounds passive-aggressive in a way.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

He treated me like I was me, like I wasn't my hair. Right? Like a person in my own right. So that kind of acceptance, I think, is beautiful.

Iyabo Oba:

That's very powerful to hear. And then, what's been the ick? What's been a regret or lesson that you've received from a relationship?

Dr. Sonia Tiwari:

I think, in the past, when I was younger, I used to put people on a pedestal too quickly. So for example, there was this Emmy-winning producer who seemed like a prominent personality in the children's field.

And I was like... so it's kind of like a never-meet-your-heroes type of moment.

Iyabo Oba:

Yeah.

Dr. Sonia Tiwari:

Met him. He was very condescending, made everyone feel worse about themselves, made himself.

Iyabo Oba:

Elevated.

Dr. Sonia Tiwari:

Yeah. Biggest person in the room, biggest ego.

And so, yeah, that taught me that the biggest ick you can receive or give is to make someone feel small.

Iyabo Oba:

Yeah, very wise. That's a very, very wise lesson that you've just shared. And how has this work impacted you in relationship to AI? That's the question.

Dr. Sonia Tiwari:

Right. I think the acceptance piece, AI is kind of weaponizing it, again.

In a world where it's much easier to run into a judgmental person than an accepting person, it's very tempting to then find comfort in talking to AI, because rather than judgment, you get solutions or action items, or even, like, oh, I hear you, that kind of understanding. This is why Character.AI has not shut down.

And in fact many other similar companies have come up, because it's a systemic challenge that we don't have enough people who hold a kind of non-judgmental space for us to open up. So, yeah, in that sense, when I'm designing a new AI character, I try to build in some acceptance there.

To have some non-judgment there, especially, like, the AI tutors, they obviously have to be non-judgmental.

Like, assessing where you are struggling and where you need help is different than mocking someone, that, oh, you can't do this, never mind. So building that kind of personality and acceptance into conversation patterns could be, like, an action item.

And in terms of the ick, I think, like, with AI there's not the challenge of making someone feel small, but there's the challenge of, like, hallucination. And that can be icky, right? Like, you sometimes get useless advice that sounds so real.

Usually I cross-check, but sometimes the AI responses sound so confident that, you know, you end up not cross-checking.

Iyabo Oba:

And then there's also the element of the fact that the confidence, the overconfidence, of an AI response just makes you feel it's disingenuous. And you're like, this does not feel entirely human.

Dr. Sonia Tiwari:

Anyway, trust, I think. I mean, lack of trust is a thing in the real world. That could be another ick for me.

And so with AI, it's like, oh, sure, you're really wise one moment and the next moment you're weaving a lie. So, yeah, it's hard to trust.

Iyabo Oba:

Yeah. Wow. Thank you so much, Sonia. It's been an amazing conversation.

Really interesting, fascinating hearing about all the different ways that you're speaking to the next generation, as well as how you've created tools to help the current generations. And also, yeah, just hearing about your origins, some of the foundations that were placed in you from the very start.

So it's been a real pleasure to have this conversation with you. Thank you for being on, on this particular episode of Relationships with AI.

For those who are wanting to get connected with you, how can people connect with you? And then also, yeah, please could you share all your socials and what's the best way people can get in touch?

Dr. Sonia Tiwari:

Yeah, I think I'm only active on LinkedIn now. Like, it's part of my, yeah, media diet. I can only handle one social media platform, so.

Iyabo Oba:

Yeah, that's very wise.

Dr. Sonia Tiwari:

Yeah, I'm active on LinkedIn. Just send me a message or. Yeah, just follow along the posts.

Iyabo Oba:

Awesome. And then we'll also put the links to Lemona and all of the other references that you made in the show as well.

But for today's show, thank you so much. It's been a brilliant, brilliant time of hearing your vast expertise in this particular area. And yeah, we look forward to sharing it with everybody.

And until the next time, everybody take care.


About the Podcast

WithAI FM™
Hear the Future
In a world where artificial intelligence is reshaping the frontiers of every industry, understanding AI is no longer optional; it’s imperative. “WithAI FM” presents a curated series of podcasts that serve as a compass through the dynamic realm of AI’s applications, from creative arts to architectural design.

Each show, such as 'Creatives with AI', 'Women with AI', or 'Marketing with AI', is a specialised conduit into the nuances of AI within different professional landscapes. These are not just discussions; they are narratives of the future, unfolding one episode at a time.

Each show thrives on the expertise of its host – a seasoned industry professional who brings their insights to the microphone to enlighten, challenge, and drive the AI-centric discourse. These voices are at the forefront, navigating through the complexities of AI, simplifying the jargon, and uncovering the potential within each vertical.

About your hosts

David Brown

A technology entrepreneur with over 25 years' experience in corporate enterprise, working with public sector organisations and startups in the technology, digital media, data analytics, and adtech industries. I am deeply passionate about transforming innovative technology into commercial opportunities, ensuring my customers succeed using innovative, data-driven decision-making tools.

I'm a keen believer that the best way to become successful is to help others be successful. Success is not a zero-sum game; I believe what goes around comes around.

I enjoy seeing success — whether it’s yours or mine — so send me a message if there's anything I can do to help you.

Lena Robinson

Lena Robinson, the visionary founder behind The FTSQ Gallery and F.T.S.Q Consulting, hosts the Creatives WithAI podcast.

With over 35 years of experience in the creative industry, Lena is a trailblazer who has always been at the forefront of blending art, technology, and purpose. As an artist and photographer, Lena's passion for pushing creative boundaries is evident in everything she does.

Lena established The FTSQ Gallery as a space where fine art meets innovation, including championing artists who dare to explore the intersection of creativity and AI. Lena's belief in the transformative power of art and technology is not just intriguing, but also a driving force behind her work. She revitalises brands, clarifies business visions, and fosters community building with a strong emphasis on ethical practices and non-conformist thinking.

Join Lena on Creatives WithAI as she dives into thought-provoking conversations that explore the cutting edge of creativity, technology, and bold ideas shaping the future.

Joanna (Jo) Shilton

As the host of 'Women With AI', Jo provides a platform for women to share their stories, insights, and expertise while also engaging listeners in conversations about the impact of AI on gender equality and representation.

With a genuine curiosity for the possibilities of AI, Jo invites listeners to join her on a journey of exploration and discovery as, together, they navigate the complex landscape of artificial intelligence and celebrate the contributions of women in shaping its future.

Iyabo Oba

Iyabo is the host of Relationships WithAI, a podcast that explores how artificial intelligence is transforming human connections, from work and romance to family and society.

With over 15 years of experience in business development across the non-profit, corporate, and public sectors, Iyabo has led strategic partnerships, content creation, and digital campaigns that drive real impact. Passionate about fostering authentic relationships, she has worked closely with diverse communities to create meaningful engagement and conversation.

Fascinated by the intersection of technology and human interaction, Iyabo is on a mission to uncover how AI is shaping the way we connect. Through Relationships WithAI, she creates a space for thought leaders and disruptors to share their insights, experiences, and predictions about the future of AI and its impact on relationships, society, and beyond.

If you’re curious about AI’s role in our lives, this podcast is for you. Join Iyabo as she sits down with some of the brightest minds in the field to explore the evolving relationship between AI and humanity.