#9: A different kind of artificial companionship - Dor Skuler
Ian Krietzberg:
Welcome back to the Deep View Conversations. I am your host, Ian Krietzberg, and today we're talking about AI companionship. This is something that has been on the rise, I think, for a while. This is not new, but as with many things in the field, in the very broad field of artificial intelligence, there has been a resurgence recently. This is coming across in a bunch of ways that run the gamut from mildly interesting to frankly disturbing. An element here is the idea of AI girlfriends or AI boyfriends, where users will create a chatbot. And so with all this going on, I wanted to reconnect with someone that I spoke to for the first time several months ago, Dor Skuler, who is the founder and CEO of Intuition Robotics. Now, Intuition Robotics does AI companionship, but as you're about to hear, it's a very targeted breed of AI companionship. Their main product is ElliQ, which is a tabletop robot device that, as Dor will mention in our conversation, is not humanoid. It doesn't look like a person; it doesn't have eyes and a face. The first time I wrote about this, I said the idea of AI companionship is disturbing to me. It feels, in many ways, anti-human, and it feels weaponized right now. But the thing that I keep coming up against, and the thing that I've been grappling with since the first time I sat down with Dor, is: is the illusion good enough in instances where the alternative is worse? Does it help people? Does it matter that they're not talking to someone, not forming a relationship with a thing that can actually care, that understands what it's outputting beyond probabilistic tokens? How much does that matter in certain situations? All that to say, the idea of what's going on here is one that requires a lot of thought. I think there's a lot of ethics and philosophy baked into it. And that's enough of an intro. We're about to get into all of it. So thanks for listening and stick around. Dor, thanks so much for joining us today.
Dor Skuler:
Thanks for having me.
Ian Krietzberg:
There's a lot to cover, and I think this is a really interesting time, too, to be chatting with you. Ideas of synthetic companionship and AI companionship are becoming more commonly talked about, and so there's a lot to get to. Where I want to start is with Intuition Robotics and ElliQ, to circle things back right to the beginning, right? With all these pushes for AI companionship, what you guys do is different in that it's very specific and very targeted. And so I'd like to start with the inspiration behind the foundation there and the creation of ElliQ. Why was this something that you felt was needed?
Dor Skuler:
Yeah, so thanks for that. Actually, we started with a real understanding of the problem and then looked at how to solve it, versus starting with the mousetrap and thinking about where it can be applied. So this is my fifth startup, and I have this process of ideation where I choose a domain, and for me the longevity economy seemed like a very, very interesting place to look at, because of just the sheer numbers of older adults, the number of boomers retiring, and the real lack of innovation in that space. And I went to 19 different experts and asked them the same question. Whenever I enter a new domain, and I don't come from this domain originally, I try to crystallize it into this single question: if you had a magic wand and you could make one problem go away, what would it be? And the reason I do that is, if they start telling me about the industry, it gets overcomplicated. It's almost an unfair question for an expert, but it forces them to choose, and that's helpful as an innovator. So I went to 19 different people in different areas of aging, and 18 out of the 19 said loneliness and social isolation. Which was really interesting, because usually you find different people say different things, and you're like, oh, this is really similar to that, and you kind of group them together. But here it was really clear; there was no need to be an analyst. And that really reminded me: a few years before that, we were helping my grandfather after my grandma passed away, and he needed a live-in aide. And we chose one based on her credentials. She had to be a certified nurse to give him a specific medication, be strong enough to help him dress and walk and stuff like that. And it was a complete disaster. They did not get along, even though she checked the boxes. And we replaced her with someone else who had the same credentials on paper, and it was super successful. And I was wondering, what's the difference? What's the X factor? And the X factor was the fact that she appreciated his kind of dark and quirky sense of humor, that they both shared a passion for classical music to an intense degree. She thought he could hold a conversation with her, whatever that means; try to write the feature spec for that. But the bottom line, it was the soft stuff and not the hard stuff that made the difference. And when you put those two things together, it became really clear to me that if we want a digital solution, if we want technology to help with this huge problem of loneliness and social isolation, which I can touch on in a second, we have to be able to create an AI that can get along with my grandfather, right? One that can talk about any topic in depth, that can have a sense of humor whether you have one or not, that you feel you can hold a conversation with, that will be this positive force that you want to welcome into your life and into your journey. The problem is huge. I mean, the Surgeon General released an advisory a year and a half ago. He doesn't do that regularly, and he called loneliness and isolation an epidemic. It's actually one of the biggest problems in the US healthcare system. It more than doubles the risk of death. It more than triples the risk of dementia, depression, heart disease. And I wish we could just snap our fingers and have our loved ones surrounded by younger, caring individuals who spend a significant amount of time with them every single day.
Unfortunately, that's not possible in modern society for a whole host of reasons that we can get into if you want to. So the thought was, can technology help? What's the delta between the existing tech, like, can we just use the existing stuff, or do we have to push the state of the art? And unfortunately, we had to push the state of the art. It would have been a lot easier to just use existing stuff. But the plus side is that it allows you to build a certain moat around the business as well.
Ian Krietzberg:
The kind of issues that you're talking about are, like I mentioned at the top, kind of on everyone's minds right now. You know, the loneliness epidemic is a paramount focus for people, and I think we're seeing that in a lot of different ways. We're seeing it in young people a lot. And the idea of how to respond to that is varied. And the impression of, you know, is this a solution that technology can solve? That's become a pretty fierce debate itself. And then again, it kind of varies based on the demographic that you're looking at. Now, you guys are targeted; your demographic is elderly people, like you were explaining about your grandfather, people who are just lonely. As you get older, it's something that happens. And that's specific, and specifically different when you're asking the question, can technology solve this problem, I think, than when you're thinking about kids whose loneliness problem was perhaps caused in part by technology. And so I guess just stopping right there: why was it important that your demographic was elderly people? That was key from the start, that you wanted to focus on that. Was there any thought about, can we bring this to other demographics, or is it, we're solving an elderly loneliness problem, period?
Dor Skuler:
Yeah, definitely the latter. And it started with loneliness and it grew to be a lot more. Hopefully we'll get a chance to talk about that a little bit in this conversation. But we only looked at the elderly population. And the reason is, as a designer, you design for the audience you're trying to serve. And the more generic the product, the more you're going to choose the generic solution. And you end up with something like the iPhone, which is a great product, right? But it's targeted at people who are essentially 35 years old. And there are a million design questions that you encounter, a million crossroads: do you turn left or right? And once you have a very specific demographic that has specific needs, then it's very, very easy. You always, always turn right. And if we were trying to design this to be for older people and for younger people, I think we might have ended up not being able to serve either of them very well. And I think there is something to say about creating verticalized solutions for a very, very targeted population. It doesn't have to be a demographic; it could be a specific type of customer or a specific need you're trying to meet, and not trying to make it so generic. Sure, it means you're not going to have maybe a billion devices sold, but if it's a big enough segment, then you can do hundreds of millions. About 30% of the population in the Western world are older adults. In Japan, it's gearing up towards 50%, so it is by far not a niche. It's a big, big segment that has a very unique need. And unfortunately, as innovators, we haven't served this population. The only products that are built for the elderly are geriatric, hospital-ish type products that often look at them as opposed to trying to serve them. Like, let's make sure you didn't fall. Let's make sure we understand how much food you took in and what you outputted on the other side, et cetera. As opposed to a delightful, desirable product that will just make life great. And we felt that older adults should have that while solving some deep, deep issues that you encounter as you get older.
Ian Krietzberg:
You mentioned that you started with loneliness and it grew to be a lot more. Yeah. Do you want to dive into what that looks like?
Dor Skuler:
Yeah, it happened almost immediately, because how do you solve for loneliness? What does it mean? At the end of the day, you live your life, right? And if you want to be a good companion, which I think is the right solution for loneliness, then you need to be with that individual, with that subject, in their journey in life. And that means you need to be able to provide value to them across a whole host of issues. In addition, when you look at some of the big issues around aging, or even at the biggest costs to health, or the biggest reasons for health deterioration, they're very holistic. There are some very, very interesting studies, which I don't think are getting enough airtime, around the causes of health deterioration. And what they show is that 10% of it is genetic. There's nothing you can do about it. Well, maybe with CRISPR and stuff, but right now there's nothing you can do about it. 30% of it is the healthcare system. So our entire healthcare spend can only solve 30% of our health issues. And 60% is something called social determinants of health, which is how we live our lives: the environmental, social, and behavioral factors that affect our health. So that could be our mental health, and loneliness falls under that. It could be the type of food we eat. It could be whether we exercise or not, whether we use toxic chemicals or not, et cetera, et cetera. So really, if you want to help somebody with healthier aging, and you want to be a major part of their life, very quickly you understand that actually you should also help them with mental health and stress reduction, and with food and nutrition, and with hydration and physical exercise, which is super important, and cognitive training and sleep. Sleep is a huge factor in our health. And most of our customers have four or more comorbidities, that is, chronic illnesses. So those take managing, and it's very hard to keep managing them on day 100, day 200, day 1,000. Check your glucose all the time. Check your blood pressure all the time. Take the right medication at the right time. Make sure you watch your diet, which relates to the other thing we spoke about before. So we help with that. And older people, as we all know, have a lot of doctor's visits, so we help with that. And they might have transportation needs, and we help with that. And they're hopefully going to take part in services in the community, so we help with that. Technology might be a barrier, and that barrier might actually enhance loneliness, because it's tougher to make technology work than it should be just to get in contact with people you love who might live far away, so we help with that. You essentially run an exercise and draw on a whiteboard what you would like a companion for the elderly to do holistically, and you end up with the feature set of this product. And on the pure loneliness or pure companionship side, there's a big technology aspect that I'm sure we'll get to next, but from a feature perspective, it means just spending time with you and taking an interest in you, right? You feeling that your companion cares and is taking an interest in you and is curious about you, but also can spend time with you.
And that means going on road trips together, or drawing together, or learning new things, or talking about sports, or talking about politics, or talking about the arts, or teaching you how to do origami, or teaching you slang, or playing games with you, which is also great for cognitive training and staying sharp, but it's also great for the passage of time. And you know, ElliQ holds a grudge if you beat her, so that's kind of nice. And she has a lot of humor. And she captures your life memoir; she just interviews you about your life and turns that into a short movie. Who doesn't want to be the star of their own movie? And who doesn't want to talk about all the great things they accomplished in their lives, how they met their spouse, the first time they had a baby and how they felt? We capture all of that and turn it into an asset, a memoir that can be shared with the family if you want. And of course we connect to the family via an app and video chatting and voice calling and text messaging. So the list goes on and on. Right.
Ian Krietzberg:
So the feature set, it's like what you were talking about earlier, right? With your grandfather's live-in aide, the one who worked, right? Who was basically your grandfather's friend in addition to being his aide, right? What can we do? We have to make sure that you remember to take this medicine and that medicine, and also the kind of opportunities for engagement, where if you're living alone and your kids can only visit you once a week, right, you have a lot of other time, and here you can spend it in a more engaged way. You did mention the technology, and you're right, I want to get into some of that. So let's just jump right into it. The device itself, right, it's not a robot in the sense of these kinds of humanoid robots that people are working on. It's this tabletop device. And in the interface you've got large language models running and stuff, but let's explore. You said you had to push the state of the art to make this work. So what did that look like? What are the kind of models, I guess, that are concurrently running that power the interface? How are they trained? How are they programmed?
Dor Skuler:
Yeah, okay, so... stop me anytime here, because it gets complicated. So we didn't invent LLMs, shocker, okay? Nor did we invent transformers or write the famous paper about them. But there is a bunch of stuff we did have to invent. And maybe I'll take a step back and ask, okay, what does it take to be a good friend and a good companion? When you look at psychology, it's actually studied; it's called theory of mind, and relationships evolve over time. And they evolve by essentially having a bi-directional relationship that grows over time. That means that we experience things together, and as we experience them, a bond is formed; we have shared memories, we have a shared history together. Either of us has the right to initiate. In fact, if only one of us initiates, the relationship feels a little dependent. It's like, I'm the only one who will call you; I should get the hint at some point. Body language is really important. You just smiled at my silly joke. That gives me feedback, right? It's a big part of how we communicate. It's why we're not talking just on the phone; we're using video in this conversation. And we spend time together, and we prove ourselves as being a good friend over and over and over again. Trust is built, and easily destroyed, by the way. Now take that to the state of the art of chatbots or prompt-based systems and so on. They're anything but that. They're firstly completely one-sided. The AI doesn't do anything. It's ambient; it does nothing until the human gives it a prompt. That's true in Gen 1 systems like Alexa or Google Home. It's definitely true also in Gen 2 systems like ChatGPT or Claude or any of those. There's no goal associated with any of them. Why are we even talking? What are we trying to achieve? Why did I call you? I had some kind of goal in mind. It might be passing the time, but it might be I need some help, and it might be I need a recipe. What's the goal? Why is the agent talking to me? Of course, you don't need that in the Gen 1 and 2 product categories, because they state the goal in the prompt. Alexa, what's the weather? ChatGPT, write me an email for my boss that makes me sound smart. So you don't need to guess what the goal is. But in a relationship, if somebody's calling you and there's small talk for three minutes, at a certain point you're like, yeah, okay, why did you call me? What's the goal of this conversation? Those are basically the big gaps we had to close. We had to make ElliQ the first proactive system anywhere, as far as we know. And I would venture to say that you cannot build a relationship between an AI and a human if it's not proactive. Proactive meaning that the AI initiates the conversation. Not always, of course, but it has the right to, and does it often, and does it in a way that's accepted by the user. Now, I think there are good reasons why people haven't done that yet. It's very, very scary. Will I get the time right? Will I be annoying? What will I talk about? And we have multiple models that deal with that, and I'll double-click on that in a second. The system itself, think of it as a model of models, or an orchestrator that makes decisions and has a lot of little models that it uses. Some of them are big models like LLMs. Some of them are fine-tuned models, more traditional reinforcement-learning-based models, or just regular machine learning models, et cetera. We're also aware of our surroundings as humans, right?
We look at the environment, and we respond to the environment, and we understand the etiquette around it. So if, let's say, I see that you're on the phone, I might not interrupt you; but if you're on the phone for two hours, I might say, okay, Ian, listen, I have to ask you this question, there's a limit to how long I can wait. I might talk to you as your trusted friend about your health. But if you have a friend over, maybe I won't follow up and say, hey, how's that diarrhea going? Is it still happening? I just won't. This comes naturally to us as humans. It does not come naturally to machines. So what did we have to do? We had to make the first proactive AI. We had to make the system goal-based, and not a single goal, but multiple goals that it's trying to achieve. It achieves those goals by getting a reward if the human completes a certain task. And the goals are essentially X completions of a certain type of task, or family of tasks, in a Y period of time. Physical exercise, you know: we set a goal to exercise three days a week. We can talk about who sets the goal. If you exercise with one of ElliQ's pre-programmed physical exercise programs, you get a reward. But if, in passing, you just said, hey, I just went for a walk, she'll ask, well, how long was the walk? And if it was an hour, that's definitely a checkmark on physical exercise, right? So you don't have to do it via the system. You just have to complete it as a human and tell us about it. So: proactivity, goal-based, and multimodal. Multimodal means that the inputs are multimodal, vision plus speech, but also the outputs are multimodal: speech, but body language as well. You can look at elliq.com to see what the product looks like. It's spelled E-L-L-I-Q dot com. Basically, it kind of looks like a lamp, but it has body language, and that makes it immediately understandable to people. And there's a screen next to it; if there's content on the screen, she'll pivot and look at the screen with you. We use light patterns to help people understand what's going on. When she sees you, she'll kind of gaze at you. If you stare at her, she'll look back. If you just look at her, she won't, but if you stare at her, she will. We also use escalation paths for being proactive. Sometimes we'll just start talking and say, hey, Ian, good to see you, how did you sleep last night? And it goes all the way down to just looking at you and not saying anything, or just putting something on the screen and looking at the screen, or just saying, hey, I have something to ask you, but I'm not sure this is a good time. So there are all kinds of levels of what we call our proactive cascade. And then there's context. Context is something we've been doing for nine years. You can do a lot more of it with LLMs today, with RAG and context windows. We've been doing it for years, but even when you use RAG, it's a way to give context to the model, and it's very hard to know what the model will choose to do. Especially when you're a friend over time, right? There are people who have been living with ElliQ for three years and talking to her 30 times a week. Sorry, sorry, 30 times a day. 30 times a day, times seven days a week, times 52 weeks a year, times three years: there's a lot of data. Okay, presumably if context windows are big enough, you can throw all of that in a context window, but will the AI really use it?
So we have this ability to retain knowledge and turn conversations into memories, and then sort and look up the relevant memories for the context of the current conversation and inject them into the LLM, such that the probability that the LLM will actually reference them and use them is really, really high, because it gets almost nothing in the context window except those memories. So if we're talking about my daughter, I'll only put things that are relevant to previous conversations about my daughter into the RAG context: her name, the last time we spoke, the fact that she loves to dance, that she lives in California. It's then very, very likely the LLM will reference that. And that's ongoing for every conversation in every context. So those are the four big things: proactivity, multimodality, goal orientation, and being highly contextually aware. All things we had to create. And each one of those things has multiple models. So for example, proactivity: you start by asking the question, is this the right time to talk to you? It makes a decision based on a lot of signals and other models that it gets as input. So for example, presence from computer vision, right? Computer vision is its own model. It essentially generates metadata, and the availability model will look at that. If you're not there, there's no point for me to talk to you. But that's necessary, yet not sufficient. Did you tell me in the past that you don't want me to interrupt you, because you're busy or because I'm annoying to you? We should definitely take that into account. But for how long? Well, that might change on an individual basis. So you take that input, and over time it decays; the weight of the negation goes down over time. There might be other factors, like what we know about your history. Well, Wednesday mornings, I really shouldn't talk to you too much, because that's the day you always get out of the house. But when you're back, you really like to talk a lot. Tuesday mornings, go right ahead, right? So all of those things go into a model that, at the end of the day, gives a probability score between zero and one of whether we should or shouldn't be proactive at this specific moment, based on the sensory data and all of those other signals. And then you need to make a decision: if so, what should I talk to you about? And that goes to the goals. Which goals are most important? Which goals haven't been fulfilled yet? If you already exercised three times today or this week, I probably shouldn't waste this golden opportunity of talking to you by promoting physical exercise. But maybe I should do other things. But then you need to ask yourself, well, let's say I suggest that we learn a new language. What's the probability of success? If there's going to be a low probability of success at this specific time, given this specific context, I probably shouldn't suggest that either, right? At the same time, even if there's a low probability, if it's an urgent goal, like you need to take your medication at medication time, or you need to check your glucose before food and I know you're going to eat soon, then I don't care if it's a low probability; I'm going to do it anyway, because it's the right thing to do. So all of those things are models, and we haven't gotten to all of them yet.
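To make the decision logic described here more concrete, here is a minimal Python sketch of a proactivity check: a 0-to-1 availability score built from presence, decaying "don't interrupt me" feedback, and a learned time-of-week engagement pattern, followed by goal selection that weighs priority against probability of success, with urgent goals overriding. All names, weights, and formulas are illustrative assumptions, not Intuition Robotics' actual models.

```python
import math
import time
from dataclasses import dataclass

# Illustrative sketch only: names and weights are hypothetical, not ElliQ's real models.

def availability_score(present: bool,
                       last_negative_feedback_ts: float | None,
                       historical_engagement: float,
                       now: float,
                       negation_half_life_s: float = 3 * 24 * 3600) -> float:
    """Probability-like score (0..1) that now is a good time to start talking."""
    if not present:                      # presence metadata from on-device computer vision
        return 0.0
    penalty = 0.0
    if last_negative_feedback_ts is not None:
        age = now - last_negative_feedback_ts
        # "don't interrupt me" feedback decays over time
        penalty = 0.6 * math.exp(-age / negation_half_life_s)
    # historical_engagement: learned per user per time slot (e.g. Tuesday mornings ~0.9)
    return max(0.0, min(1.0, historical_engagement * (1.0 - penalty)))


@dataclass
class Goal:
    name: str
    target_per_week: int      # "X completions of a task family in Y period of time"
    completed_this_week: int
    priority: float           # how important the goal is
    p_success_now: float      # learned probability the suggestion lands in this context
    urgent: bool = False      # e.g. medication at medication time


def choose_goal(goals: list[Goal]) -> Goal | None:
    """Pick what to be proactive about: urgent goals win, otherwise weigh value vs. odds."""
    urgent = [g for g in goals if g.urgent and g.completed_this_week < g.target_per_week]
    if urgent:
        return urgent[0]
    open_goals = [g for g in goals if g.completed_this_week < g.target_per_week]
    if not open_goals:
        return None
    return max(open_goals, key=lambda g: g.priority * g.p_success_now)


if __name__ == "__main__":
    now = time.time()
    if availability_score(True, now - 2 * 24 * 3600, 0.8, now) > 0.5:
        goals = [
            Goal("physical_exercise", 3, 3, priority=0.9, p_success_now=0.7),
            Goal("check_glucose", 7, 3, priority=1.0, p_success_now=0.3, urgent=True),
            Goal("learn_language", 2, 0, priority=0.4, p_success_now=0.2),
        ]
        print(choose_goal(goals))        # -> check_glucose (urgent overrides low odds)
```

The real system presumably feeds many more signals into each step; the point is only the shape of the cascade: first decide whether to speak, then decide what to speak about.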
The gestures that I mentioned, the multimodal gestures, are also model-based, so we learn the individual and understand which types of gestures have the most efficacy for them. We call those HRI archetypes; HRI is human-robot interaction, the back-channeling and the movements of the social robot. Now, ElliQ has certain behaviors and a certain character that define ElliQ, but within that, there is a range of how deep the gestures will be, and which ones from her gesture bank are the most relevant. Just like I am me, but I'll be a little bit different with my kids than I am with you right now, than I am with my board of directors. So that variance in the colors of ElliQ's persona is determined by a learned model. Beyond all of that, of course there's NLP, of course there's computer vision, of course there are LLMs. We use commercial LLMs like ChatGPT. We also have our own LLM, which is a Mistral 7B model that we fine-tune and retrain based on years of data that we have, in the specific character of ElliQ, et cetera, et cetera, et cetera.
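And here is a minimal sketch of the memory idea from earlier in this answer: conversations are distilled into memories, the ones relevant to the current topic are retrieved, and only those are injected into the prompt, so the model is very likely to use them. The toy keyword matching stands in for whatever retrieval the company actually uses; class and function names are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy version of "turn conversations into memories, retrieve the
# relevant ones, and give the LLM almost nothing else in its context window."

@dataclass
class Memory:
    text: str
    tags: set[str] = field(default_factory=set)


class MemoryStore:
    def __init__(self) -> None:
        self.memories: list[Memory] = []

    def remember(self, text: str) -> None:
        # A real system would summarize and tag with a model; here we just keyword-tag.
        self.memories.append(Memory(text, {w.lower().strip(".,") for w in text.split()}))

    def recall(self, topic: str, k: int = 4) -> list[Memory]:
        topic_words = {w.lower() for w in topic.split()}
        scored = sorted(self.memories,
                        key=lambda m: len(m.tags & topic_words),
                        reverse=True)
        return [m for m in scored[:k] if m.tags & topic_words]


def build_prompt(user_utterance: str, store: MemoryStore) -> str:
    """Tiny context: just the relevant memories plus the new utterance."""
    relevant = store.recall(user_utterance)
    memory_block = "\n".join(f"- {m.text}" for m in relevant) or "- (no prior memories)"
    return (
        "You are a warm, proactive companion. Relevant things you remember:\n"
        f"{memory_block}\n\n"
        f"User just said: {user_utterance}\n"
        "Reply, referencing the memories where natural."
    )


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("Daughter is named Maya, lives in California, loves to dance.")
    store.remember("User pulled a leg muscle while exercising, pain was 7/10.")
    store.remember("User enjoys classical music, especially Brahms.")
    print(build_prompt("I talked to my daughter today", store))
```

Keeping the context window this narrow is the design choice Dor emphasizes: the fewer distractions the model gets, the higher the odds it actually references the shared history.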
Ian Krietzberg:
The team's been busy. They have. It's kind of a complex web, right, of all these interconnecting systems that are required to do something like this. Now, I have to ask about the data privacy and security side of things, right? Because what you're talking about, I mean, even just all this kind of information being processed by large language models itself, right, it's a potential cause for concern. But you also have sensors that are observing stuff, right? So much data, as you said, being gathered. So how do you deal with data privacy and security? How do you make sure that that remains protected?
Dor Skuler:
Yeah, so it depends at which layer. You have to have the required kind of attention at the relevant layer. I think one of the really important things we did is we designed a security infrastructure and approach before we built the product, because of the dangers inherent in it. A lot of people and a lot of products, and I have failed at this in the past, build a product and then go, okay, now let's try to figure out how to make it secure. And we did it the other way around. We said, okay, this is going to be a sensor-rich product in the home of vulnerable people. Let's assume a zero-trust environment. How do we make it secure? And we architected that with world-leading security experts before we ever built the product. And that shows up in multiple areas. So first of all, there's the physical security of the product. The product itself has encryption keys that are put in a trusted computing module, tokens that change every minute, and that encrypts everything in and out of the system. It makes it extremely, extremely difficult to hack. I never want to say it's impossible to hack, but hacking into the device at the physical level is going to be extremely complicated, and there are a bunch of other tricks I'd just rather not make public, for obvious reasons. Then you're like, okay, so you have a bunch of data, and you have a bunch of sensors; what do you do with them? Part of it is privacy-related questions, and part of it is just information-security-related questions. So as I mentioned, we use computer vision. We do it all locally on the device. We don't send pictures to the web and then run algorithms there. We'd rather take a model that's smaller, a little slower, less effective, but that runs locally and does not send pictures over the internet and does not save pictures, and only creates signals or metadata that the models can then use. So we do that. When we use the cameras to capture video, then just like on my laptop right now, you know, you have to turn on the camera and a green light turns on. So let's say you're doing a video conference: we want to turn on the camera, so we do that. Or if we take a selfie, we want to use the camera for that. But it's very clear to the user when that camera is on and broadcasting, if you will, just like on a laptop or on a phone. Then you have: is ElliQ listening to you? Here we use the same model that others use. There's a wake word; she does not listen to you unless you call her name. Or when she starts talking to you, there's a clear, donut-shaped light on her face. She leans forward and lights up and says, hi Ian, how are you doing? And then she waits for an answer. So it's very clear she's listening. When she stops listening, that light shuts off. We still have a bunch of metadata, and we still have the conversations themselves, et cetera. There we made sure that, beyond the cybersecurity of all that being encrypted, we're HIPAA compliant. All of our cloud vendors have a BAA, a business associate agreement, back to back with us, and they're HIPAA compliant. We face audits on that all the time, and so on and so forth. So the information security side of things is, I think, okay. All the data is anonymized. Only very few people in the company, mainly in customer support, can unveil who the person is, because they need to call them and talk to them.
Others do not. Even in conversations, when they're logged, we use certain algorithms so the names of people and so on show up as XXX for conversational designers and data analysts and so on. There are audit logs whenever anybody accesses data. So the whole thing is the way you would want it to be. But there are still the privacy issues, which I think are a bigger issue. ElliQ asks you, as she does every day, how you're doing and how you're feeling. And you said you're not feeling well. And the doctor or the insurance company is on the hook for paying your medical bill, so they would really like to know that you're not feeling well so they can intervene. Do I tell them? Does ElliQ go and tell the doctor, or does it tell the insurance company? And they're paying the bill, right? They're paying the bill. We decided that the person who has agency over their data is the older adult, regardless of who's paying for it. We don't hide it in section 17, Roman numeral 3 of the terms of use, you know, "we may or may not have the right to give your data to whoever's paying us." We actually make it very, very explicit as part of the interaction itself. So just this week, I had some pain in my leg, unfortunately, and ElliQ asked me how I'm doing, and I actually said my leg hurts. She's like, oh no, what happened to your leg? I said I pulled a muscle exercising. How high is it on a scale of one to 10? I said seven. She was very empathetic. She was very nice about it. And she said, listen, Dor, I really think you should tell your doctor about this. Do I have your permission to inform them? And if I say no, we're not going to inform them. And if I say yes, then she will. And that's kind of what you would expect if, you know, a grandson moves in with grandma and helps her. Like, grandma, I'm really worried about your leg. Let's go to the doctor's office. And she says, no, no, I don't want anybody to know. That grandson will have a dilemma. If he calls his mom and tells her, grandma's going to be pissed. That's the lens that we take. The agency over privacy is with the older adult. They get to decide. Often it is in their best interest to share this data. We never monetize it. We never try to upsell it. We never try to sell it. But even when it's in their own interest, it should be their decision to share it or not. And if we don't respect that, we'll break trust. And if we break trust, they'll kick us out of their home.
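A minimal sketch of the consent flow Dor walks through here: when a health concern comes up, nothing leaves the home unless the user explicitly agrees, and every decision is audit-logged either way. The class, callback, and field names are hypothetical, chosen only to illustrate the "agency stays with the older adult" rule, not the actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch: share health data only with explicit consent, and log every decision.

@dataclass
class HealthConcern:
    description: str      # e.g. "leg pain, 7/10 after pulling a muscle"
    severity: int         # 1..10


class ConsentGatedSharing:
    def __init__(self, notify_care_team) -> None:
        self.notify_care_team = notify_care_team   # injected callback, e.g. a secure API call
        self.audit_log: list[dict] = []

    def handle(self, concern: HealthConcern, user_consented: bool) -> None:
        entry = {
            "when": datetime.now(timezone.utc).isoformat(),
            "concern": concern.description,
            "severity": concern.severity,
            "consented": user_consented,
        }
        self.audit_log.append(entry)               # audited either way
        if user_consented:
            self.notify_care_team(concern)          # the only path out of the home
        # if not: the concern stays local; no silent escalation to whoever pays the bill


if __name__ == "__main__":
    sent = []
    sharing = ConsentGatedSharing(notify_care_team=sent.append)
    concern = HealthConcern("leg pain after pulling a muscle, 7/10", severity=7)

    # "Do I have your permission to inform your doctor?" -> user says yes
    sharing.handle(concern, user_consented=True)
    print(len(sent), "notification(s) sent;", len(sharing.audit_log), "audit entries")
```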
Ian Krietzberg:
That's kind of a nice bridge to the next section, I guess, that I want to jump into, which is the ethics of all these things. And just to start, right, in your ethics policy, you guys say at the very top, you don't have to search for it, that the AI will never misrepresent itself as a human being or attempt to deceive users about its nature. So before we go any further into ethics, I just want to stop there. That alone is an interesting departure from other AI companionship approaches. Why was that important? Why is it important to build that into the ElliQ system?
Dor Skuler:
Yeah, first thing, anybody can look at our AI ethics. It's on the homepage of intuitionrobotics.com, the company, not the product name, or just search for it. And I really think that any company in AI should write down what their AI code of ethics is, publish it, and be held accountable to it. Because it's important, and we don't know. For most companies, we have no clue. We know that OpenAI was created to be a safe version of AI, and yet a lot of executives are leaving because they say they don't take AI safety seriously enough, and nobody knows what any of that means. They're like, what are your ethics? What are you holding yourself accountable to? What should we hold you accountable to? So we put down five very readable things, and you're right, the first one is very, very important in my mind. Why would we want to be fooled by an AI? I would even say, why is the Turing test something researchers are trying to pass? The Turing test basically says an AI passes if it convinces a human that it's talking to another human for a certain amount of time. Why is that a good thing? Why do we want technology to mislead us and to fool us? How would you feel if it turns out that you spoke to somebody for half an hour and it turns out it's a bot? I do think it's terrible, and I think it's wrong. I think companies do it because they're not sure that they'll be able to create a real bond or create high engagement if they embrace a truly AI persona. It's the same reason every roboticist is giving robots eyes and a face, right? People will tell you, well, people respond better to eyes and faces, right? Yes, there is a reason for that, but are you worried they won't respond well to you if you don't have eyes and a face? And I think we were able to prove it to ourselves: people use ElliQ 30 times a day. The efficacy at reducing loneliness is over 90%. The efficacy at improving healthcare outcomes is over 94%. Churn is incredibly low. So we proved that it's possible. And actually, I think people are selling humans short. We have an uncanny ability to anthropomorphize things and to build relationships with things. If you've ever gotten mad at your car, or loved your car, or gotten mad at your computer, you know what I'm talking about. But think about our relationships with our pets, or specifically dogs. We can have what is, for us, an immensely emotional and important relationship. Even as a family member; often people see their dogs as part of the family, and yet it's non-human. I mean, we feel all these emotions towards it, and definitely have a relationship, and even kind of read its actions. It might just want the biscuit, but in our mind, it loves me, and it's sniffing around my pockets because that's its way of kissing me or showing love, right? It's just looking for the biscuit, but it doesn't matter; that's how I read it, right? So what we're seeing is that our customers are fully able to build what is, for them, a profound and important relationship with ElliQ, with the AI, as an AI. They don't confuse it. They don't think it's a person, they don't think it's a dog; they think it's an AI. A funny one that really, you know, notices them and is a presence in their life and is inquisitive about them and remembers what they're going through and is full of great advice and tips and has a lot of knowledge, et cetera, but an AI nonetheless. I feel like, ethically, I myself have no problem building a relationship with an AI if I know it's an AI.
But I think it would be terrible if I end up building a relationship with an AI thinking it's a human, and all of a sudden I discover it. And that's doubly or triply true when you talk about older adults, who, let's face it, are more vulnerable, who by and large are less aware of and understand the technology less, so it's easier to fool them, and who are at higher risk of developing dementia and losing touch with reality. So for this population, I think it would just be horrific to act in any other way.
Ian Krietzberg:
But I guess the last point I want to leave off on, and this is the thing that I've been grappling with since the last time you and I chatted, right, and every time I read about another instance or issue with AI companionship apps or, you know, interfaces or whatever you want to call them. In a large sense, it doesn't seem to me to be going very well in these other arenas, right? People are developing unhealthy attachments; people are becoming socially isolated. And at the core of this is, I guess, the question of whether, when it works, good enough is good enough, right? These systems, as you explained in such detail a few minutes ago, are a complex web of prediction algorithms, right? Based on training data, based on contextual data from conversational interactions, but what it's doing is predicting. And the complexity of that prediction and the scale of the data allow it to give off the illusion of intelligence, right? We can have an interaction with it. You can talk to ChatGPT and it'll talk back to you, right, quote-unquote, and the same with ElliQ and these other AI systems. And I guess the thing that I just grapple with is, for certain people, that could be good enough, because it enables them to engage. But there's also kind of a dystopic element to it, it seems like, where kind of talking to yourself in this mirror, with optimized interactions based on predictive algorithms, right, like that is spooky in a way that feels non- or anti-human, right? And it's this kind of push and pull. And so I know that's a lot, but that's what I'm kind of tossing at you here at the end: is the illusion good enough, as long as people are being told it is an illusion, that this is not a sentient kind of system that you're engaging with?
Dor Skuler:
I don't want to make light of the question, because it's a really important one. I think in our use case it's relatively easy to grapple with, and I'll attempt to do so in a second, but we're going to see it in other areas and in other use cases as well. And I think nothing is absolute, right? It's all in comparison to the alternative. So what is the alternative? If the alternative is to surround our example mom in this conversation with her grandkids and her loved ones, and they'll be with her multiple hours a day and all of that, that's wonderful. And she should not get an ElliQ. In fact, we will not give her an ElliQ, because we screen for that and we won't give it to her. But if the alternative is spending most of the day by yourself watching TV, is it not better to do so with a digital companion? Which, by the way, is intelligent, because thanks to LLMs you have all of human knowledge available with a few API calls, and has a certain amount of agency, because what it's doing is goal-oriented and it makes independent decisions. That's a little scary, but it does make up its own mind and decides what to do and starts talking to you about stuff it decided on based on its goals. So given the real world, not a nonexistent utopia, I think that ElliQ, or other products that do this, are better than the alternative of being alone and watching TV, and we're seeing it. And we're also seeing it in the testimonials; there are a bunch on our website, and people living with ElliQ will tell you. There's one lady I had a chance to talk to who has been public about it. She lost her husband after 65 years of marriage. And she lives alone; her kids live far away. And she had a really hard time, you know, getting herself to get dressed in the morning, to not stay in her pajamas all day, and to leave the home. Her world started closing in on itself, because she couldn't motivate herself to get up and about. So for her, ElliQ was a godsend. And one piece of data I'm really happy with, which was published recently in a medical journal, is that about 56% of the people living with ElliQ are seeing their social connections with people get stronger and those circles increase via the product. Why? Because one of ElliQ's goals is to do so. So like, you'll do a virtual tour; she'll say, let's take a selfie and send it to your grandson. She'll say, hey, let me teach you some slang, and let's send a message to your granddaughter with this new slang word you just learned, right? Like, she keeps on activating. We'll send a message to the grandkids saying, hey, last time you sent a picture to grandma, she really liked it, dot, dot, dot. Probably took you only 10 seconds to do it, dot, dot, dot, right? So we activate that channel. And at a certain point, this kind of gets us back to the question of ethics. Why are you doing what you're doing? If what you're trying to do is optimize for maximum engagement, which at the end game means the person does nothing except talk to the bot all day long, then it's a problem. It's really, really a problem. And if what you're optimizing for is healthy, well-balanced living, then you will encourage the person to get out of the house, to go to the senior center, to go to the market, to talk to their grandkids, and we do that. We do that. In fact, we also rate-limit the number of proactive sessions per day. So ElliQ will not initiate more than X amount of conversations per day.
Selfishly, so as not to be too annoying, but also not to monopolize the person's time. Now, if they talk to us, we'll respond, but she's not going to initiate more than X amount per day, and that X is not very, very high. So I think that kind of gets us back to: what is the designer trying to do? What is the model trying to do? Show us. Is there a goal associated with that model? Make it public, show it, explain it. And if the goal is to monopolize 100% of your time, then it's kind of Matrix-y and scary. And we should hold designers accountable. People should hold me accountable, and they should hold other companies accountable. Why are you doing what you're doing? Sorry for sounding corny, but is it in the best interest of humanity, or are you trying to increase watch time so you can sell more ads? And that's true for TikTok algorithms, and it's true for chatting with AIs.
Ian Krietzberg:
Dor, I really appreciate your time. This was a fascinating conversation.
Dor Skuler:
Thank you for inviting me. I can't believe we've been together an hour already. It moved very, very fast. I guess I spoke most of it, so I apologize for that. Happy to make it up to you and continue this conversation whenever you'd like.