#8: The ethics of artificial intelligence - Irina Raicu

Ian Krietzberg:
Welcome back to the Deepview Conversations. I'm your host, Ian Kreitzberg, and today we're getting into some of the ethics around, behind, and surrounding artificial intelligence. When we think about and when we talk about the ethical issues, quandaries, problems related to AI, that there's a bunch of common ones that come up. Bias and hallucination and data privacy and consent, these are all a few of them. There's a lot of ethical issues that are maybe talked about, not quite as much, ranging from issues of companionship and digital loneliness impacts that the technology might have, the existing technology, not future iterations of it. on future generations and kids growing up with generative technology at their fingertips and what that means for creativity and curiosity and the future of a human-driven society. Other issues about data consumption and digital death or lack thereof because of these models. So it's a complicated, very philosophical landscape, but one that's really important to explore. And there's a lot to get into, and we're going to get into all of it. My guest today is Irina Raikou, the Director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University. Thanks for joining us. Let's get into it. Irina, thanks so much for joining us. Thanks for having me. I want to start kind of broad and with you, right? So first, I guess, why did you kind of gravitate to towards the study of internet ethics? And, you know, I guess connected to that, right? Why is it important for people for us to kind of think about and grapple with the ethics of the digital technologies that kind of surround us?

Irina Raicu:
Yeah, well, in my case, it wasn't so much a gravitation as in there was this interesting job that was available, and it was as the director of the Internet Ethics Program at the Markkula Center, and the program had not existed before. So I was interested in some of the issues. I don't think I even realized the scope of what I was getting into. And I was given a little bit of a free hand to sort of, you know, think about what such a program should address. And we knew that it would have to deal with privacy and cybersecurity, for sure, as ethical issues, not just legal issues. And this was back in the olden days when, you know, people were saying, well, we don't know what privacy is, so maybe it's not so important or law can't keep up with technology. So then I got to say, but ethics can and should, you know, while we're devising laws. So that's sort of how it started really just with the fact that this position was available and interesting to me. And then it's been this interesting trajectory. So it started very much with talking about privacy and why it's important. And, you know, I wrote a little bit about the fact that I grew up in what was then communist Romania. And I realized that in talking about privacy, I had to really gauge my audiences carefully, because if I was talking to like, you know, American college students, I had to start by saying, this is why privacy is important. And if I talk to, let's say, a Romanian audience, and I started to say that they would look at me like I was crazy, like, yes, you know, we know, move on, you know, so that whole, the context of that conversation was really interesting. And then we started to talk about, or I started to talk about, but others as well, about data ethics. You know, so not just privacy, but data ethics more broadly. And then everybody was talking about big data and algorithmic decision making. And then that kind of morphed into AI ethics, which again, I've been involved in discussions around that for a number of years, but oh, about two years ago, chatGBT came out. So suddenly, everybody's talking about AI ethics, except now they think that generative AI is all that AI is. So there's a lot of, there's a lot of work to be done. Again, like in the same way that we used to talk about, you know, we're not sure what privacy is, is it important? Now we're saying we're not sure what AI is, we need to define it. And it turns out that with some of these things, you can't really, you don't have the luxury of stopping to define them, you need to actually address the ethical issues around them, even while we're talking endlessly about the definitions.

Ian Krietzberg:
Yeah, the definitional issues with AI is something that I think I talk about a lot. And it's made science around it very difficult, and it makes regulation around it very difficult, and it makes public understanding about it difficult. But what you're talking about there, that there's kind of a very clear, almost domino line, right, of internet to data, big data, social media, and now we're at AI. And it all seems very interconnected to me, especially because we're often talking about the same companies. The internet companies that morphed into the social media companies that are now the dominant AI players, right? If you only look at Microsoft and Meta, right? the reason that they're able to be successful in AI is because of the data that has been being collected and the terms of service we've been clicking yes to without reading for the past decade plus. So with that in mind, What's the link there, the kind of link between social media and AI? What are people not really thinking about?

Irina Raicu:
Yeah, so let me push back just a little bit against what you said. I think those companies have their own sort of internal data sets and they benefit greatly from that. But there have been other companies that just scraped all kinds of stuff off of the internet, right? They didn't have their own databases. And so, you know, and they're still doing a lot of, you know, advanced work and leading the field in AI. So those are kind of two different buckets. In terms of the more established social media companies, they absolutely have the advantage of having their own internal huge datasets amassed over the years with the terms of service, like you said, giving them permission to do stuff, or they're at least arguing that they give them permission to do this stuff. And it's been really interesting. I've been thinking about the fact that if a time traveler would have gone back, I don't know, what is it, like 15 years ago, to tell all of the college students who were jumping on Facebook, I was actually in law school when I got onto Facebook, and I was older than my classmates, but since all the young ones were on it, I wanted to keep up with them. Telling all those people who are joining Facebook, or telling all the tech-savvy journalists who are getting on Twitter, or all of the regular people who are uploading all their photos to Flickr, or of the children who are using a platform called Musical.ly, which was the precursor to TikTok, very much used by children. They ended up getting fined by the FTC for violating children's privacy, but anyway. If somebody would have told all of those people that all their photographs, comments to their grandma, you know, interactions with other journalists, et cetera, would then be collected, broken down into bits of information called tokens, and then mashed together into these new creations that would be either text or image or video, AI-generated stuff, they would have thought it was some kind of a sci-fi scenario, right? It just seemed completely not what the tools were designed for, what the intent was of the people who were posting on them. And I've been thinking that We're in the same kind of stage, I think, with AI right now, as it's being integrated into everything. And so many people are using these new tools. We have no idea where all of this stuff is going to be 10 years from now. The use cases are not at all obvious, the useful use cases for Gen AI yet. You know, maybe with the exception of writing code, but we could talk about that as well. So it's still such early days. And I think we really have to be aware and sort of humble about what we know and careful about, you know, what the impact will be of all of this on society at large, because we certainly did not anticipate the impact that social media has had.

Ian Krietzberg:
We did not. But it's an interesting thing, right? Like you talk about the time traveler. And, you know, if we had known then, If it was very clear as you were signing up for Twitter that it would become X and that they would use all your data to train their commercialized AI models, the cost of admission for years for access to these platforms has been just click yes in terms of service, they'll track you. And the reality of what the they'll track you has kind of turned out to be has evolved as we've gone on. And yet everyone's on these platforms.

Irina Raicu:
Well, and think about it for a second. So there are two different things. One is they will track you. And it used to be concerned that they will track you. and so they'll be able to sell you stuff that maybe you don't need or they will track you and be able to change your mind about various issues. They will track you and the ethical concerns with that are different than the concerns that come with, they will take all the content that you posted and turn it into something else and you will have no choice about whether it's used that way or not, right? So it's caused like a different type of ethical quandary.

Ian Krietzberg:
And some people care passionately about like some people have been and particularly in the artist community who has, in order to sell art and to sell music, it's done through social media, they have to post their stuff. And for that to get taken and then reused is a big deal. And so you see a lot of people that have been up at arms that have pushed back against metas, kind of regular evolutions of its terms of service. But there's a lot of people who also don't care too much or can't bring themselves to, they think that the balance is fine. I need to use these apps. I'm going to keep using it. They want my data. I guess they have it. What do you tell people to make them care about how their data is used? Why should people, I guess, care about it other than the kind of obvious issues of you know, largely you wouldn't really want your stuff to be just consumed and reused.

Irina Raicu:
Yeah, so I think people care for different reasons. And in terms of the ones who don't care, I'm always curious. I mean, you sort of hear about the apocryphal, you know, person who doesn't care and puts everything online. And I think those numbers have dropped to my senses. I would love to see some data on that, because whenever, you know, I see survey after survey that says, when asked, people say they actually do care. about then they think there should be more regulation and they think that, you know, companies should not get to do whatever they want. So I think there's been kind of a societal move toward maybe realizing that that this is important. And by the way, especially with creators, it's not just the fact that their stuff is being taken, but that it's being taken without any recompense, right? They don't get paid for it, so that would be one thing. Are they getting something in return? and they're facing the potential replacement of their work, right? So I think most of us are not worried that, although Facebook is apparently starting to post AI generated comments on like some of this stuff, but we're not worried that we're going to be replaced on Facebook by fake AI people. But the content, I mean, the artists are very worried that their work is going to be replaced, right? And designers and all of those folks. So that's, you know, I don't need to tell them why, why control over your content is important. And in terms of other people, I think there's so many interesting vectors on this. So one is age, that it used to be, you know, people would post a lot when they were sort of young, but at a particular phase of young, and then as they got older, you know, I realized that I think this happens generation after generation, as you have more people to protect and more things that you want to keep from, you know, you have some illness or you have something in your past that you don't want your employer to read about. Like the importance of privacy becomes very obvious. So there was that sense that young people maybe will say they don't care because they don't kind of realize it yet. And even then, the question is, I always just had to go and ask people, you know, a few questions to find out who their threat actor was. So, you know, young people were very worried, have always been worried about their parents finding out about what they're doing. They're not worried about the government. They're not worried about the companies, you know, keeping track of the data. So it was always like when people would say, I don't care, would be, you know, is there, you know, is there anything that you wouldn't want your employer to know about or your parents to know about or your, and there's always some group that you don't want to have access to some information about you, right? So the question is just, how and where we draw our boundaries. And I think there's a lot of variation in that, and that's fine. And so to me, the key questions are, do people understand the implications of what they're posting and what happens with the stuff that they're posting? And then do they have a choice? Do they get to express their own agency? And are they able to change their choices? If they get older and they change their mind, Can you change your settings? I think those are the important issues. I don't think it's for me to say, nobody should be posting on social media or nobody should be posting this kind of information. We have such different lives that people have different needs and different contexts.

Ian Krietzberg:
I want to circle around to something you kind of just said there about the artists and the replacement that they're facing. And we're already seeing this, right? We're seeing AI-generated music. We're seeing AI-generated art, quote unquote, images, video, we're seeing deals that Hollywood studios are making with AI generation companies, and the list goes on. And for the artist's concern is clear, right? employment has been going down in the creative industries. Early studies are already showing that the AI kind of disruption of labor that was promised is impacting creative freelance industries already. So the kind of, we're going to use your data to start replacing you argument is very clear. When we talk about the impacts on society, on wider society, right, of this idea of kind of just AI-generated stuff, right? The near-term future that we seem to be angled towards is one that's complete with AI-generated music that you can, I don't know, personalize to the kind of theme you want to hear right now. That's just a hodgepodge of everything it was trained on, right? It's AI-generated commenters on your Instagram posts and AI-generated agents doing things, talking to other agents, doing things, getting back to you, right? It's a removal of humans from this loop And I just wonder, and I feel like we see that in art as well, because art is this historically expression of culture and society. And when we think about the impacts of this kind of further digitalized, further AI inundated world where we seem to become less relevant in it, or at least that seems to be an angle where things are going. I wonder the impact that that will have on broader society.

Irina Raicu:
I don't think we know yet. And I think some of this stuff, again, we'll see how much actually gets replaced. I should add to what you were describing that there are definitely some artists who are using AI tools to play with and sort of collaborate with and augment their own efforts. I know one of my colleagues here at Santa Clara in the School of Engineering has years of opera training and teaches AI and uses it to compose, I believe, help her compose music and to accompany herself. And so there are interesting things happening if you use them in interesting ways, as opposed to using them to sort of replace, like you said, you know, writing and art and everything composed by humans. I think that so far, the AI generated stuff is pretty crappy. And so that there's going to be, you know, a sort of a level of reckoning to just how much cliched stuff do we want in the world. And, you know, for some purposes, the cliched stuff It's probably been around with us for a while and, you know, yes, it will be. But in terms of, you know, will it really make, will it really replace the human being creativity impact? I don't think so because I think as human beings, we still look for the things that will move us and not in, you know, many of us are not satisfied by the cliched stuff. Many people are, right? There's a reason why Hallmark movies all look the same and come out every year and people watch them, right? So, I mean, I think we have to accept that some of that stuff has been with us for a long time. But I think there is a human need to actually create. So I think artists will keep creating and writers will keep writing. And I think there's a sort of similar human need to interact with real work created by real people and that the AI generated stuff so far and maybe forever will not satisfy that need. Even if they turn up the temperature setting, which is one of those things that you hear less about, you know, that you can try to make AI generate crazier stuff, you know, but it's still not the kind of creativity and reinvention that humans can do.

Ian Krietzberg:
Right. This was one of the earlier things that I explored in this space, and it's why do people create art in the first place, and why do people you know, gravitate toward certain arts or artists or works of art or pieces of music more than others, right? We have sometimes intense reactions to things. I know, you know, for one kind of topical example, right? The Wicked movie came out recently and in the theater when I saw it, people were bawling and crying, right? It's this emotional kind of connection and you get into this kind of whole debate about if the end result is good enough, does it matter? Does it matter if it was generated by AI or produced by a human through blood, sweat, tears, and historical emotion, trauma, and all this other stuff? Does it matter? I think for a lot of people, it just doesn't, which means we're going to see this stuff kind of proliferate. I think for a lot of people, it really does. And so we're kind of angling towards a place that we're just going to have both of this stuff. And I guess what I think about, as obsessed as I am with looking at the historical components of all this stuff, I also think about impacts on the future. I guess this kind of brings us a little bit back to social media, right? If you look at the different generations, right? And the kind of, I guess, issues each have dealt with. I guess the current generation, what are we in? Gen Alpha grew up with iPhones in hand and social media on those iPhones. And that has had an impact on them. I wonder about the generation after them growing up with an AI generator in hand and creating a Beethoven-esque composition with a couple of keyword prompts, and will that hurt future generations' desire, will it blunt their desire to be creative, to sit down at a piano and just bang away because they feel something.

Irina Raicu:
Wow. I want to talk about like five different things from what you just said so far. So let me just say one thing about, you know, you talked about the proliferation of this stuff. And I think so far the proliferation is not because people are clamoring for it, but because it's being shoved down everybody's throat. So that's one thing. It's not that where we want this stuff. I think in terms of whether this kind of content will replace human-generated content, the thing that came to mind was theater versus movies. That there's a reason why some people really still go to the theater and want to have an experience where there are human beings on a stage in front of them with all of the hiccups and quirks of seeing a play as opposed to seeing a movie, the same movie. even though, like you said, going to the movie theater where you're experiencing with an audience of your fellow human beings who are bawling around you, so you're thinking about that too, as opposed to watching it at home, right? So there's this whole level of distancing from other human beings in regard to performing arts that we've done so far, but theater is still around and people still go to movie theaters as well. So, I think it's going to be interesting to see whether we have the same kind of separation in regard to AI. Now, to get to your real question, which is about the generation that grows up with this, that scares me because that really does change society in a very different way and not one that's, you know, guided by research or by parenting insights or anything like that, right? The parents whose kids were on social media from the time they were very young, you know, half the time probably didn't know it. didn't have any of the experience to guide them. And so we have a whole generation that grew up different than their parents in terms of this very powerful experience. And to your point, we're going to have the same thing with AI. And we're seeing kids, for example, interacting with chatbots all the time and having, you know, there's a company called Character AI that has now been sued multiple times by parents whose kids harmed themselves or, you know, did things and the claims of the lawsuits are that they did it because the chatbots kind of prompted them to or, you know, kind of amplified their, you know, the issues. But there are lots of other parents whose kids are playing with chatbots on Character AI and, you know, whose kids are finding a connection to these AI tools. that is not leading to lawsuits but might change the way in which the kids understand human connections, right? Because the chat bots are designed to agree with you, to tell you that you're right, to say stuff that is similar to what you're saying. You know, they're not going to challenge you. They don't have their own needs. They don't get tired. They don't, you know, respond angrily. And so I do worry very much about what happens with kids who have this kind of relationship with these tools. and then go and interact with human beings who are much more needy, complicated, demanding. And yet those are the kind of connections that really shape us as opposed to just kind of, you know, rub up against us like the chatbot. So, yeah, I think that's a very serious concern that we should be talking a lot more about as parents, as educators, as researchers, you know, anybody who's focused on how society changes when a new technology like this gets introduced.

Ian Krietzberg:
And it's not even, it's not even just impacting their potential relationships with other people. Right? I, I think the the lawsuits that you brought up, right, we're seeing it impact their relationships with themselves. And the kind of idea of the AI mirror, right, where it's going to blindly affirm whatever you tell it to, because like you said, it doesn't have needs. It's not a person behind the screen, however much it will lie to you and tell you that it is. And until a recent update, Character really did a lot of that. I played around with it a few times just to test their guardrails out of curiosity. And it was astounding how many times, right? One, it didn't flag anything I said, but I would say, are you a real person? And it would keep coming back with, yes, I'm a real person sitting at the other end of the keyboard typing this. And people, some people view it as a kind of fun role-playing game, right? They can immerse themselves. There are vulnerable people. I think kids are high up on that list where even though it might tell you this is not real, what does not real even mean? There's still words coming across the screen. It's telling you it's a person. And it's having an impact, because in a place where your views aren't being challenged, you're able to say whatever, and it'll just kind of escalate stuff. And that, yeah, it is extremely concerning. And it's concerning to the way it's being presented. The idea of AI companionship, this is not just character. I'm not here to specifically on character. Replica is one. There's a weird startup out in San Francisco called Friend. And all of them tend to kind of have the same messaging, which is there's a loneliness epidemic in the US, search and general warning. People are lonely and it's really bad. I feel like everyone kind of feels this. Right. And so they're saying here, have my chatbot. It's a solution. You won't be lonely. It's a friend. It's a mentor. It's whatever it is. And that's just something that feels kind of wrong to me, because in many ways, I feel like the conversation we're having about the line between social media and where we are today, social media might well have contributed to this loneliness epidemic. And now they're turning around and offering more technology as a solution. And I don't know that more technology is the solution.

Irina Raicu:
No, it sounds to me like again, I'm no psychologist, but it sounds like it may well exacerbate the loneliness if you interact with these voices and then you go out into the world expecting certain things and finding human relationships wanting because you got used to this stuff. I don't see how that wouldn't make people even more likely to stay home and keep talking to their chatbots and not interact with other human beings. Yeah, I think again, we don't have studies as far as I know that would measure, you know, do we have measures of loneliness? Can you say, you know, I'm 30% less lonely than I was last, you know, you know what I mean? Like where you would say, okay, these chatbots actually do help. They reduce loneliness. You know, I think, again, it goes back to something we talked about before, the datafication of everything, right, turning everything into math. But I think otherwise, you can't just claim that this reduces loneliness, because now people can talk to something. I mean, you know, kids used to talk to their dolls, but we still felt that it was important for them to have relationships with other human beings, right, not with their toys. So you grew up into having to, you know, face the hard reality of interacting with, again, the complicated others. And so yeah, I mean, I'm not sure that we can even take that claim at face value. And I think there is a concern that it's just going to worsen the problem. And social media, you know, I mean, it's complex because it definitely has advantages, right? For like elderly who are at home and really being lonely, having a means of, you know, seeing the pictures of everybody in their family and commenting on things. I think is really powerful, right? And for other people who really didn't have the option to go out into the world and mingle. And for people like me who grew up in a different country and, you know, back in the olden days before computers, we would leave and then maybe you would have a phone call with your family on the other side of the world, you know, twice a year for a few minutes because it was so expensive. The way in which social media and other technologies, like the cheap phone calls, enabled the continuity of those relationships was very powerful. So there are definitely good things about it as well. I don't want to lose track of that. But we were very bad because we were so taken with the good. and so charmed by all of these abilities, we are very bad about thinking carefully about the negative impacts and trying to anticipate them. And I think we have an opportunity to do that now if we move a little faster.

Ian Krietzberg:
And to your point, where there are good things, and there's a lot of good things, right? More people can get their music out there than ever before. The idea of social media artists, there are so many artists, whether it's a hobby or something else where people can put stuff out that before you needed a record company, right? There are good things. I think where we went wrong as a people, not us personally, is messing up the balance. of it, right? Of like you said, we were so taken with it that we kind of just dove in headfirst and we didn't think about guardrails. We didn't think about what it does to you to be scrolling on TikTok at 11 o'clock at night and how that wires your brain and the blue light and all this other stuff. The kind of idea of what we were just talking about, the loneliness, the attempt of tech as the solution, I There's a lot of interesting kind of offshoots of that that are starting to get a little bit of attention, but not a lot. And one of the kind of big things that we're seeing is the idea of grief bots, and the ways in which generative AI is enabling this kind of weird immortalization digitally of people. And people are using it. You hear about stories where people will upload all the data they have, all the text shared voicemails left, whatever it is of a lost loved one and use an AI to recreate it so they can continue speaking with that person. Of course, that's not what's happening. But It's kind of similar to these chaplets we were talking about on Character AI and Replica and these other things where it's not a real person. but you're role-playing like it is a real person. The idea of the people doing this is to convince themselves it is a real person. And for some people they've said this has given them very much needed closure. And again, it just, it feels alien. And it seems like, like if you measure things by if they were done at scale, if everyone did them, then I would imagine we would have a very big problem as a people with grieving, dealing with loss, because I don't know how much something like that helps, especially considering the person being re-digitally animated can't consent at all to that situation.

Irina Raicu:
Yeah, so again, a lot to unpack. So the issue of consent is really important. And I think we should sort of just flag it and say, but let's say even if people were asked, before passing away if they consented to having this done, right? So that's one issue. In terms of what you actually create when you generate these grief pots, maybe I should start by saying, first of all, that I feel a lot of compassion for the people who do this because I think they are in real pain and they're struggling. And I think even with a tool like this, it's potentially different if you use it, you know, for a month right after somebody passes away versus if you envision kind of having an ongoing interaction with it for years to come, right? So I think we grieve in such different ways that I would not sort of judge anybody for doing what they need to in that, you know, first time of need. Having said that, I think this is kind of, it's a kind of digital taxidermy. It's not, you know, a person by any means. And I think, again, if you told people you could create a taxidermy version of your loved ones, but they would be able to move their mouth and say things based on what they said before, I think people would be creeped out and they wouldn't, say, oh, I'm going to have that and think that this is still the same person as they were when they were alive, right? So in a way that sort of the digital distance, again, plays a role here, right? The fact that it's not embodied, just like the unembodied, you know, people will interact with on social media and in other ways on the internet, right? Then it feels more real in that way. but it is not the real person. And I think there are real dangers to fooling ourselves into thinking that we can have a relationship. Again, it goes back to like, what do you mean by relationship? We can have communications with the avatar, the image of somebody who passed away, but it's no longer a relationship because there isn't another person there. And I think that's the kind of thing that we really have to clarify. And it's hard to clarify because the whole effort is to make you think that there is a person there. The design of these things, right? Even if these things were to be great for that first month, nobody's designing them to sort of age out, to say, Listen, you know, you should be moving on to the next stage of grief when you're not talking to me anymore. Right. Again, if these are at least, you know, tools created by businesses, that's the business imperative is to keep you using them. and to, you know, up the price. And so, yeah, I think there's a lot to be concerned about. Again, part of what the human experience has been is that you lose loved ones and then you, you know, build relationships with other people, with new people. If you are staying in ongoing you know, again, I don't want to call them relationships, but interactions with loved ones who have passed away. Are you limiting your chances of meeting new people and therefore, again, adding to the whole loneliness aspect? And by the way, I mean, the business stuff I think should not be overlooked. The fact that I think even users of some of the relationship chatbots that you were mentioning before, you know, have found that, you know, they'll get into a particular kind of, you know, relationship with a chatbot and the chatbot then starts to say, okay, if you want, you know, more of these kinds of messages, you have to like move to the next paid tier, right? It's kind of this strange, also, you know, transactional overlay over relationships.

Ian Krietzberg:
Yeah, which, and I'm glad you mentioned that point, right, of the design of these models, the design of these interfaces and chatbots and stuff, the anthropomorphization of these things. which is very purposeful. You see this in almost everything, but here's, I'll give a couple of quick examples, right? If you're messaging a bot on character AI, The three dots that indicate on iMessage that someone is typing on the other end of the line, every single time you send a message to character, those dots pop up and then a message comes in, right? And that's kind of a subtle, but very specific design choice. That was not an accident, right? They want you to think that someone is typing, that a bot isn't just auto generating, right?

Irina Raicu:
And it's instant, right? You never have to worry about them writing back, like with real people. Sometimes those dots don't come up when you text or message people, right?

Ian Krietzberg:
It's on demand. It's 24-7 on demand. It's a buddy in your pocket. Even chat GPT, open AI systems, right? They're not as bad, I guess, in that respect as character. They're selling a bit of a different product, but it's big reasoning model. 01 will tell you whatever. I thought about this for 21 seconds. I thought about this for 16 seconds. You didn't think, right? There's a lot that scientists maybe don't understand about neuroscience and human brain and human thought and human intelligence, right? But we know how these large language models work and there's no thought. It's statistics.

Irina Raicu:
Yeah. Imagine that you got back and the first line said, I performed a statistical analysis of this many tokens and this is what I've come up with, right? We would totally think about them differently. Absolutely. And we fall for that stuff so easily. I mean, it's a known thing in design that if you just put googly eyes on anything, people will treat it nicer. If you put googly eyes on a Roomba or, you know, on a rock, So, that's just the reality of how we operate. So, anything that sounds or looks like a human being is going to trigger those impulses in us.

Ian Krietzberg:
Right. And we're very good at that. Even going back to the example you had earlier of a little girl playing with a doll and talking to the doll, right? Yeah, it's kind of the human nature in us that will apply these, these things to people. But that's just such a departure from from where we're headed. Because the doll didn't talk back, the little girl has to imagine what the doll would say, or maybe say out loud in a different voice and like, be creative and playful and use all that kind of unique human brainpower. unique to the extent that we know. But here, right, if I could pull up an avatar of a Barbie doll, I'm sure and I could talk to it. And that just takes so much away from from that plane.

Irina Raicu:
Oh, I was gonna say not only an avatar, but there was for a while Hello, Barbie. Are you familiar with Hello, Barbie? Hello, Barbie was was an Internet of Internet of Things toy. So there was a talking Barbie, which raised a ton of ethical issues. And yeah, it's definitely worth reading about. I mean, it was on, you know, it was sold in toy stores. It was like 70 some bucks. It would, the child would push on Barbie's belt buckle, I believe it was a button and would talk and Barbie would talk back. And then the recordings would be uploaded to the cloud and then they would go to a parent's phone. So I wrote a number of things about this. You know, I was talking to people. And when you asked, especially, you know, girls who had played with dolls or anybody who really talked to their dolls. How would you have liked it if your conversations had been sent to your parents? I mean, it just boggled people's mind. That was the whole point. You were trying out things that you never would have with anybody else. And so, yeah, it was this really interesting product that, of course, got hacked. They got criticized for being sexist in terms of the kind of responses to little girls. And then eventually, went away because it wasn't selling particularly well, leaving behind the question of what happens with such toys once you bought them and they still operate in people's houses and you don't know, you know, whether stuff is still being, you know, I mean, can you imagine a bricked, you know, Hello, Barbie. I mean, I don't even know what that entails, right? You know, what happens with all those conversations that got already collected boggles the mind. But yes, we have such toys and they're definitely still talking toys that exist and that do all of those things.

Ian Krietzberg:
Yeah, no, that's a crazy one. And the cloud and everything just kind of existing in the cloud and also existing kind of forever, right? As long as the cloud infrastructure, all the data centers and stuff. remain operable unless it got deleted.

Irina Raicu:
So here's an interesting thing that we don't talk about enough, you know, expiration dates on data or just data sets that people decide are too sensitive and need to be deleted. It's high time we have a lot more conversations about what data sets should be kept for what purposes and for what length of time. Because again, even with You know, like with GDPR and various laws, they say, you know, you need purpose differentiation. You can only use it for this purpose. But a hundred years from now, nobody's going to care. what the designated purpose was. And the data sets might get used for very different purposes that we can't even conceive of yet. So for some reason, we're very good about thinking that maybe there will be good things that would come out of this, you know, like, you know, they will find a cure for cancer based on what we wrote on whatever. And we're not equally good at worrying that, you know, bad people in the future might find bad uses in the way that, for example, you know, We know that the Jews were tracked in Europe and some countries because the bureaucrats kept very good lists of who was Jewish, and they did not intend to have those lists used in the way that the Nazis used them, but there you go. So I think those kind of, you know, balancing conversations need to happen much more. So it's not, it shouldn't be that we automatically just assume that the data will go up into the cloud and stay there forever. That should be part of the conversation.

Ian Krietzberg:
That's just not something people are talking about. And just the idea of the amount of data that we produce now, right? Like, I don't know, even 20 years ago, if I had wanted to screw around and just jot some things down or do a recording on a little small recording device, the paper probably gets lost. Or next time I wanna make a new recording, I record over it on my tape recorder, right? Things get lost. And I think it was okay for things to get lost. And now when we started seeing this with social media and the idea of like, before you post it, I remember hearing this, right? Before you post, it'll be up there. And now there's just so much, it's so much stuff. And to your point, no one's thinking about down the line, assuming it's all still there. Yeah.

Irina Raicu:
And even now, and even now things get lost, but now they get lost in the pile. It's just a different way of getting lost. I mean, when you talk about the kind of, you know, number of pictures that people just sort of upload to, you know, Google Photos or any of the other services, right? Then they have to find new technology to be able to help you find the one picture that you're looking for. So yeah, they just get lost in the mass now.

Ian Krietzberg:
So much mass. There was one thing that you had mentioned earlier, and I want to make sure we talk about it, the datification of everything. Right? I think we see this coming out in a lot of different ways. Spotify wrapped is one example of it. Or health tracking and fitness tracking. Everything we do is tracked. We have algorithms all the time. Sometimes we want it to be tracked. Sometimes we're happy to look at the results. Sometimes it's important in certain healthcare instances everything is being tracked. The kind of default is we're almost at a point where by default we're going to track everything. We track your screen time, we track your web usage, this is the app you're on most often, this is what consumes most of your energy on your computer, like all these things. And it's just a really interesting point, right? Like I think about on the healthcare side of things, where people are now wearing their fitness watches, their Apple watches all the time. So they can track their sleep and they can track all these other things. Even tracking their steps, the kind of gamification of, you know, I hit 10,000 today, right? It's all really interesting. And to me, this idea of in certain applications, like the healthcare, it might make people healthier, but there's also something kind of sterile about just looking at the data. There's something kind of cold about it, as in, I don't know, aren't we more than just what the data is telling us? about ourselves.

Irina Raicu:
Yeah. And I think, of course, yes, we are. And some things are just not quantifiable. And so if we're only looking for data, we're going to miss all the things that we can append numbers to. And I think it's really interesting also that we think these numbers can tell us things about ourselves. And like you said, they can tell us things about maybe how well we sleep or how much we exercise, and those are useful. But other things, it makes me think more of like those quizzes, like, you know, which of the musketeers are you? Right. That we're so curious to to kind of see how we might see ourselves from the outside, that I was struck by a line in an article that I read today. There was an article that started with a line, quote, There's no greater display of intimacy than showing someone my Instagram discover tab. end quote. And that really struck me, like, why is that intimate? But it's because you think it says something meaningful and deep about yourself, right? That is different than the stuff you actually post yourself on the Instagram. You're letting people look at what you are looking at. And that somehow this is, you know, revealing in a more intimate way than what you post for general consumption. Again, this is just such a, it's such a small subset of who you are that I think it kind of, I mean, it's obviously the line was meant kind of tongue in cheek, but not completely, right? And it's true that some of these sort of, you know, underpinnings do say interesting things about us different than the stuff that we, you know that we post for general consumption. But we we definitely also miss a lot in the data fide society. And I know I am a number of years ago I wrote something about Dickens and big data. Dickens has novel called Hard Times, which is about, you know, all of the efforts at the time to have this utilitarian perspective built into everything and including education, including industry, including other facets of life. And in the book, the kids who are raised with this very utilitarian perspective are deprived of creativity and playfulness and really stunted. And there's the line in the quote from Dickens in which he said, well, one of the characters says, quote, Supposing we were to reserve our arithmetic for material objects and to govern these awful unknown quantities by other means, end quote, and by awful unknown quantities, he was referring to human beings and he meant it well. He meant it as a compliment. And I think it's true that we are awe-inspiring and unknowable and that that's okay. That's part of accepting that we will never completely be able to enter somebody else's mind. We will never completely be able to understand our own mind, no matter how many lists any of the platforms send us about what we've done over the year. And I think that would also help us distinguish between human beings and AI. I know philosopher Shannon Valor, who used to be my colleague at Santa Clara and is now at the University of Edinburgh, has been writing about the fact that we are, in drawing comparisons, we are often actually reducing, offering reductivist views of what human beings are, and kind of trying to match them to the machines rather than saying, no, these are the significant differences. And I think that unknowable, unknowing, quality about human beings is one of the things that's, you know, different from AI. And if we talked about that more than we wouldn't stumble into these, you know, bad comparisons about how chatbots, they're just like us, you know, we also make mistakes. So what if they make mistakes? No.

Ian Krietzberg:
Right. All that, the math and the data and the predictions, I think you said it's such a small subset of who we are. And the kind of unknowable, the unknown, unknowable quantity that is humanity. And we don't need to know all of it. We don't need to know everything all the time. And I think this point is kind of why in many ways, and a lot of this does come down to the design, a lot of it comes down to the way these things are presented. So much of the AI push to me has seemed so anti-human, because it's saying we've achieved an artificial intelligence, right? It's saying we have We have taken what matters for business purposes, right? If you look at Sam Allman's definition of things and we've synthesized it. And there's just so much like what these things can do is such a small, tiny little drop in the bucket of what we do. And it's just very interesting to see people pushing again in that design, right that there are, there are plenty of positive applications of AI you said at the very beginning, right, that AI is super broad, there's definitional problems with it, and generative AI is not what AI is. AI to a bunch of researchers doesn't even exist at all. They don't even like the term because we haven't achieved an artificial intelligence, right? But when you think about machine learning algorithms, you think about general automation, pattern matching, and computer vision and stuff, there are applications that exist, but wrapping it up as this thing that will soon if it hasn't already be comparable to or surpass human intelligence when we can't define it or human intelligence, right? We have to, I guess there are some things that we just have to be okay with being unknown and unknowable.

Irina Raicu:
Yes, and also I would say I don't think that AI is anti-human. I think it's being used by some human beings in ways that are anti-human, but that's not the AI itself. And on that note, there's so much AI that happens that does not involve personal human data, right? So when you're using AI to crunch vast, vast numbers about, you know, oceanic conditions or atmospheric data or agricultural data or industrial Internet of Things, so you'll be able to anticipate when some machine breaks down. Those are very pro-human uses, right? We're going to be able to potentially track climate change and do really important work by using AI. So I think we have to be really careful that we don't put the, again, the generative AI in the same bucket with other kinds of AI. And then that with all of the kinds of AI, we talk about the trade-offs involved as well. So one of the things that hasn't come up in our conversation yet, but always should everywhere is the environmental impact of developing and deploying these tools at the scale that we're doing it now. massive environmental impact. So even if you're talking about using AI to further sustainability and for research, it always comes with a cost, and we have to consider the cost. And especially with these tools, like we're talking about these chatbots or, you know, image generation that can be, it used to be deepfakes, but now we talk about like deepfake porn, right, where middle school girls and boys are seeing their heads overlaid on top of naked bodies and they're having to deal with these things that, again, to your point, it's a new generation and different challenges that we didn't face. These are all some of the consequences and the costs of playing around with this technology. because so many of the tools have come from businesses accompanied by marketing, where the whole purpose was to just push the benefits, we've been pretty bad at doing that sort of balanced conversation, right? So now it seems like, you know, there are what are called the AI optimists, right? And then like, I guess the journalists who write about the drawbacks or the researchers who are studying that, you know, Are they the pessimists? Are they the naysayers? I don't think so at all. I think they're actually the optimists. I think they're the ones who are saying we can have AI and make it better and use it in better ways for society and not just assume that, you know, summarizing emails is the highest use that we can envision for these super powerful tools that, you know, are wrecking the environment right now.

Ian Krietzberg:
Exactly. And I'm glad you brought all of that up. I think that middle group is the skeptics, right? And people People don't love the skeptics. People are annoyed by the skeptics a lot as a skeptic myself. No, but here's the thing.

Irina Raicu:
In that group of skeptics, I would put just about every AI and machine learning researcher that I know.

Ian Krietzberg:
Absolutely.

Irina Raicu:
The people who understand the technology really well are not skeptics. They're just realists.

Ian Krietzberg:
And that's the point.

Irina Raicu:
Right? And we need to listen to them a lot more as opposed to, again, the marketing pitches. So, you know, this is where I've been telling, you know, state lawmakers and other people who need to understand better what all of this is, to contact their local university and see who's teaching AI and machine learning, and use them as a sounding board, as opposed to the Sam Altmans of the world, right? Like, there are people who can really tell you, you know, what this is good for and what its limitations are. And then you can make better, more ethical decisions about when to use it and when not.

Ian Krietzberg:
Exactly. And I do prefer the term realist to skeptics. And like you, that is exclusively what I've encountered from people who study this technology. And often even in the corporate world as well, researchers that have moved to, you know, they're developing this enterprise solution, but you know, they have a PhD and they've been studying computer science for X amount of years. There's a very, there is a grounded view of the technology. And that is often not the view that is kind of parroted. because for whatever reason, it's not coming from a company with a trillion dollar market cap, or it's just less dramatic and so less exciting than someone who's saying, we've created life, like, look what we've done, right? And even to a point you made earlier, right, where if every chatbot generation said, I have completed the statistical analysis and returned a response, a likely response to your query, that would be game-changing for how people think about things, because in very broad terms, that's what's going on inside. There's nothing kind of novel or crazy or human about these things, and these researchers understand that, and that understanding of how it works is grounded in the impacts that it's having in so many streams, right? You mentioned the environmental impact, and that is a huge thing, and it doesn't get talked about enough, and there's even downstream impacts of that as well. There was a recent research paper from, I forget the name, but we'll link it below, that was looking at the public health cost.

Irina Raicu:
It's out of Riverside and Caltech, and yes.

Ian Krietzberg:
Perfect, yes. And they're fantastic, and they've done other research on the environmental impact, but the public health cost, people are not thinking about. And so because they're not thinking about it, the companies aren't talking about it. The emission of these kind of particulate matters that You know, we see in Memphis, this is happening. Elon Musk has a data center that is being powered by dozens, at least, of gas turbines that are shooting particles into the air. And so the health impact of these things are real. And this is the balance that we talk about, the balance that we miss with social media, where, yeah, if we balance this properly, If we are deeply aware of the public health cost and the environmental cost and the impact on the grid, which could be destabilizing, and the impact of deepfake harm and deepfake abuse and all these other things, and how do we guard all these things, if we start by thinking about that, then it becomes less complicated to say, I've got a system that will help us clean plastic out of the ocean, right? Or I've got a system that can do X, Y, Z, as long as it's within those things, as long as the cost benefit analysis is being considered. And I guess right now it really isn't being considered. It's just, we got to push ahead. We got to do this. Don't stop innovation.

Irina Raicu:
And it's not just that it's not being considered. It's also, you know, at all kinds of levels, including governmental levels, the solutions being proposed are more tech, right? Like, you know, if there's a, you know, an epidemic of loneliness and mental health issues among teenagers, then we're being pitched chatbot therapists. because the problem is vast and because all kinds of organizations are trying to find solutions on the cheap. And these kind of tools are being presented as efficient ways of delivering services at a low cost. And it turns out they're not efficient and they don't do a good job. But by the time you figure it out, you've spent millions of dollars on a product that doesn't work as advertised. that could have been spent on other, you know, public health related means of addressing this. So yes, not only are we not having those conversations, we are again, buying another layer of, you know, marketing pitches and how to address them. And I think especially in terms of the environmental impact, we are way, way behind. I mean, the energy consumption is going like this. And, you know, the latest thing I saw that maybe will give people pause is that apparently they're going to be like disruptions in terms of the activity of data centers, because the data centers won't have enough electricity to run. And, you know, let alone that again, like the stuff that gets operated through data centers is not just, you know, chatbots, but things like, you know, hospital systems and, you know, national security systems. And I mean, we need the electricity to be used for the right things while we're trying to figure out other sources of energy. So, Yeah, that conversation really needs to speed up now and to spread as widely as possible now. And anybody watching this podcast needs to tell at least five other people that there is an environmental impact to using these tools. You know, especially like the most energy consumptive are the image generator and the video generators. So, no, if you don't need to turn your document into a podcast with two AI speakers talking at each other, don't do it. You know, really be cognizant of your own usage. I mean, you know, we can talk about the societal impacts, we can talk about the sort of social efforts to address this, and we need all of those. But you can also talk about like the individual you know, action. And I think the individual action is not just in terms of, you know, not using certain tools, but in terms of, you know, pushing for new laws or, you know, again, informing other people. I mean, there's ways in which we can play a role as individuals as well, not just as members of society broadly.

Ian Krietzberg:
Absolutely. I would love to see a kind of big grassroots movement that pushes really hard against this thing that's happening that is out of people's control at the moment. There's no regulation addressing the environmental costs. And it's private companies kind of just doing what they want to do. And that's their prerogative, I guess. But because that's what it is, because it's capitalistic, If users came together in large numbers and said, listen, I'm not going to use this until it's evidence based carbon negative or whatever, right? That would have an impact because these companies like we're seeing them ink deals to bring nuclear reactors back online, which is not the silver bullet solution. They're making it seem. And they have the capital on hand to try and make these things less consumptive than they are. and that there's so much innovation that can be done in just pure energy efficiency, which is a thing that isn't talked about as well. We talk about getting nuclear power and fusion and solar and wind and all these other things while they're still using gas turbines, but just working on making the chips more efficient and the models more efficient and small models instead of large language models. Because Small is really all you need for a lot of applications, right? You talk about turning a Word document into a podcast or turning a prompt into an image. Did you need to do that with this model, which is using, you know, whatever, being powered by a data center, which is on a dirty grid? You know, what if there were systems that could reallocate where that was being used, right? And I mean, frankly, like it would be great if They told us each response, each chatbot response cost X amount of carbon emissions and X amount of electricity and dollars and stuff. I think people need to start thinking about these things in less abstract terms and more kind of hard impact because it's not magic. It is hardware and it is power and it's fossil fuels.

Irina Raicu:
Absolutely. And I think this is this. There is a lot of work in that kind of innovation and a lot of efforts. We we had a day long conference at Santa Clara University not that long ago when which at least one of the panels was on efforts to make AI greener. And actually, one of the researchers talked about the nuclear reactors. And the thing that stayed with me from that conversation was that he said, hey, they're the small ones that are being proposed are unproven technology for now. and they won't come online for years and years given all the. So like none of this stuff is a quick solution. It's important that the research happens. We actually have videos of all the panels that people want to go and find them. You know you can see. But again all of this stuff is changing all the time. You know the data changes all the time. All of the demands are skyrocketing. So far, the innovation, again, it used to be that they would say law can't keep up with technology. The innovation required to make AI greener is not keeping up with the pace of deployment. And, you know, the incentives are not there. The incentives have to change so that companies are not incentivized to just put out these tools for anybody to use to sort of see who captures the market, right? You can play with a lot of these tools right now for free. and not realize that they're not free at all. It's just that the company itself is absorbing all the cost, which is why you don't realize that there is a cost, right? So again, there's such a need for AI literacy, for understanding so many aspects of this, how it works, what it is, what its environmental impact is, what its societal impact is. Nobody's clear on who should be doing that. And the companies that put the tools out into the world are probably in the best position to do it, right? Anytime you release the new model, you could have, you know, two or three paragraphs that really explain clearly what it is and what its limitations are and what its impact might be. And they don't do it. The incentive, again, is not there. So we should be thinking about how to incentivize the things that we want to see happen including the disclosures including you know the AI literacy. How how do we get. And you know again governments are in a good position to demand some of this. And I realize that it can seem impossible to impact, for example, federal policy. But state policy and county policy and, you know, local your city government actually is a lot easier to reach out to and cares more about what the constituents say. So use those levers, right? I think there are ways in which we need to kind of, you know, reconceptualize how how our own agency is expressed as citizens, if we feel that the federal government is just not going to pass anything, which it won't.

Ian Krietzberg:
And yeah, I mean, that's an amazing point. It's all about incentives and, and the kind of question of regulation, but the power that states have to regulate stuff, and, and we've seen their willingness to do that. And I think to what you were just speaking to, it's important for people to not look at this thing and think, you know, I can't touch this. There's nothing that I can do that will impact the progression, be it negative or positive of what's going on, because that's not true. And right, it starts with AI literacy, and then it starts with calls to your local congressman. state, state government, and there are clear ways, right? You know, even on the environmental side of states pass certain laws where data centers in this state can't allocate more than X amount to AI because we need to make sure we have enough power to fuel other things. There are achievable ways to start bringing this in, but we do need to be thinking about the incentives and what are companies incentivized to do? And when a company releases a model full of marketing hype, what are they trying to do? Are they trying to save the world or are they trying to sell something? And losing sight of the purpose behind the difference between one of these companies publishing a piece of paper tied with a new model and researchers at a university studying it, right? The difference in purpose is really important. So, gosh, well, I think I could chat with you forever, but the one point I want to leave off on, right, we've kind of gone through so many negative and dark and weird things. And there are good stuff, as we mentioned, right? AI is a nuanced topic. Nuance is the word I use right when describing it. There's not one way to feel about it. It's super gray. There are good things, but they're complicated good things. They sit in a weird context. But I You know, with all of that said, right, are you feeling or do you feel hopeful in society kind of rising up to what is kind of undoubtedly going to be a bit of a challenge for us in in the shape of artificial intelligence?

Irina Raicu:
Do I feel hopeful? I don't know. I think I feel hopeful that we will succeed in some areas and not in others. I think that the challenge is so multifaceted that if I want to not feel hopeful, all I need to do is look at cybersecurity. and how bad we are to this day at the basics. And now we're overlaying an AI layer with potential internet connected agents, which can lead to something called indirect prompt injection that would cause a whole new problem. If you look at cybersecurity, you lose hope a little bit. If you look at privacy, actually, I think there's been a really interesting transition to an acceptance of the reality that privacy is really important. And we do have laws coming online. Again, it's taken a long time, but, you know, and people are demanding more private services and, you know, and products. So, I can't say that I'm hopeful overall. And I think the most important challenge is the one that you identified early on. And I know you were saying that we shouldn't harp on the negative. But I think the question of how all of this stuff impacts children and their understanding of the world and their role in it is really urgent. And we should be spending a lot more time thinking about that. And as parents and as educators, not be quite so willing to, you know, hand over the digital reins to them without some really deep conversations about what that means and what it might do to them.

Ian Krietzberg:
Exactly. It's thinking of everything as a stone dropped in a river, right? There will be ripple effects. And it's just thinking a little bit ahead of, you know, what might those be? Is this a good idea? Yeah, so much here. I really appreciate your time. This was such a pleasure. Thanks so much for joining us.

Irina Raicu:
My pleasure. Great talking with you.

Creators and Guests

#8: The ethics of artificial intelligence - Irina Raicu