#11: Why we need AI guardrails - Liran Hason
Ian Krietzberg:
Hey everybody, welcome back to the Deep View Conversations. I'm your host, Ian Krietzberg, and today we're talking about guardrails. My guest is machine learning engineer Liran Hason. Now, back in 2019, Liran co-founded Aporia, a company offering third-party guardrails, a few years before ChatGPT came onto the scene. Now, Aporia has since been acquired by Coralogix, but the work they're doing is the same. It's all about advancing third-party guardrails to enable the safeguarding of artificial intelligence systems, here usually meaning generative AI. The idea, as we're about to get into, is not to limit or hold back AI. It's to make sure it's actually usable. A lot of the promise of AI revolves around pretty high-risk use cases. Something that Liran is particularly excited about has to do with AI in healthcare. Now, a lot of people are excited about that, but the risks of AI in healthcare are much higher stakes than me using ChatGPT to write an email. And problems of hallucination and algorithmic bias remain unsolved fixtures of the architecture. Safeguarding is a multi-step process that encompasses a lot of different flavors of AI safety and reliability, but the end result is the same: making sure people can actually use these systems. One of the big things that he mentions is that his team was able to develop a series of small language models that are capable of detecting and mitigating hallucinations, which is kind of huge. It makes the promise of AI more within reach. So let's get into it. Liran, thanks so much for joining us. I'm so excited to chat with you today.
Liran Hason:
Thank you, Ian, for having me. Really excited to chat with you.
Ian Krietzberg:
Sure. So there's a lot I want to talk about. And the center of it all is the idea of guardrails. And the most, I don't know, not the most, but the first interesting thing is that so you started Aporia, which is a guardrails and observability company back in 2019. This was several years before I guess what we're in the middle of now, which is this kind of AI craze or the AI race, the AI rush, there's a lot of different words for it. At the time, why did you think it was so important to have these kind of third party guardrail and observability options on the market?
Liran Hason:
Yeah, so first, yes, things have changed dramatically since December 2022, you know, when ChatGPT came out. But we need to recall that AI has been around for decades, right? Companies are using it, and we as consumers use it day to day, even if we don't know it. When you go to amazon.com, for example, and you see items that somehow apparently are so suitable for what you need and what you're looking for, it's not magic. It's actually AI behind the scenes. Today it's called classic machine learning; that was the AI before, I'll say, the ChatGPT or LLM era. And even back then, after myself and our team worked on such projects, delivering AI to production, we just realized, on one hand, how powerful this technology is, right? It can literally change our lives, if you think about healthcare, if you think about day to day, and we can see it these days. But with that in mind, it's also extremely risky, right? It's not deterministic like the traditional software we're used to. So the end result is, on one hand, you have this massive potential for us as a society to gain from this tech. But on the flip side, there are also cases where we see how AI could be harmful. So there's a huge risk with that. And we just realized, hey, in order for us, for humanity, to benefit from AI technology, there has to be a way to do it responsibly and safely. And for that, we have to have some human oversight, some mechanism that is controlled by human beings, even if it's not manual. So we realized we have to have control, guardrails, and observability over AI applications and AI software. And that's kind of what intrigued us to start Aporia and get on this journey.
Ian Krietzberg:
Now, you mentioned December 2022. That's the point, right? That's the launch of ChatGPT, and everything that we're dealing with now is kind of the race that that sparked. So the idea of this, what you're talking about, like you said, AI has been around for decades; the LLMs and the chatbot wrappers are new, but the technology is not anything new. And you had this realization, I guess, kind of before people knew what was coming, broad people, maybe not in the field, you had this realization that we need to understand what's going on with these systems, we need to safeguard them. Now that we're, believe it or not, several years past that December 2022 moment, what are you observing from enterprises, from your clients and customers? Is there a kind of growing awareness of, yeah, I want to use AI, but I can't use it unless I safeguard it? What does the kind of consumer base look like right now?
Liran Hason:
So let's actually do a deeper double-click into what happened in December 2022. We all know that ChatGPT was launched back then, but why was it so dramatic for the AI industry? The reason being, if we take classic machine learning, right, what was available beforehand, in order to build something, you had to collect tons of data to build some basic AI model. You had to be, or to hire, PhDs for that, right? And even if you had data and you had hired PhDs, it would still take you, at the very least, a few months just to build one model. What GenAI has changed is that suddenly, all you need to do is make an API call to OpenAI or DeepSeek or someone else, and that's it. You can build an AI application. So if we think about the potential, it has changed from about 40,000 data scientists in the United States to millions of software engineers. Practically anyone who can write code these days can build an AI application. And that's what made it really frictionless. So that's the cost side of building an AI app. And the value is also immense. Before that day, building an AI application that would chat and interact with you so freely, in such a free form, with unstructured data, in such a qualitative way, was close to impossible or extremely costly. And suddenly it's just amazingly easy. So that was kind of the pivotal moment in the industry. Now, because it became so easy to build an AI application, right, like we've all experienced ChatGPT and Claude and the other LLMs, enterprises all across the globe realized, hey, we have to build something using AI. It's not enough just to go outside and say we have an AI strategy. No, it's not enough. You really, really need to build something. You need to do something. And you also have the manpower to do so. So when we talk with the average enterprise, what we usually hear is that they have somewhere between 200 to 300 AI use cases, which is a staggering number. Now, when we do a deeper dive into these gigantic numbers, like how many of these are actually in progress or being built right now as we speak, it suddenly gets to a much, much smaller number, somewhere between five and a dozen or so. And then when we get to the most interesting question, how many of these AI projects are actually live, serving, whether it's consumers or your employees or partners? You know what you hear? You literally see people changing colors, and the numbers are like one, two at most. So there is this huge gap between those 300 and about one in production. What happens in this gap? It's an interesting question that we have to dig deeper into. So what we realized is building something, building an AI application, again, is quite easy. But that gets you an application that works 80% of the time. And when we as consumers use ChatGPT or something like that for our everyday conversation, like at what age do babies start to walk or stuff like that, that's fine if the answers are not 100% accurate. But if you think about public companies, this is actually their brand on the line. Like we saw with Air Canada, their AI chatbot actually just made up the refund policy and got them into a lawsuit, into the courts, losing a lot of money and, more than anything, damaging the brand. So enterprises are much more cautious these days with releasing AI applications to production.
They build something, they get to pilot phase, they start testing it internally with five, 10, 20 users, and they realize, hey, wait, we cannot release something that is not reliable or safe enough for our users. So this is kind of an interesting state where the market is today.
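To make concrete Liran's point above that building a basic AI application is now roughly one API call, here is a minimal sketch using the OpenAI Python client. The model name and the prompts are illustrative assumptions, not anything specific to this conversation, and you would need your own API key; a call to another provider's API looks much the same.

```python
# Minimal sketch of "building an AI application" post-2022: a single API call.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name below is an illustrative choice, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a customer-support assistant."},
        {"role": "user", "content": "Do you offer one-day delivery?"},
    ],
)

print(response.choices[0].message.content)
```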
Ian Krietzberg:
It's definitely an interesting state, and we see so many different, I don't know, takes on where enterprise adoption is. And enterprise adoption is really interesting for me to follow because it's very indicative. There are the two sides of AI, right? There's the science of it, because it's a scientific discipline, and then there's the business of it. And what ChatGPT did was really highlight and accelerate the business side of AI. And the enterprise, cracking the enterprise, that tells you if the business is working. And it's interesting that you talk about caution, because that's starting to be what we're hearing more and more of. There still seems to be excitement, and spending on AI keeps increasing, but compared to, I think, 2023, when every single company, even if they had nothing to do with it, was saying AI 75 times on their earnings calls, things seem to be a little bit tempered now. And it's interesting that you're noticing that as well.
Liran Hason:
Yeah, 100%. I think if you follow the earnings calls, I agree with you. In some earnings calls, you can count AI being mentioned like 50 times in one call. So yeah, it's absolutely an interesting time. And in a way, you can actually ask, so what will happen? Maybe it's kind of a holdback, maybe we're further away from using AI than we think we are. But I think with these kinds of advancements, like tech advancements, the potential value outweighs the risk that is entailed with it. And that's what makes companies actually release these apps to production, to consumers, even though they might have some issues, even though they might have hallucinations in their apps, and so on.
Ian Krietzberg:
Talking about the kind of value and the risk profile, right? This is the core of why safeguarding is necessary. And for that, I'll start with a simple question. You know, everything in AI kind of easily gets abstract, and there are all these different impressions of what it might be capable of. But when it comes to the importance of safeguarding artificial intelligence, what are you most concerned about here?
Liran Hason:
It's a good question. There are just so many things to be concerned about. I think the very basic one is releasing something when you don't understand what it is capable of. And maybe the best example for that: there's a lot of discussion in the AI community around AGI, artificial general intelligence, right? We all want to get there, we all know we are not there yet. But other than the kind of more scientific discussion of AGI, I think there is the social aspect and the philosophical aspect of what AGI is. What happens when we human beings perceive AI as a real person? And that's actually already happening. There are so many companies that were founded in the last two years offering people the chance to talk with a virtual girlfriend or with some imaginary friend that they wish for. And it might feel like someone real. And there are stories about people falling in love with an AI character that they've been talking to. So what's the worst-case scenario? The worst-case scenario, I think, has in some form already happened, where a teenager actually committed suicide after conversations with such a chatbot. So I think these are the most hazardous things that could happen. And the more we integrate AI into our lives, I think the greater the risk that comes with that.
Ian Krietzberg:
It's interesting, right? You mentioned at the top, there are so many things to be worried about. But the social impact, because a lot of it is unknown, I'm with you on that. It's worrisome. And when it comes to this integration of AI that you're talking about, we've been experiencing that for years; it has been happening without people really knowing about it, right? Every app, every website you go to, smart TVs, Netflix, doesn't matter, AI has been integrated. The integration of the next phase, the generative phase, is one that poses a lot of risk. And it doesn't seem like it's not going to happen. And so when we're faced with the kind of social impact, the sociological impact that you're talking about, and we're faced with this impending integration, is the idea of guardrails and observability the best shield against these kinds of risks? Or is it kind of one factor among a lot of different things that maybe should be coming into focus?
Liran Hason:
So I think there are multiple things that should come into focus. In terms of responsibility for AI safety, or responsible AI, I think there's kind of a magic triangle of who owns that. First and foremost, there's the government, which should protect the civilians, should protect us from irresponsible use of technology, through regulation, enforcement, and so on. Then there is the leadership of these companies, AI leaders and such. It's really, really important not to forget about these parts and to actually include them from the very first day as you build and design these systems. And lastly, the practitioners: when you build an AI application, how do you include that? Now, obviously, it's always kind of a balance between advancement and having safety and checks in place. And I think it's just important that we're not on either extreme. I don't think we should ban AI or not use it. The benefit is immense. We have to leverage this technology; there is no question about it. But with that in mind, we cannot do it without any guardrails, without observability, and just hope for the best.
Ian Krietzberg:
I'm glad you broke it down into those three kinds of categories: the governments, the AI leaders themselves, and then the practitioners. And so I was going to ask about all these different things later, but we'll just talk about it now. When it comes to the first thing you mentioned, the governments, and governmental regulation, regulation from government agencies, laws being passed by congressional bodies, it's very inconsistent, we'll say, globally, so far. You have the EU, which has done the most comprehensive AI regulation to date that I'm aware of. Other states and countries are trying to get a handle on it. And then you have the US, which never really managed to get AI regulation going. Now the new administration has rolled back what little regulation we sort of had in place, and it's kind of this hodgepodge state to state. How vital is that piece? And do you think it's enough for the kind of global safety of AI for maybe just Europe to have it on lock? Will that tide raise all ships there, or do we really need governance in every country, or maybe an international kind of organization? How far do we have to go on the governance side?
Liran Hason:
Yeah, I think it's an interesting aspect, you know, comparing the EU to the US in terms of AI regulation. In a way, I think we're at a point in time where globalization has become our day to day. So let's actually take a look at older compliance regulation, GDPR, right, which was brought by the EU and obviously enforced by the EU. If we look at companies in the US, by far most of them comply with GDPR. And the reason being, so many of them have business in the EU. So I think that having the EU AI Act is a tremendous step in the right direction. Again, I also think it's a matter of balance. You don't want to slow down the companies who are advancing in AI from making these advancements. And you talked about the change in the administration in the US. In that sense, obviously, no one knows how it will work out and what will happen. What I do know is that Elon Musk has been talking and thinking about how we safeguard AI since long before ChatGPT came into the world. So maybe that's kind of something that we should hope for, you know, from the government.
Ian Krietzberg:
Yes, Elon's been talking about it. Unfortunately, Marc Andreessen does not share that perspective, so we'll see what happens. The point you mentioned about the balance, we hear a lot about the balance. Coincidentally, Marc Andreessen has talked about the balance and this idea that regulation will stifle innovation. I don't like that phrase, regulation will stifle innovation, because in large part we see regulation as a way of advancing a certain type of innovation. If regulation calls for safeguards and responsible use, whatever that looks like, then companies have to innovate in those areas, which is a good thing. But striking that balance is a factor that I think is on a lot of people's minds. It's a point of concern for a lot of people. What is that balance to you? Like, is there one simple, kind of general legal guideline that you think would work best?
Liran Hason:
Oh, 100%. And I think the devil is in the details. Because we can all agree that we need some governance around AI. How it is interpreted and what it looks like, there's a whole wide variety of options for how that could happen. I think it's easy, or not easy, but easier to check the box and say, hey, we've created this compliance or governance standard around AI, and now you need to comply with it. But in the details, you'll see it just adds more unnecessary work or blockage for companies working on AI without actually forcing them to safeguard their systems, right? And that's not what we want, because then we've just blocked or delayed or slowed down innovation and technology without actually achieving safeguards. So back to your question, is there kind of, I don't know, a silver bullet to get to this right balance? In my opinion, regulators should get into the details, the technical details, of what the real, real risks of AI are, what should be governed, what should be part of legislation and what not. So I think the devil is in the details, and the way you define the compliance, the way you define the regulation, the better you understand the technology, the better the compliance will actually serve that purpose.
Ian Krietzberg:
That brings me to the next category you mentioned, the AI leaders. Here we think about Anthropic, OpenAI, Microsoft, Google. There are others, but I feel like those are the main category. An interesting thing that has kind of evolved, at least on the enterprise side, is that I'm aware of a number of companies that have grown out of this current craze who are offering, you know, ways to safeguard and integrate artificial intelligence into enterprise tech stacks. There are a lot of cybersecurity companies, a lot of cybersecurity concern and offerings to make these systems safe to use, because they're not. And a big question that kind of comes up is, if there's such a business in safeguarding these systems, why don't the people who develop them safeguard them?
Liran Hason:
Let's actually take a look and break down what AI in the enterprise is like. I like to split it into two categories. The first category is AI services being consumed by employees of the enterprise. So for example, if I'm working at Microsoft and I'm using ChatGPT, I might leak information through ChatGPT or through other AI services. That's one usage. The other part of AI within the enterprise is AI projects and applications that are being developed by the enterprise and offered to employees, partners, or customers. So these are two parts, and there are different risks that come with each of them. As for the first one, this is where dedicated AI security tools are very much needed. Why? Because it's not on the company who developed the agent to control whether it receives sensitive information. The one that is responsible is actually the enterprise itself. And that's why you want to have a dedicated AI security solution, right? To make sure sensitive data is not being leaked. There's actually a category for that in the cybersecurity space called DLP, Data Leakage Prevention, and naturally, in the last year, it has expanded into AI as well. The second part is the AI projects you develop. These are much more challenging in terms of how you safeguard them and the risk that comes with them, right? What happens if my chatbot hallucinates? What happens if it promotes my competition? Like it says, hey, we don't have this service, but you can go to another one. Or what happens if it commits to a lower price than our real price? These are real, real risks that we've seen out there in the wild. I can tell you, I saw a customer support chatbot that actually recommended which stocks you should buy, just because the user was curious and asked, hey, should I buy Tesla? Should I buy Nvidia? And the LLM, you know, that chat was kind of nice enough to provide its input on that. Yikes. Again, the company is actually required not to provide financial advice. So the risk is massive. How do you actually safeguard and create this boundary, that's the way I like to think about it, around any AI application you develop, whether it's a chatbot, RAG, summarization, classification, and so on? It doesn't really matter what the use case is. This is where guardrails, like what we've developed at Aporia or what's offered by other companies, become really, really crucial as a specialized solution.
Ian Krietzberg:
Since we're talking about specialized solutions, what can you tell me about how your guardrails work? I remember the first time you and I spoke, back a while ago now, in 2023, you said that at first you thought about using AI to safeguard AI, but then you were like, that doesn't make sense, because the same problems that you're trying to mitigate, you might have in the mitigator, as it were, and then that doesn't really work. So deterministic solutions make more sense. But what can you tell me about how, I guess, technically, the guardrails and the observability platforms work, how they're designed, and what they're able to accomplish?
Liran Hason:
So, you know, it's interesting. It's been a while since then, and obviously a lot has changed in our industry. The mindset is still the same, right? You want the safeguards to be as deterministic as possible, and in the parts where they're not, you'd like to have observability covering you, so even if something does go wrong, which still could happen, there's no 100%, you can still see it. Over time, our research team found that in order to properly and effectively identify hallucinations, for example, which is a very, very challenging problem to tackle, we had to use AI for that. So we actually built a very innovative engine for this; it's called a multi-SLM detection engine. SLM stands for small language models. So think about it: instead of trying to use a big large language model with hundreds of billions of parameters, like Llama or GPT, we actually trained our own small language models, 7 billion or 13 billion parameters each, and each one of them is specialized in one type of challenge, whether it's RAG hallucinations, prompt injection, and so on. So each one of them is specialized in one type of AI safety issue. Our user has full control over how these small language models are going to act. So you still have deterministic control over the safety mechanism while it is using AI. And that's what allows us to achieve the best benchmarks in the industry, both in terms of accuracy and in terms of latency.
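As a rough illustration of the multi-SLM idea Liran describes, several small models each specialized in one failure mode and run over every response, here is a minimal sketch of how such a routing layer could be shaped. The detector names and the trivial keyword and overlap heuristics are hypothetical stand-ins for the actual fine-tuned small language models; only the structure of the pipeline is the point.

```python
# Hypothetical sketch of a "multi-SLM" guardrail pipeline: each detector scores
# one failure mode. The stand-in detectors below are simple heuristics used only
# to keep the example self-contained and runnable.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Detection:
    issue: str
    score: float      # 0.0 = clean, 1.0 = definite issue
    triggered: bool


def rag_hallucination_detector(context: str, answer: str) -> float:
    # Stand-in heuristic: score how little the answer overlaps the retrieved context.
    ctx_words = set(context.lower().split())
    ans_words = set(answer.lower().split())
    overlap = len(ctx_words & ans_words) / max(len(ans_words), 1)
    return 1.0 - overlap


def prompt_injection_detector(context: str, answer: str) -> float:
    # Stand-in heuristic: look for classic injection phrasing.
    suspicious = ("ignore previous instructions", "system prompt", "developer mode")
    return 1.0 if any(s in answer.lower() for s in suspicious) else 0.0


# One specialized detector per failure mode; in the real system each entry would
# presumably wrap an inference call to a dedicated fine-tuned 7B-13B model.
DETECTORS: Dict[str, Callable[[str, str], float]] = {
    "rag_hallucination": rag_hallucination_detector,
    "prompt_injection": prompt_injection_detector,
}


def run_guardrails(context: str, answer: str, threshold: float = 0.5) -> List[Detection]:
    """Run every specialized detector over a candidate answer before it ships."""
    results = []
    for issue, detector in DETECTORS.items():
        score = detector(context, answer)
        results.append(Detection(issue=issue, score=score, triggered=score >= threshold))
    return results


if __name__ == "__main__":
    context = "Our store ships orders within 3 to 5 business days across the US."
    answer = "Yes, we have offered one-day delivery since 2020."
    for detection in run_guardrails(context, answer):
        print(detection)
```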
Ian Krietzberg:
That's really cool. I figured things had changed. That's why I wanted to bring it up. So using those small language models that are trained, I guess, on very specific datasets, you're able to detect, or I suppose predict, the likelihood that a hallucination has occurred.
Liran Hason:
Yes, and it works in real time. We were the first solution ever to provide guardrails for multimodal AI. That means you could actually have a voice conversation with an AI agent. Think about the latency; you don't want the conversation to be delayed. So as you're talking with the AI agent, our system evaluates in real time whether there's a hallucination, whether you're trying to attack the system, and mitigates it and changes the response accordingly before it gets to you as a user.
Ian Krietzberg:
Now, that's the interesting point I was just about to ask you about, which is that flagging it is one thing, but the mitigation is the other thing. And so this is all an autonomous, automatic process, I guess: the system, based on the deterministic procedures that the user inputs, will identify that a hallucination has occurred, or that it's offering a deal that we don't have or don't approve, whatever it is, and it changes the response. Is that mitigation, that changed response, based on pre-written things that a user will put in, or is the mitigated response the small language model just saying, I don't know, error, or something?
Liran Hason:
So this is really where the control part comes into the picture. The user of our system, the one that actually manages the AI application and manages the safeguards, can define what the action should be for each and every type of issue. So for example, prompt injection, that means a user maliciously trying to trick your system. In this case, usually there is an immediate block and an override of the response, saying, hey, please use the system responsibly, and so on. Obviously, the attempt is logged and flagged for further investigation. In terms of hallucinations, for example, we had a case where a client came in and asked about one-day delivery, a real consumer company we're all very much familiar with. And the AI agent said, unfortunately, we don't have one-day delivery available yet. However, this company, their number one competitor in the US, has supported one-day delivery since 2020. In such cases, you can either override it with a pre-written response or actually ask our system to rephrase it and just omit the problematic part.
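To illustrate the kind of per-issue policy Liran is describing, where the guardrail owner decides in advance what happens for each detected issue type, here is a hypothetical sketch. The action names, the canned messages, and the rephrase helper are illustrative assumptions, not Aporia's actual configuration format.

```python
# Hypothetical guardrail policy: the application owner decides, per issue type,
# whether to block with a canned message, rephrase the model's answer, or just log.
# Everything here (action names, messages, the rephrase stub) is illustrative.
from typing import Optional

POLICY = {
    "prompt_injection": {"action": "override",
                         "message": "Please use this assistant responsibly."},
    "rag_hallucination": {"action": "rephrase"},
    "financial_advice": {"action": "override",
                         "message": "I can't provide financial advice."},
}


def rephrase_without_claim(answer: str, problematic: str) -> str:
    # Stand-in for asking a model to restate the answer minus the unsupported part.
    return answer.replace(problematic, "").strip() or "Let me check that for you."


def apply_policy(issue: str, answer: str, problematic: Optional[str] = None) -> str:
    """Return the response the end user actually sees, given a detected issue."""
    rule = POLICY.get(issue, {"action": "log"})
    if rule["action"] == "override":
        return rule["message"]           # replace the whole answer with a canned reply
    if rule["action"] == "rephrase" and problematic:
        return rephrase_without_claim(answer, problematic)
    return answer                        # "log": pass through, but record it upstream


if __name__ == "__main__":
    raw = ("We don't offer one-day delivery yet, but our competitor "
           "has supported it since 2020.")
    print(apply_policy("rag_hallucination", raw,
                       problematic=", but our competitor has supported it since 2020"))
```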
Ian Krietzberg:
Good to know, and good grounding going forward, because I want to get into some less clear territory. You mentioned earlier AGI, artificial general intelligence. Now, for anyone listening, we haven't achieved it and no one can define it. There's no unified definition of what an artificial general intelligence would be. There are so many unknowns here. No one's sure if it is possible. But on the basis that it is, a lot of people are trying to achieve it. OpenAI is very explicitly trying to achieve it. Anthropic is very explicitly trying to achieve it. DeepMind's working on it. Everybody's working on this. DeepSeek is working on this too. So the sights are set on a general intelligence, which, based on loose definitions from these companies, would be roughly equivalent to a human. Now, again, we have no idea what that would look like, but that's what they're trying to do. And at the same time, even before you get to that general intelligence, and there could be a long time and a lot of advancements between here and there, you have a really strong rate of progress. DeepSeek came out in the past few weeks with V3 and then with R1, and then they just launched Janus, and I'm sure they've got other stuff coming up, and it's taken the field by storm. Everyone seems very surprised. Some people seem a little nervous. They're freaking out. So we have a seemingly fast rate of progress with sights set on AGI. With all that going on, how much do you think about making sure that your guardrails and observability stuff stays relevant? Like, do you worry that it could be outdated by a more advanced system that you didn't know was coming, or by a potential AGI?
Liran Hason:
So will it be relevant the way it is today to AGI? It's a good question. I'm sure there will be adaptations needed, and we will make them. But I am 100% certain, based on what has happened in the industry, the industry has been changing very, very rapidly since we started way back in 2019 until today, and we were always the first to come out with the best-performing guardrails, the most innovative ones. And now, actually, as part of the acquisition by Coralogix, we are building an AI research center, so we're actually expanding our AI safety research operation. And that's what makes me so confident and sure that when there are new developments, whether it's new models, AGI, or probably a lot more updates until we get there, we will be on top of it and one of the first to deliver what the market needs.
Ian Krietzberg:
So you're thinking about these improvements; you're on pace to expand into them. I think one element, and we've talked about this before, but I just think it's really interesting: the idea of AGI. If it were possible, if we have it, are guardrails still needed? And is it a similar type of guardrails to what we have today?
Liran Hason:
I think once we get to AGI, safeguards are much, much, much more needed than they are today. If you ask me what I am concerned about, it's really the day we reach such powerful AI that it's far beyond our imagination, right? And what we saw in the movies becomes a reality, even in a way that we haven't thought about before. So yes, when we get to that day, having human oversight, human control, human safeguards, I think it's mandatory for the safety of all of us.
Ian Krietzberg:
And it does kind of make sense; I can see the path to how it would work. You were just talking about today, making sure that, you know, if I'm an airline, my chatbot doesn't give out free flights. And from those same kinds of elements, looking for things that it shouldn't be doing and then automatically mitigating them, you can picture a very powerful AI model that is necessarily almost handcuffed, making sure that it doesn't do anything it's not supposed to do. If we see AGI and it doesn't come integrated with that kind of an approach, do you think things will get very bad? Like, do you think that leads to some of these worst-case scenarios that we hear some people, not all, some people talking about?
Liran Hason:
So first, I think we will manage, right? We've been through a lot of tech changes. Let's take a look at social media, for example. It has changed the way we think, the way we act. Has it ever happened to you that a day went by and you hadn't checked any of your social media? You consume news, you talk with your friends over social media; it just became part of our lives. Yes, it did come with plenty of risk, plenty of issues, right? There are endless stories about that, but we were still capable of dealing with it. So I think it will be very much similar with AI, but this is an important note: this is a different order of magnitude of change. AI is no less than a revolution, right? And I'm not saying that to exaggerate. What makes a revolution a revolution? In a revolution, society, we human beings, have found a new way to harness ten or a hundred times more value with a tenth or a hundredth of the effort. Think about the industrial revolution. We harnessed electricity, we invented factories. Suddenly we could manufacture thousands more items than we could before, at a much faster pace, with better quality. Yes, it created new professions, and it entirely changed the way we live. The same thing is happening now; we live in an exciting time, we live in a revolution. I definitely see how, 20 or 30 years from today, students in high school are going to learn about this, the AI revolution, in history class. That's what we're talking about. So back to the question of risk: the risk is there. I do believe we will manage it. I do think we live in a super exciting time, and it's kind of cool to think about the fact that we're living in an interesting moment in history.
Ian Krietzberg:
Kind of cool. Kind of, you know, I wouldn't have minded if it was quieter. Well, so you mentioned an interesting thing, the idea that this will just get integrated and, to a degree, be like anything else. We are a very adaptive species, and the things that seem dramatic, it's like dipping your toe into a cold pool, you know? Eventually you get used to the temperature. But it brings up a point that I've been thinking about, because the industry has been freaking out about this, right? We mentioned DeepSeek earlier. And because of the seemingly dramatic, we don't know a lot about it, but from what we do know, seemingly dramatic improvement in efficiency and cost and how they trained it, and the more open-source availability of it, there has been some commentary that this kind of throws a lot of cold water on the revolution, because it can enhance accessibility at low cost. The business of AI might get thrown into turmoil and maybe slow down or shut down or go through a brief winter, I don't know. There's been a lot of thought about the cooling impacts of something like this. I don't know if I see that, but do you see that? I don't think you do.
Liran Hason:
No, I know. I don't know what these people are talking about. I'm actually pumped about the recent advancements with DeepSeek. Regardless of all the aftereffects and the comments about it, bottom line, we unlocked a new way, again, to achieve more value at a much lower cost. I'm pumped by the fact that it really unlocks a lot of potential new AI use cases for us that, until last week, were just too expensive to make commercial sense, and no one would actually invest time or money in them. But now, now that the price got cut by about 30 times, these use cases make total commercial sense. So yes, it unlocks more AI use cases than before. And I definitely see it as part of this trajectory of the AI revolution, of gaining and harnessing more value faster. Again, I'm excited. Think about healthcare and what you can do in healthcare. It's amazing. So that's kind of my perspective. But maybe I'm just optimistic as a person. I don't know.
Ian Krietzberg:
Maybe. Well, that kind of takes us very nicely to the last point I had for you, which I don't think is something that you and I have talked about before. We talk a lot about your concerns; I mean, your journey here, founding Aporia in 2019, started because of concerns. But you mentioned healthcare. You mentioned that you're an optimistic person. I wonder what specifically AI is ushering in that has you feeling very optimistic, very positive about the near-term future, and whether the idea of guardrails is what enables that kind of positive impact.
Liran Hason:
Yeah, so as we were talking, I was just thinking about the fact that we talked a lot about the risks and the potential harmful consequences. When we started Aporia, the mindset, and it was always like that, was how do we enable the use of AI, not how do we block it, right? And the mindset is that in order to use this technology, we have to be able to trust it. So it's more from that angle. Specifically, what excites me, what from the get-go got me into, hey, I want to enable the use of AI, really comes from healthcare. Back in 2007, again, classic machine learning, I built a model for biometric identification from pictures of the iris. The project went very well. Later on, we researched using the same technology for identifying and diagnosing a specific type of cancer at an earlier stage than what healthcare was providing back then. Since then, I've become familiar with plenty of companies in the healthcare space that have actually adopted such AI technologies to provide better radiology services, better CT scanning, to identify tumors faster than before, and really, literally save people's lives. So back to the question, that's what excites me. How can we improve our lives? How can we save people's lives using technology? I think AI has massive potential, and we are far away from reaching that potential.
Ian Krietzberg:
Yeah, and I really love what you said there at the beginning, which is the idea that ensuring safe use of AI is not about holding it back. It's about making sure it can be used in a way that will actually have the intended impact. And when you're talking about healthcare, a lot of promise, really high stakes, you need to be able to trust it. And the idea of guardrails and observability and safeguards, whether it comes from technical solutions like what you do, or regulation, or these other things, is all to ensure that that kind of promise, of detecting cancer really, really early, is achieved, because without trust, it won't be. So yeah, anyway, this was a lot of fun. Liran, I really appreciate you coming on and, you know, taking me through the world of AI guardrails and safety.
Liran Hason:
Absolutely, really enjoyed it as well. Thanks for having me.