#4: The truth behind self-driving cars - Missy Cummings

Missy Cummings:
I think the biggest misconception about self-driving cars is that they're actually self-driving. Cars are having difficulty seeing the correct representation of the world. The problem is that companies don't want to admit that this risk is real and that they're never going to solve it. But the companies don't want you to know this because they don't want to admit that there's one area that they really don't have under control. How can we be sure that these systems are good enough? They don't have to be perfect. I'm not asking for self-driving perfection. I'm asking for slightly better than your average human driver.

Ian Krietzberg:
Welcome back to the Deep View Conversations. I'm your host, Ian Krietzberg, and I'm really excited about today's episode. My guest is Dr. Missy Cummings. She was one of the U.S. Navy's first female fighter pilots, she served as a safety advisor to the National Highway Traffic Safety Administration, and she currently serves as the director of George Mason University's Autonomy and Robotics Center. The topic is self-driving. Missy, thank you so much for joining me today.

Missy Cummings:
Thanks for having me.

Ian Krietzberg:
Absolutely. So there's a lot of interesting details I want to get into, but first, just because I'm personally curious, I want to start with you. I know your journey went from U.S. Navy pilot and engineer to now the director of Mason's Autonomy and Robotics Center. I'd love to hear about what drew you to studying and exploring autonomous systems.

Missy Cummings:
Yes, so I have what I call a very linear background, but I think it seems confusing to people that I would be a fighter pilot first and then a professor second. But indeed, it was critical that it happened in that order, because when I was a fighter pilot, that is, in the three years that I flew F-18s, I saw, on average, one person die a month, always because of the human-machine interaction. Never because of combat, but because the planes were designed, what I would consider, so poorly that the design itself was causing the accidents, causing pilot confusion, basically, in some cases, outpacing the human and not really appreciating how we needed to design a system to consider the human. So that's what led me down this path. And I spent the first decade of my academic career looking at unmanned aerial vehicles, what you call drones. And then once they were commercialized, I moved over into the self-driving world, because the technologies are all the same. And I knew that the battle that we would be having in cars would be much different and much more intense than the battle for whether or not we should be flying drones in national airspace.

Ian Krietzberg:
Why is the battle more intense for cars compared to something like drones?

Missy Cummings:
Your average person's intuition is that it should be easier to make a robotic car than a robotic plane because they think planes are more complex. But it turns out that the third dimension of height gives you extra risk protection. And so it's actually far easier to have a drone robot in the sky than on the ground.

Ian Krietzberg:
Getting into the self-driving cars themselves, we have a couple of different approaches, and seemingly they're starting to become a little bit more ubiquitous. Waymo keeps expanding, although their fleet size is not enormous. But with robo-taxis, or the semblance of robo-taxis, on the road, how do these things work, and how do they work differently from how you or I might think when we're getting behind the wheel of a vehicle?

Missy Cummings:
Yeah, this is pretty complex, and how I'm going to describe this is grossly oversimplifying it. But in general, what happens is there's a perception component, how the car sees the world; there's a planning component, how the car is going to estimate its trajectories; and then there's the actuation component, where commands from the perception and planning systems, which in the aggregate are called the stack, are sent to the drive-by-wire system, which actually actuates the wheels on the car. So roughly, that aligns with how humans perceive, think, and then act in the world. But where the real breakdown is is our ability to see, and to see not just one thing but all things together, and then almost immediately understand what the right trajectory is to handle that situation. That perception-cognition loop that we have is much faster and far lower in energy than it is for self-driving cars. And so one of the things that we routinely see, and this is going to come out in a paper that I'm publishing very soon, is that the bulk of the problems are happening at the perception layer. The cars are having difficulty seeing the correct representation of the world, and then they're also struggling with what I call a category of unexpected actions by others. They do okay. You know, their crash rates when they're by themselves and not having to interact with a bunch of people are low. But I was just having a conversation with someone in San Francisco who said, you know, I drove to work today and there was this guy in San Francisco standing in the middle of the intersection juggling. I'm like, yeah, that is San Francisco. That is why I probably never would have tried to do a pilot test of self-driving cars here, because that is truly an unexpected action by others. And we're not really sure how cars are going to respond to that, because I can assure you a man juggling is not in the training set of the cars.
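
A minimal, hypothetical sketch of the perceive-plan-actuate loop she describes; the class and function names below are invented for illustration only and are not any company's actual stack:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str          # e.g. "pedestrian", "cyclist", "juggler"
    distance_m: float   # estimated distance ahead of the car

@dataclass
class WorldModel:
    objects: List[DetectedObject]
    ego_speed_mps: float

def perceive(raw_frame: dict) -> WorldModel:
    # Stand-in for the perception layer: in a real stack this is where
    # neural-net object detection over camera/lidar/radar frames lives,
    # and where Cummings says most of the failures occur.
    return WorldModel(objects=raw_frame.get("detections", []),
                      ego_speed_mps=raw_frame.get("speed", 0.0))

def plan(world: WorldModel) -> Tuple[float, float]:
    # Stand-in for the planning layer: pick (target_speed_mps, steering_deg).
    # Toy rule: stop if anything is detected within 20 m, else hold 10 m/s.
    if any(obj.distance_m < 20.0 for obj in world.objects):
        return 0.0, 0.0
    return 10.0, 0.0

def actuate(command: Tuple[float, float]) -> None:
    # Stand-in for the drive-by-wire system that moves the actual wheels.
    target_speed, steering = command
    print(f"drive-by-wire: target speed {target_speed} m/s, steering {steering} deg")

# One pass of the perceive -> plan -> actuate loop on a fake sensor frame.
frame = {"detections": [DetectedObject("pedestrian", 12.0)], "speed": 8.0}
actuate(plan(perceive(frame)))
```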

Ian Krietzberg:
It's all about the training set, the training data. I guess, isn't this the challenge of all robotics and why robotics is a little bit more challenging, I guess, than people think, which is that when we're talking about real world interaction, how do we get enough training data and enough quality training data to account for everything that these things might interact with?

Missy Cummings:
Well, to take a step back for your listeners who are asking themselves, what are they talking about, training data? The way that you teach these cars to see the world correctly is that you show them massive sets of pictures and videos so that they can recognize objects in the scene that they're assessing. And so, for example, if I need to have the self-driving car recognize the juggler in the field, I would need to show the neural nets underlying the perception system a million images, let's say, of jugglers in all sorts of poses, in different outfits, at different times of day. And it's actually very difficult to represent an object in all the different ways it could actually be presented. Even during the daytime, it turns out that low sun angles, because they cause glare and different shadows, can be very difficult for these systems. And so by default, I'm 100% sure that every training data set out there is incomplete in some way. So the real question is, understanding that we've got incomplete data sets that are underpinning all of self-driving, how can we be sure that these systems are good enough? They don't have to be perfect. I'm not asking for self-driving perfection. I'm asking for slightly better than your average human driver.
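
A toy illustration of the coverage problem she is describing; the condition categories below are invented for the example, and real-world variation is continuous rather than a finite grid:

```python
# Even a coarse grid of appearance conditions multiplies out quickly,
# which is one way to see why every training set ends up incomplete.
from itertools import product

poses = ["standing", "mid-throw", "catching", "walking"]
outfits = ["casual", "costume", "high-vis vest"]
lighting = ["noon", "low sun / glare", "dusk", "night", "rain"]
backgrounds = ["empty road", "cluttered intersection", "crowd"]

combos = list(product(poses, outfits, lighting, backgrounds))
print(f"{len(combos)} coarse 'juggler' conditions to cover")  # 4*3*5*3 = 180
# A dataset that misses even a few of these cells, or anything that falls
# outside the grid entirely, leaves the perception model with blind spots.
```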

Ian Krietzberg:
I like the example of the juggler, because I think, right, when you're talking about neural nets, you'd have to train these systems to recognize and respond properly to that, versus how a human would do it. If I was driving and I saw a guy juggling on the side of the road, if he was in my way, I would stop; if he wasn't, I would keep going. It wouldn't mess my whole system up. But when these systems see things that are not in their training data, these are the edge cases that we hear about. So to get to that good-enough marker you were talking about, what's the approach to having, I guess, a guaranteed good-enough benchmark when we don't know how the training data sets might be incomplete?

Missy Cummings:
It's an excellent question, and it basically underpins not just self-driving cars, but really robots anywhere in our world where there could be a safety implication. Let's take the example of the juggler. I feel like during the day, if we could train the system with enough pictures of a juggler, assuming we could capture the balls in the air against the background, because background clutter is one of the problems that could keep the system from seeing it, but let's just say the system was really good and could see the juggler during the day. One of the things that we know is a huge problem with self-driving cars, indeed all computer vision systems, is rotating lights. So let's say that juggler is juggling three balls that are flashing. It would not change the juggler's performance at all. But those rotating lights inside the balls, because they rotate, look slightly different at every timestamp; they have a dynamic difference across multiple seconds. We've seen this with Teslas. We've seen this with self-driving cars. Computer vision systems really struggle with rotating lights. And indeed, we don't know how to solve it, we being academics. When we start to think about the uncertainty in the system, rotating lights fall into the unknown unknowns category. The juggler himself, I would call that a known unknown. Look, we know that there are jugglers in the world. We know that they get out onto the intersections. So, you know, it might have been unknown in that original instantiation of the training set, but we know about them and we can fix that. The rotating lights problem falls into the unknown unknowns. And we, computer vision people, engineers, computer scientists, have no idea how to stop that problem. It's kind of akin to large language models and hallucinations. We have no idea how to not have hallucinations in large language models. We also don't have any idea how to fix false positives and some of these problems with rotating lights. And so if we don't know how to fix it, then you have to ask yourself, well, what can we do to mitigate that risk in case it becomes a problem? That mitigation is both a technical solution and a policy solution, but we really don't know how to do either.

Ian Krietzberg:
Yeah, I mean, when it comes to AI, right, you're talking about hallucinations in large language models, which, for anyone who doesn't know, underpin the popularized chatbots we see in generative AI, and these are big problems. And there are certain techniques with large language models, if you clean the data very carefully, if you use RAG, retrieval-augmented generation, or if you only operate on smaller datasets, that reduce the risks of hallucination, but they've never really gone away so far. The rotating lights thing is fascinating. You mentioned earlier that a lot of the problem is in perception, and here it seems like with the rotating lights, you've got issues with perception and also with hallucination. Is this a problem that could be maybe partially solved with way better cameras on that side? Or is this a fundamental AI issue where we're just kind of up against a wall in how to respond to it?

Missy Cummings:
So the hallucination problem, I forecast, is going to be the number one showstopper for self-driving cars and also for large language models. You know, I think generative AI is great for creative tasks, but it simply cannot be guaranteed to give you a correct factual outcome, ever. And it's deeply alarming that the US government, for example, the military, says they're going to start using large language models for planning. They should not. Large language models, because they are guaranteed to hallucinate and will never give you a guaranteed factual outcome, simply can't be used in safety-critical tasks. So we have to start thinking the same thing about computer vision systems that hallucinate, which all of them do, by the way, because the hallucination problem in computer vision is the same as in generative AI. It's a guaranteed outcome. They simply cannot not hallucinate. I'm not saying that means we can't use them. It just means that we have to up our game in terms of risk management. We have to develop better systems that can predict when and where we're in these regimes of unknown unknowns. So for example, fire trucks and police cars have rotating lights, and Teslas and Waymos and Cruise self-driving cars have all had trouble around police cars and first responders with these kinds of lights. So one risk mitigation path could be, okay, when we recognize that there's a first responder, and you can do that through a number of ways, the computer vision system can see the fire truck, the big red truck, we could put some kind of RFID sensor on it, or we could have a connected system that would let the cars know one is around, then the plan should be: I'm going to pull this car over until this rotating-light vehicle is no longer anywhere near this car, and then I'm going to continue my operations. So that's actually a very straightforward solution, and it will mitigate the risk. The problem is that companies don't want to admit that this risk is real and that they're never going to solve it. At least under the current ways that we use neural nets, unless there's some kind of fundamental breakthrough, we're always going to have this problem. But the companies don't want you to know this because they don't want to admit that there's one area that they really don't have under control. And that is also true for large language models. The companies swear, swear, swear, OpenAI, which is not open, swears that they're going to get this hallucination problem under control. They will not. They never will. And I predict that in the next year, everyone on the planet is going to start waking up to this and realizing, especially after a number of lawsuits hit the books, that you just cannot use these systems to, for example, replace customer service. My county in Virginia wants to use them to handle 911 calls. Are you insane? That is the absolute worst possible use of them. I just think that the tech world, and this is true in the United States especially, wants to believe in the magic of these technologies. But the over-subscription to what we want to believe about technology is overshadowing our ability to think logically about it.
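
A hedged sketch of the pull-over mitigation she describes, with a made-up detection flag standing in for whatever vision, RFID, or connected-vehicle signal a real system might use:

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()       # continue planned route
    PULLED_OVER = auto()  # wait at the curb until the scene is clear

def next_mode(first_responder_nearby: bool) -> Mode:
    # Any detection channel (vision spotting the big red truck, an RFID-style
    # beacon, or a connected-vehicle message) can raise this flag; while it is
    # raised, the car stays pulled over rather than trusting perception around
    # rotating lights.
    return Mode.PULLED_OVER if first_responder_nearby else Mode.NORMAL

# Simulated detections over successive time steps: clear, then a fire truck
# with rotating lights appears, then it leaves the area.
for nearby in [False, True, True, False]:
    print(next_mode(nearby).name)
```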

Ian Krietzberg:
I agree with you on that one, that people will start waking up to the fact that hallucinations are here and they're not going anywhere. I think that will happen as well. And I think the reason it will happen is because we're seeing a massive rollout of this kind of unsupervised experiment of AI, whether it's generative AI or self-driving cars or all these other applications; they're being pushed out there, right? Waymo very recently closed an almost $6 billion funding round, and they're using it for expansion. They're in a few cities, Phoenix and LA and San Francisco, and I think a few places in Texas, and they want to be in more spots. And, I think this is even more recent, they say they're doing 150,000 Waymo rides each week, where a couple of months ago it was a hundred thousand, so they're scaling up. Elon Musk says, and I don't know if this will actually happen, likely not, but he says that unsupervised autonomous Teslas will start operation next year in Los Angeles and in Texas. Now, there's a big bar to regulatory approval, but these things are getting pushed out. It's this big experiment, and in the case of self-driving cars, people have died. I think NHTSA has 14 open investigations into Tesla's FSD, which isn't really full self-driving at all. So in terms of people waking up to the fact that in safety-critical situations you should not be using what we call AI today, in whatever application: is there a way, and I don't know that this would even happen because of the corporate interests at play, but is there a way to study and work on mitigation and on improving the efficacy of these systems without just pushing them out in this massive experiment that impacts people? Is there a way to test these things safely?

Missy Cummings:
So what you're talking about is really the description of the research that I'm working on. You know, one of my PhD students is working on how we can change the way that we assess computer vision outcomes to align with reality. Another one of my students is looking at where all the sources of subjectivity are in the creation of AI that are probably causing it to fail once it gets past deployment. So I do believe that there are ways to do this, but I think it fundamentally just has to become a priority for the people who are trying to commercialize these technologies. I did want to go back really quickly and clarify one point, because I do think that this is a common misconception. Tesla is not self-driving, has never been self-driving. I don't care what Elon Musk claims; they have generated zero self-driving miles for regulatory purposes. I have not seen any reports at all that show that they've taken a person out of the loop and are doing any kind of driverless testing. So when you talk about the fatalities, of which there have been many, I do think we should be fair to the self-driving car companies. Other than the Uber that ran over a pedestrian many years ago, there have been no fatalities from self-driving cars, and I think that we should be clear about that, although Cruise almost killed someone. For the most part, they've had a lot of success, and I don't think we should take that away from them. And Tesla is doing all the damage, because they're claiming to be self-driving but are not self-driving, which is giving drivers too much confidence. I do think that we're at a real step-change point right now, and that is with Waymo getting on the highway. So while Waymo has been, to their credit, pretty safe up to this point, the move from staying at 45 miles an hour and below to going up to highway speeds does carry significant risk. They know that, because they canceled their trucking program. Waymo used to have a trucking program called Via, and they canceled it. And it's interesting to me, given that cancellation, that they're moving back into the highway zone. And indeed, what you'll find in the Tesla crashes is that these highway speeds, and this is just a physics problem, are leading to a lot of these deaths. So I think that once we see Waymo actually get on the highway with no drivers, that will be the hallmark of whether the technology is truly going to be successful at scale, all places, all the time. I don't think it's going to be a success for them. That doesn't mean that they can't still operate in urban areas at slow speeds. Indeed, this is the sweet spot. I wish that Waymo was in every city, at every airport, taking people from the airport to the downtown area. This is a huge need in most major cities. New York, like, it's horrible to try to take the train from the airport to downtown, right? This is the perfect application. So I wonder sometimes why the companies are not satisfied with moving into the domains that I think are best suited to them, instead of trying to do all things everywhere all the time. We will see.

Ian Krietzberg:
Waymo is an interesting case. I'm sure you saw, I think a few weeks ago, that they released their safety data from their first 22 million miles, compared against humans over that same period, and they said self-driving cars are safer. I've got the numbers here: 84% fewer airbag-deployment crashes, 73% fewer injury-causing crashes, and 48% fewer police-reported crashes. But to me, it's not really quite a complete story, because the fleet size is so small, operating in areas with so many people. We don't know the average or median duration of the rides they're doing; we don't know a lot of details about them. And the scale is relatively small: 22 million miles is great, but humans drive a lot more. And in terms of the safety scaling up alongside their operations scaling up, especially when you get into highway driving and just more opportunities for higher-speed edge cases and strange reactions, I don't know that the safety would scale up in kind.

Missy Cummings:
Yeah, I know. I take umbrage; I don't think it's 22 million. The last sets of numbers I saw were less than 10 million. I personally have analyzed the California data, and that's from all of Waymo's operations, and at best, the Waymos are on par with TNC drivers, transportation network company drivers, meaning your average Lyft or Uber driver. And I don't want to take that away from them. The fact of the matter is that they are as safe as one group of drivers. But it turns out those drivers are about four to six times more dangerous than your average driver. So that's kind of a meta point I just mentioned: I think everyone should be alarmed that Lyft and Uber drivers are having accidents four to six times more often than you or me. We need to look into why they are being forced to use their phones so much, and whether that is a good idea. But I don't want to take it away from Waymo. They are, from the data they are reporting to the state of California, on par with TNC drivers. But your intuition is 100% right: all the statisticians agree that you need about 250 million miles of operations, and that's not including your testing, that's just commercial operations, to make a statistically fair comparison with humans, who generate trillions of driving miles every year. So there is no true statistical comparison that can be made right now, because the numbers are so low. But Waymo is doing this to try to persuade the public and the regulatory agencies that they're safe, which I understand; that is what companies do, right? They're trying to run an advertising campaign using papers that I do not believe should legitimately be published, and even where they've been published, they are sketchy at best. We need to sit back and start asking whether we should even be making that comparison. We are using it, seemingly, as the most important marker, but the reality is that we know that they are rear-ended two times more often than humans are rear-ended. And let me link it back to those hallucinations. We have strong evidence that suggests that Waymo vehicles are exhibiting much higher emergency braking rates than humans. Those emergency hard-braking maneuvers are catching drivers behind them off guard. So this idea that if you hit somebody from behind, it's your fault, we need to start rethinking that. Because the hard braking actions by Waymos and Cruises are part of computer vision. Remember, we talked about the hallucinations. We think that not all, but a large part, of the emergency braking actions are being caused by hallucinations. So what are we going to do about that? The companies won't admit that it's happening, and they won't let us see the data that would let us unequivocally know that it's happening. So if they're not going to do something about it, then regulators need to do something about it. They need to paint the cars a different color, tell everybody to stay five car lengths away from them. You know, I do not think the public understands that you're in real danger of rear-ending a self-driving car if you're following one. So we need to either put a policy in place that better protects human drivers, or we need to start putting in technology fixes and/or thresholds that they have to meet before they can be deployed.
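
Some rough back-of-the-envelope arithmetic on the mileage point (not Cummings's own calculation), assuming the commonly cited human baseline of roughly 1.3 fatal crashes per 100 million vehicle miles, which is an assumption added here rather than a figure from the interview:

```python
# Why a few tens of millions of miles can't settle the comparison: at human
# rates you'd expect well under one fatal crash in 22 million miles, so
# observing zero tells you very little about the true rate.
HUMAN_FATAL_RATE_PER_MILE = 1.3 / 100_000_000  # assumed human baseline

for fleet_miles in (22_000_000, 250_000_000, 1_000_000_000):
    expected_if_human = HUMAN_FATAL_RATE_PER_MILE * fleet_miles
    # "Rule of three": observing zero events only bounds the true rate at
    # about 3 / miles driven, with roughly 95% confidence.
    upper_bound_per_100m = (3 / fleet_miles) * 100_000_000
    print(f"{fleet_miles:>13,} miles: ~{expected_if_human:.2f} fatal crashes "
          f"expected at human rates; zero observed still allows up to "
          f"~{upper_bound_per_100m:.1f} per 100M miles")
```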

Ian Krietzberg:
Yeah, painting them a different color just so people know, kind of like the student driver signs, right? That sounds like a really interesting idea. Now, there's a couple of points that I want to get at based on what you were just saying. The first is, we've talked about Waymo a lot, and we've talked about Tesla, and I want to talk more about them in a little bit, but they're not the same. As you mentioned, Tesla is not self-driving, despite the fact that Tesla's driver-assist software is called Full Self-Driving. It's not. Waymo is, I guess, considered a robotaxi; I think they're a Level 4 on the SAE scale. But their technology is different. Waymo has a pretty lengthy stack of different sensors, LiDAR, radar, on top of the computer vision, as backstops. Whereas in the Tesla, the human is the backstop, which is why you have to pay attention with your hands on the wheel and eyes on the road if you're in a Tesla on FSD. Starting with Waymo, when we're looking at that whole stack of different sensors and technologies, what are the limitations of that setup? Is that a pathway to getting to good enough?

Missy Cummings:
So you said, what are the limitations to that setup? Which setup are you talking about?

Ian Krietzberg:
Waymo's.

Missy Cummings:
So I think that multi-sensor fusion is a critical path forward. I do not believe in Tesla's approach. I'm sorry to all you Tesla drivers out there, but you are never getting real self-driving. Computer vision alone, an all-computer-vision system, will not cut it. I am going to send my daughter to college on the expert witness fees that I'm charging for all the Tesla deaths that are happening because of the over-claims that computer vision can do everything. But that doesn't mean that just because you have LIDAR, that solves the problem. Indeed, take LIDAR, radar, computer vision, and maybe ultrasonic sensors, maybe you can throw them in there too; we can see that they're not perfect, because we see these accident modes in the self-driving cars. If LIDAR, radar, and computer vision were perfect, we would not see these hallucinations happen, and we would not see these heavy emergency braking actions. So it's better to have multiple sensors, but it's not ideal and not perfect. So that's why we tell self-driving car companies, look, you really need more risk management, whether we paint the cars pink, for example, or pull the cars over when we know rotating lights are nearby. We've got to start being smarter about how, where, and why we're deploying vehicles in certain spaces, because of the fact that we will, unquestionably, get hallucinations out of these cars.

Ian Krietzberg:
And the LIDAR, like you said, better, but not enough to overcome the hallucinations.

Missy Cummings:
Yes, and I think one of the things that people don't understand is that LIDAR, for example, will not work very well with moisture in the air. Rain, or right after the rain, the sheen that comes on roads, oil-slicked roads after a rain, that's enough to not only destroy but, even worse, degrade the LIDAR image. And that's one of the problems that we see. Dust on the sensors can also seriously hamper the quality of the information coming in. And what's worse than having no information is having degraded information, because then maybe the system thinks that it's still working okay, because it has some information, but does not realize that that information is degraded. So we also need to do a lot more about understanding the right weather conditions, the maintenance. This is why you yourself will probably never own your own fleet of self-driving vehicles: the maintenance required to make sure all the sensors are clean and the software is up to date means there are so many points of liability that only large corporations are going to be able to run fleets of robo-taxis.

Ian Krietzberg:
That's an interesting point. And obviously that's one of Elon Musk's many, many promises. He is a promise man, he is a hype man; he's not a scientist. But you see a lot of other car companies that are pursuing different iterations, I guess, of driver-assist software at different levels. BMW and Mercedes are both very interested in selling cars that come equipped with software that can, whatever, take over when you're on the highway between this speed and that speed. But your point about a true robotaxi making the most sense inside a robotaxi company, because of the maintenance: do you think that's going to be something that stops the other carmakers that are trying to build higher-level self-driving systems? Because to do it right, to do it safely, you need really expensive systems and very thorough maintenance.

Missy Cummings:
Well, we've seen Apple get rid of their self-driving car. Ford got rid of their self-driving car. There is clear evidence that there are companies out there that are just like, we're out. We're out, it's too expensive, we can't handle this and the complexity. I also think the car companies, the traditional OEMs, are lagging behind Tesla. And, you know, it's funny to me to watch them. It's like watching high school kids: Tesla's the cool kid, and then all the other companies run behind the cool kid trying to act like the cool kid. But nobody really asks, you know, why is that the cool kid? Is it really because they have all those sensors, or is it because he wears a leather jacket, for example? When you start to ask your average driver, your average driver, not what we would consider tech bros who are between 18 and 48 and love all things technology, I mean, that's Elon Musk's base. But if you start to ask everyone else, it turns out, increasingly, people are trying to turn off all these technologies in Teslas, but also in other cars. And I think we're going to see an increasing pushback. People do not want a lot of technology in their car that, first of all, surprises them. That's a big one. People do not like the lane change assist, like if you start to drift out of your lane. People like lane departure warning, but they don't like it when the car tries to put them back in the lane. And, you know, you will see lots of complaints, so much so that there's a website called MyCarDoesWhat.org, a whole group of people who've put together, effectively, a consumer advocacy lobbying group to start getting rental car companies to tell you when they've put these technologies in the car, because they're surprising people when they drive. And people who own these cars don't want to have to fight with their system. I think we're seeing basically AI slash cool-tech exhaustion. You have to fight with your phone to make sure that you've got all the right settings: where do I get to the privacy settings, and how do I turn everything off? You know, now we're getting that way with cars. And people are increasingly unhappy with the level of sophistication that they have to have to be able to operate these things. So we will see. I predict that the next really viral car is going to be something like a Tesla on the outside but a Volvo on the inside. Lots of airbags, virtually no fancy technology, but AEB, automatic emergency braking, lane departure warning, and a reversion to buttons. You know, we've seen everybody get so sick of glass displays, because they shouldn't have to go through a glass display to change the heat or the air conditioning in their car, right? So I think we will see kind of an old-school, new-school merging in the future.

Ian Krietzberg:
That would be fun. I hate the glass screens. I drove a Tesla once; it's just not for me. I like my buttons, I like my switches. That's what I like in my cockpit. I've got two more questions for you here. One of them: a lot of what we've talked about, right, is that there's a lot of hype on one side, coming from the companies, who coincidentally want people to buy their products, right? And on the other side, there's a lot of people that don't know what to believe or pick up on the hype. And as such, there's a lot of misconceptions. This is true of all of AI. But I'm wondering, when it comes to, I guess, the whole sector, but particularly self-driving cars and robotics, what are some of the biggest misconceptions that you run into, that people tend to have around these systems?

Missy Cummings:
So I'm not going to speak about Teslas, because they are not self-driving. I think the biggest misconception about self-driving cars is that they're actually self-driving. We know that all the self-driving car companies, the Waymos, the Cruises, the Zooxes, all have remote operations. So there is a human, and often teams of humans, helping to control one car. Indeed, Cruise admitted that it takes 1.7 humans to control one of their self-driving cars. What should jump out at you when I tell you that is: why even have it, then? If it only takes one taxi driver to operate one car, and you have a robo-taxi that takes 1.7 humans, you're never going to break even, much less make a profit. So it's not clear how that's going to shake out, and whether that number is going to get higher or lower. There are a lot of reasons that those human supervisors are in place. They must be in place because the cars are still having so many problems. So maybe that number will come down, but right now I don't see any evidence of it.
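
A toy version of the break-even arithmetic behind that point; the hourly costs below are invented assumptions for illustration, not company figures:

```python
# With more than one human per vehicle, labor alone already exceeds the cost
# of a conventional driver, before adding sensors, compute, and maintenance.
remote_operators_per_car = 1.7   # ratio Cummings cites for Cruise
operator_cost_per_hour = 30.0    # assumed loaded cost of a remote operator
driver_cost_per_hour = 30.0      # assumed loaded cost of one taxi driver

robotaxi_labor = remote_operators_per_car * operator_cost_per_hour
print(f"robo-taxi labor: ${robotaxi_labor:.2f}/hr "
      f"vs. one driver: ${driver_cost_per_hour:.2f}/hr")
```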

Ian Krietzberg:
I guess the last thing I've got for you, really quick: the conversation we've been having is that hallucinations are part of the architecture; they're not going anywhere. From the Elon Musk side of things, and I don't agree with this either, I don't agree with much that he says, it seems that his path is: create general intelligence, AGI, an AI that would be roughly as smart as a human, whatever that means, and then self-driving cars will be solved and hallucinations will no longer exist. I don't really know about that. But in terms of what it means for self-driving cars, as we keep improving to a degree, and thinking about mitigation, hopefully, do you think the idea of having safe self-driving cars in the near future as a common, kind of ubiquitous sight, something that is reliably, you know, guaranteeably safe, do you think that's a realistic achievement?

Missy Cummings:
I want to break what you just said down into two parts. The first part is: we are nowhere near artificial general intelligence. I don't care how many foundation models you make, Ray Kurzweil; you can have all the data, you could train a neural net with all the data in the world, you're still not going to get AGI. Neural nets are just simply pattern recognizers. AI is a tool in the toolbox. I'm not saying we won't ever get to AGI, but we're not going to get there in my lifetime, and we're not going to get there in my daughter's lifetime unless there's some kind of dramatic breakthrough. So everyone just needs to calm down. You're not going to start losing jobs. Large language models are not amazing tools; they're just helpful sometimes. And so you have to take that same idea with self-driving cars. We are never going to have self-driving cars driving everywhere, all the time, under all conditions. You are not going to own your own self-driving car fleet. But that doesn't mean that they won't be successful in some domains. I think package delivery: Nuro had a great product, a purpose-built small vehicle that did last-mile delivery in a well-mapped area. That's a pretty good application. They haven't yet commercialized it, and it's not clear if they're going to be able to, but I think that's a good one that companies should be thinking about. Zoox, Amazon's self-driving shuttle, makes way more sense because of economies of scale: you're taking multiple people around, and if you use small footprints in well-mapped areas with clear pickup and drop-off zones, those are also great, amazing applications. I'm not saying we will never have them. I'm saying that because of some of these inherent unknown unknowns, we're just not going to have them at the scale that we're being promised. But we shouldn't look a gift horse in the mouth, right? We need to think more about risk mitigation, and put them in the domains that make sense, and give them the additional technologies and policies that they need to be successful. But it's not going to transform transportation as we know it. And taxi drivers are not going to be out of a job.

Ian Krietzberg:
That's it for today, Missy. Thank you so much for your time.

Missy Cummings:
Thank you so much for having me.
