#6: The Coming Revolution of Man and Machine - Nada Sanders

Ian Krietzberg:
Welcome back to the Deep View Conversations. I'm your host, Ian Krietzberg. And today, and I know I say this all the time, and hopefully I will always be able to say this, we've got a fascinating episode for you. My guest is Dr. Nada Sanders. She's a distinguished professor at the D'Amore-McKim School of Business at Northeastern University and an expert in forecasting and global supply chains. She's also the author of the book The Humachine. The book breaks down what AI in the enterprise might mean. It's very focused on a human-centric approach as the kind of tool for future enterprise construction. Now, there are a lot of complexities to this, to what the book explores, the reality that we're dealing with. You know, in some instances you have AI working kind of in collaboration with humans, or humans working in collaboration with AI, to achieve better outcomes. And you see this in a couple of places. I think drug discovery is one of them. In other environments, you don't have that at all. You have corporations using AI, frankly, as an excuse to reduce their staffs. And that's just one element, right? There's a very complex regulatory environment going on here. There's a very complex scientific environment going on here. Current artificial intelligence systems, as we talk about all the time if you're a subscriber to the Deep View newsletter, are bounded by very severe limitations that are a component of the current architecture. If you've heard the term hallucinations, there's no evidence that hallucinations are going away. If you haven't heard the term, it basically refers to confident mistakes that these systems make, where they don't get things right. They confabulate. They output incorrect information. When people act on that information, that's when we have a bit of a problem. And there are just so many complexities, and it's such a fascinating environment. But it does seem to be moving very quickly.
And so Nada and I, in a slightly longer than usual episode, get into all of it. And truly, I think there's a lot here that's totally worth thinking about for everyone. Where regulation lands, we don't really know. We won't know. A lot of things about AI we don't know. And we've talked about this in previous episodes, such as the first one that kicked us off, right, with Gary Marcus. And that comes up a little bit in the ensuing conversation here: people getting involved matters. People knowing what's going on and knowing what's at stake and choosing to care. And that absolutely matters and could absolutely impact the trajectory of how this technology is integrated, how it's used, what it's used for. And hopefully, as Nada is hopeful, it could help ensure that humans remain central to what's going on, right? This is human society. Humans should remain central to what's going on. So as always, there's a lot. I think it's really cool. I had a lot of fun talking with Nada, and I'm sure that will come out in the following hour-plus as we dive into all of it. So buckle up, get ready. Thanks so much for being here. Nada, thank you so much for joining me. How are you doing?

Nada Sanders:
Great. Thank you, Ian, for having me. Really happy to be here.

Ian Krietzberg:
I'm so excited. We were talking off camera before we hit record, and I think this is going to be a fantastic conversation. Now, you, of course, are the writer of the book The Humachine, and that's the main thing that we're going to be talking about today. I read the whole book. Fascinating. It was airplane reading in between going from here and visiting my buddy in Texas. And the place that I want to start, right, the impetus for this second edition, is the pandemic. And I just want to kind of start with the link between the global pandemic and the AI rush, how those two are kind of interrelated, and your findings in that kind of arena.

Nada Sanders:
Well, first I want to say I'm really impressed that you read it. That was one of the questions: was it the first or the second edition? And the link between the first and the second, and the impetus for the second, is actually really important, because we're a globally connected world and we can't look at any one of these things, AI, technology, inflation, all of it, in isolation. And the world is very different now than it was in 2019, 2018. So to kind of backtrack, I've been doing this research for a very, very long time, and it's ongoing. I talk to executives all the time, I spoke to a few last week, and I'm following in real time what is happening in the business world, the corporate world, and then in advancements with AI and technology, and how they are married, and then of course, obviously, with global supply chains, tariffs, and all of it. So with this research, what was being done in 2018-2019, the first edition of this book came out December 2019. The impetus for that first edition was to go into organizations and to find out the latest and greatest of what AI and technology can do. My background is in STEM, and I'm an engineer, a mechanical engineer in a past life. I have a PhD in operations management. So it was really like, let's look at technology. But what happened is every company, and I'm not making this up, there wasn't an exception. And we looked at hundreds of companies, talking to CEOs and SVPs and technology leaders. Every company said it was the culture, it was the people, and it was really quite stunning to see that. Hence, The Humachine came about, and that first edition was really about the contours of the enterprise of the future, which really combined humans as the centerpiece. This is December 2019, and then we know what happened in 2020. And it was really staggering, because remember, December 2019, The Humachine comes out. I remember being in Boston, at Boston Dynamics.
We're talking about the latest and greatest in technology and all of it. And then we're locked in, it's March 2020, and I see a New York Times piece that tells me how to sew my own mask. And I'm just like, oh my God, how did this happen, right? So obviously we know what happened in the meantime. With all of this, we were asked to do the second edition. And there are findings that I can share with you that were really surprising and unique. But the bottom line is that with the pandemic, we really saw the world come to a halt. We saw how connected we were, how we relied on technology. You know, the average person became aware of what a supply chain is, what happens when you have a shortage of a semiconductor. You know, the average person just thought, I just go to my computer, I press a button, Amazon, something shows up. And how it all works and how it comes to happen, people really didn't think about. And suddenly they began to think about the complexity of all of it. And during the pandemic, I have to tell you, I slowly became frustrated. And let me explain. So I'm a professor. In the beginning, I got called all the time to comment on what was happening with hand sanitizer, or what was happening, do you remember, with the meat plants that were closing down. We had the issues of eggs and cheese and all of it. And at first, you know, I remember being on CNN and CNBC, and it was really cool. Like, wow, look at me. But then after a while, it got kind of old. Like, I wanted to say, can't we get this straight? Okay, you're asking me to repeat the same crap over and over and over again. It's the same stuff. The way things are moved through the system, technology enables communication. There are challenges with scaling, meaning that, you know, if you make 20 things, it's a lot more difficult to make 2,000. You've got throughput, so when we had the vaccine, or when we had the different versions of it.
It's ultimately the same principles of supply chains and how they work; it applies to everything. So when we emerged from the pandemic, we come out, and I was thinking, oh, you know, finally, we've learned lessons. We're going to get our global supply chains to function, because just like you and everybody listening, you know, I didn't want to be locked in either. And I wanted to use my knowledge to help so I could get out too, so I could get my mask and hand sanitizer and go to a restaurant just like everybody else. Well, then comes the launch of ChatGPT and OpenAI, and this whole other hype cycle comes out, where it is really a big bang. And we are now in this reality where there are these technological advancements that are so hyped up. And so we were asked again to write the second edition of the Humachine book, and believe it or not, to my surprise, with the first edition I learned the importance of humans in the enterprise, because that's what everybody told me. With the second edition, it was humans on steroids. Look, this is not that we're going to jettison technology. It's not that we're going to jettison AI, that we're not going to use it. Of course, it is an essential tool. But nevertheless, it is a tool. What I learned is that even more important are the human skills that we are atrophying, that are lacking: our ability to connect, to interact. We are slowly losing those skills. And you know what's going to happen, Ian? If you look at an enterprise, and I'm purely making this one up, but if you take companies that appear similar, let's say Coke and Pepsi. And obviously, Coke and Pepsi, if anybody's listening, you're going to say, no, no, no, we're different. You're different in nuances, but look, you're in the same industry. We know what you make. If you look at Coke and Pepsi, their access to AI, to LLMs, to deep learning, it's similar, right? They're going to have access to the same kinds of models. But then what is going to give one a competitive advantage?
It is going to be the ability to use them. And it's the ability to make those decisions that connect them to their suppliers, their customers, to their markets; it's the human interface. And that is what we write about in the book. And in fact, one of the key things in the book, I think, is Kasparov's Law. It's that process of integration of people and technology. And I have to tell you, Ian, I had the opportunity to speak with Garry Kasparov this past summer. He visited Northeastern, and I had a chance to chat privately with him. And I was saying, Garry, tell me, you've got to tell me, what is going to be the difference? What is the thing that is different? And he said, you know what? He said, number one, in every endeavor, in every enterprise, we need to understand that 90% of the time, the algorithms, just let them do what they're going to do. They're going to be better and smarter than we are. But we have to have the expertise and the knowledge so that 10% of the time, we make decisions. We intervene. We override the system. And that is going to make the difference between good companies and excellent companies. The excellent companies are those that are going to thrive. And those are the companies where the experts, the humans, know when to override the systems. And I think that is really important. So we come back to humans and the ability to interact.

Ian Krietzberg:
And I guess that human element, right, you're talking about companies and enterprises and Pepsi or Coke, but that same idea kind of applies to any group or organization: not just human oversight, but human focus, human intentionality. There's so much there, right? And kind of the first thing I want to pull apart, right, is you mentioned skill atrophy. Now, I think this is a tremendous risk that isn't really talked about when we talk about farming out skills to automation. I think even as things stand today, right, when you're talking about introductions of AI tutors in schools and essay writers, and, you know, I saw a tool the other day that simplifies fictional text, so make The Great Gatsby more readable, turn a sentence into a much smaller sentence, right? We're losing skills there if you look at that at scale: reading comprehension, writing, thinking, as it relates to writing, which helps us think, right? And that's just today. But I think on top of that, there's a layer of, you know, we do live in a time of a changing climate. We live in a time where we are seeing massive storms, where power goes out. And as you mentioned in the book, the robots don't run if there's no power. And the risk of us kind of farming out vital skills of human interaction, but also things that we need to know how to do, at a time when there's not necessarily a guarantee that we'll always have the power to run these systems all the time, puts the idea of atrophy very high on a list of concerns that I feel like are just not discussed.

Nada Sanders:
I think it is extremely important. I've talked to a lot of business leaders and executives, and I think there are a couple of things. I think what we are seeing is this: if we look at gen-AI models, they're so eloquent, right? And as humans, we just think that they're so smart because they can speak so authoritatively, right? It's just a ruse. But people who have an expertise, what we call domain knowledge, people who are experts in their area, they actually do better with them, because they can use them as this amazing tool to query. And they know how to pick through hallucinations. They know what to ask. It's not just this general questioning; they can get pretty specific. Individuals who are novices don't know how to do that. They don't have the knowledge, the domain expertise. What I am concerned about, among other things, is this: you know, we have a natural pipeline in enterprises across sectors where you take in, you know, new incoming talent, and you put them through the paces, and they learn over time and eventually gain that domain knowledge and domain expertise. I'm seeing two things. On the one hand, I'm seeing a lack of hiring and layoffs of younger talent. I'm seeing this in some companies. You keep some of the senior talent, who are working with the AI as their apprentice, if you will, and they're working together. The problem is that severs or cuts off this pipeline of talent that needs to be there, because we're going to lose that ability to bring these people in who are going to be, you know, coming in and developing that domain expertise. That takes time to develop. On the other side, I am seeing companies that are letting go of more mature workers who have that knowledge, simply in order to pay for a lot of the AI investments. And it's just not very smart.
They're going to find themselves flat-footed, because what I'm hearing and seeing is that a lot of boards are putting pressure on their companies to, you know, say that we're doing something with AI, because we've got to be on the forefront, and we've got to, you know, just say that we're doing it. The fact is that the use cases haven't necessarily been thought through very carefully. How you actually measure the ROI and that return, what the KPIs are, that has not been thought out. And definitely, how do you train people? How do you keep them in the ranks? How do you keep this pipeline of talent that understands the business, the ever-changing business environment? Because, you know, at the end of the day, it's always going to be the two in tandem. Now, that's not the case in every sector, right? We're talking in general terms. But I'll tell you, for example, I have a financial advisor, been with him for years, love him. Now, granted, I know I can do it by myself, I've got the algorithms that are great to do trades and all of it, but I still want to talk to Bill. Bill, how are you doing? How are the kids? How's the family? And I know that Bill might not be as smart or as good as the AI, but Bill holds my hand, if you will, and talks me through it. When you're in medicine, of course we all know, you know, everything from radiology through melanoma identification, we know what AI can do. It's magnificent, but I don't want an algorithm giving me my diagnosis. I still want a human telling me. Deals and promises, when it comes to big accounts, big clients, big suppliers, it's ultimately still going to come down to that human interaction. Do I like you, Ian? Do we get along? Do our kids go to school together? Whatever it is.
And those are those human skills and we're losing all of that if companies are not careful, if society isn't careful, if universities aren't careful to make sure that we are training and teaching and developing those skills.

Ian Krietzberg:
Right, yeah, there's a lot of thoughtfulness, right, that seems to be lacking in this just kind of desperate rush. But, you know, your point about Bill, I like that point. I think it raises an interesting idea that also is not often talked about, and it goes beyond, you know, the initial thing, which is: right now, when we talk about LLMs in medical environments, right, current AI technology, which some researchers don't even like calling artificial intelligence, because we don't necessarily have it, definitions are vague, we have prediction machines. And these things are bounded by pretty severe limitations, right? Even if an LLM hallucinates 1% of the time, that's too high a percentage in a medical field, in finance, right? But beyond that, too, to your point about, you know, talking to Bill and having that kind of human connection: AI is not necessarily the solution in every business and every situation. I think we see with call centers that the most consistent complaint I hear about call centers is, I just want to get to a human. Why can't I get to a person on the other end of the phone? And this has been going on for years, long before the current generative systems. And a conversation that I've had with some researchers is along the lines of, humans like humans. They like human things. We're social beings. Even if you get rid of all those limitations and all those things that kind of prevent it from just doing everything, people don't necessarily want that, right? People might very well pay more to have a human financial advisor that they trust, or to use a medical doctor, or to get food at a restaurant staffed by humans and not futuristic robots wheeling around the place, right? We crave connection.

Nada Sanders:
We actually crave connection. And, you know, in my work, as I'm evolving and paying attention, I'm learning more and more about that. Remember, again, I came from a STEM background, and within that, my expertise is in forecasting. My dissertation was on forecasting with quantitative models, ARIMA, things like that. And then as I worked with companies, and you and I were talking about this before we actually started, my very first project, I was in my early 20s, and I remember going into a company, a large organization, thinking I was going to dazzle them with my mathematical models. And I looked back and I thought, boy, was I silly. I was naive. I had no idea. Because there are emotions, there are nuances. If we want an example, or examples, of the inefficiencies or limits of mathematical models, look what happened during COVID. Look what happened with the election. Regardless of the models and all of it and the polling, the outcome of the US election was a surprise. It was a surprise because people have emotions. They react emotionally to things, sometimes not wanting to articulate that. That applies to everything. And so in business, and it's interesting, too, I was talking to somebody in freight forwarding, which is very much a logistics, supply chain task. There are all kinds of algorithms that can optimize your freight forwarding, but one of the people said to me, look, I'm going to commit to somebody that I trust, because the algorithm can say whatever it says, but if I know, Ian, that you are going to give me the delivery on time, I trust you, you're going to keep your word. Because at the end of the day, we are in the physical space, and I need physical goods to show up at the right place at the right time, not in an imaginary world or a digital world, but in the physical world.
And if I trust you to do that, that is a heck of a lot more important than anything that is committed to in the digital space. So I think that we, and I'm telling everybody in organizations, look, we like to think in a binary way. Are we for AI? Are we against AI? Are we marveling at it? The reality of it is it's the two together. And AI and machine learning models and analytics, they are so much better than we are. That's Moravec's paradox, right? They're so much better than we are at so many things. You and I get tired, you know. We can process only so much. But we have this very deep need for human connections. As long as we are operating in the physical world, we are DNA, we are proteins, we make gut decisions even in the business world. Fostering that, fostering that in a team environment, those are things that I think are going to be ever, ever more important for all enterprises, whether we're talking about, you know, consumer products, retail, electronics, whatever it is, and being able to have a system that integrates technology. And the other aspect that we did not touch on is how expensive these technologies are, how much energy they consume. And so I think that it's really important for enterprises to be very careful and strategic about what they acquire, what the use cases are, how they acquire it, and then invest in training of their people, and invest in how decision making and processing is done within the enterprise in order for that integration to take place, and so that human skills continue to be fostered, that we don't get isolated. Because we're seeing social media, isolationism, loneliness, atrophying of the human skills, all of it, and that ultimately propagates into a whole host of different issues.
And I think being able to focus on that is important for companies, and I say companies in a very broad sense, enterprises, and that could be government, it could be, you know, public institutions, nonprofits, NGOs, as well as, obviously, large corporations and small businesses. For every enterprise, I think it's absolutely important not to forget about people, to cultivate the human skills more than ever, and then rely on technology to automate aspects of the tasks to make it easier. But that integration of the two is absolutely critical.

Ian Krietzberg:
Right. And that's kind of the whole thesis, right, of the Humachine, which is: humans are very good at these things, machines are good at those things, and if you combine them together, in what I kind of visualize as a yin-yang type symbol, right, you will get the most out of both of those elements. But at the same time, right, at the same time that you have this human-as-an-asset approach and argument, as of right now, we're not seeing that play out in each environment. What we're seeing, and to a degree this is kind of the, if you look at the birthplace of what we refer to as generative AI, we talk about ChatGPT and Gemini and Claude, there's been this evolution from big tech, where you have the internet companies that kind of became the social media companies, and the social media companies are now becoming the AI companies, and the idea is just to kind of integrate and leverage power in those ways across society. And in a lot of ways, a pitch that seems so fundamental to AI, because, again, of its enormous cost and intensity, is in lessening the number of people that you have working for a given enterprise. And I think you mentioned some of this in the book, and we're seeing it start to play out, where some companies are replacing their call centers with mostly AI. I know IBM's CEO aims to, if not openly replace, then reduce hiring by a significant amount. And in the creative fields, right, in the freelancing fields, in writing and software work and code generation and illustration, I think a study recently came out showing that there was a significant reduction in demand on online freelance platforms for these roles that is associated directly with generative AI, and a slight increase in demand for people who are going to use generative AI in their software, writing, or illustrative work.
So the way the dynamic is starting to play out is, you know, already kind of antithetical to the idea of human as an asset, and more so human as a roadblock: you know, what if it's someone that we won't have to pay for? What if we can do more with a smaller marketing budget, right, because of this? I'm wondering what you have seen on that side, the concerns about job loss, which I think are so intense in certain industries, because unlike other industrial revolutions, this one came for knowledge workers, right? We didn't think in the past that machines would even give the illusion of being able to compete with accountants, writers, right? These other things.

Nada Sanders:
Yeah, actually, I've given, and am giving, a tremendous amount of thought to this, because you're absolutely correct. So as you know, I sort of marry AI and technology with enterprise and supply chains, and I'm constantly, you know, being asked to comment. We've seen over the last few years, whether it's the dock worker strike that just recently happened, or whether it's, you know, the rail strike and rail workers. And one of the things that I've been saying, actually, for a few years now, you know, if it were me, and I'm not in the negotiation space with labor or any of that, but in addition to money, or rather than money, I would be demanding re-skilling. I think that re-skilling, I think for us to say, look, these processes are not going to be automated, is naive. They're going to be automated, full stop. Like you mentioned, call centers, these are going to be automated. There are a couple of pieces to that. I think, A, they're going to be automated. I think so many of the skills are going to be automated, no question about it, and I'm going to come back to that in a second. But I will say that companies that are able to pivot and add a really special human element are going to be really winning in a marketplace where this becomes standardized, right? How do you differentiate yourself? Will you differentiate yourself with this, you know, really outstanding customer service? But you've got to do it in a way that is cost effective. You know, you can't just, you know, offer it to everybody 24/7 kind of a thing. But kind of going back to the re-skilling: our attention span as humans, we are consumed. We are consumed by social media, by the work environment, in every aspect. What that is doing is diminishing our ability to come up with creative solutions that will allow us to come up with new ideas and new businesses in a world where new roles in the work environment are going to be necessary.
So yes, we do hear on the one hand that current jobs, many of them, are going to be lost. Look, we can naively say that the docks, we're not going to automate the docks. The docks, the shipping docks, are going to be automated. Whether we like it or not, it doesn't matter. I could stomp up and down all I want. It will happen. So what does that mean? If I can create an opportunity to teach people to ideate better, to tap into more ideas and their human skills, we might then see people coming up with small businesses, or, within their current enterprises, new roles that are more human in nature, things that you and I can't imagine. People ask me now, well, what new jobs will come about? You know, it takes time to see those. It takes time to think about those. And I think that the majority of people simply don't take time to think, to process. And I think we need to give humans the time, the bandwidth, to actually process, to blue-sky, as I call it. And you know, obviously, the term has been used, as we both know, but we need time to blue-sky and skills to ideate, to come up with new ideas and new offerings, whether it's in care, and I'm thinking of everything from assisted living to, if you're at a corporation like Siemens and you're an SVP thinking of creating new positions or new jobs, we need to come up with those. We cannot just simply demand old jobs to stay and just stop technology. But I think if we pause and say, you know, technology's great, it can help us, but how do we work in this new environment, keeping in mind, again, everything is changing. Our climate is changing. Supply chain disruptions are changing. We have a new pool of talent that we need to usher in and give human skills. One of the companies, and I've talked to so many, but one of the companies that I thought was particularly interesting, and I can give his name.
Rod Harl is the CEO of a company called Alene Candles, and it's a small company, relatively speaking, in New Hampshire, won lots of awards, and they supply candles to companies like, you know, Limited Brands and so forth. But what I thought was absolutely fascinating is their focus on developing human skills. Now, yes, they've got, you know, state-of-the-art technology on their assembly lines, but they actually have people who come in, trainers, to develop things like interpersonal communication. They come in like once or twice a week, and they train everybody, and I mean everybody, from senior managers down to assembly line workers to janitors, everyone. How do I communicate? How do I emote and express my feelings in a better way? How do I resolve conflict? You and I, Ian, if we're in a team and we're working and creating, well, it's very easy to say, oh, we're all going to create, we've got a team. But how do you actually nurture this, right? How do you resolve inevitable conflicts that come about? So they've actually done this. And Rod has said to me, and I've interviewed him multiple times, he said, that is our secret sauce. Our secret sauce is investing in talent. And the other thing that's kind of a spillover: he said to me that some of the laborers have commented to him that these very skills that they learned in the corporate sector, they were actually able to take home. They took them home, and I think one of them said, you know, we were at Thanksgiving dinner, and you've got your usual disgruntled uncle at the dinner table. And he said, I was able to take some of those skills and actually apply them and not get into a conflict at Thanksgiving dinner, because I now have better skills to communicate and to negotiate and to dialogue with my fellow humans. We've lost a lot of those skills, and many of us never had them to the full extent.
Not only do we need to bring them back, we need to further develop them, and I'm actually doing that myself, because those are the things that are going to be the winning combination with technology, which is here to stay, no question about it. And I'm seeing it with my students as well, you know, because when you look at the Gen Zs, it's so easy to argue with somebody on social media, but how about doing it in a way that is productive, where I look you in the eye, where I'm not hiding behind a screen? What's going to happen is that's going to be the winning combination for companies that are able to do that, because everything else is going to become, you know, what we call order qualifiers in business. It's going to be standardized. You know, just like we all use the internet, access to these models is going to be ubiquitous. Obviously, some are going to use it better than others. Companies that are in R&D are going to be state of the art. They're going to use it better than others. But in a world where you need to communicate with customers, with clients, with suppliers, create business alliances, you're going to have to have more of those human skills. And, you know, the other thing, too, if I may add: AI, and I think there are different kinds of AI, was used, for example, to solve the protein folding problem with AlphaFold at DeepMind. And that is just a great example of the kind of scientific endeavor where technology and AI is fascinating, is incredible, because you're dealing with something that doesn't have emotions. You're dealing with the 3D structure of the protein and amino acids, and it's brilliant. But when you're talking about how people are going to react during a pandemic, how people are going to vote when there are so many variables and they feel a certain way and maybe they don't want to share it, what products they are going to buy, it's very emotive.
You know, what a number of companies, a few companies, are doing now is hiring futurists, people who actually embed themselves with customers to see what people need in ways that they cannot articulate. Ford did this in some of the development of their product designs, watching how people use their cars post-COVID, realizing that people sleep in their cars. Whether we like it or not, there's no judgment, right? They work in their cars. They eat in their cars. So how do we make the car a little more accessible? The same is true with all kinds of products, because if you ask people to articulate what they want in product design, many people, most customers, can't. Or if I ask you, do you want A? Do you want B? Do you want C? You're going to say, yeah, I want A, B, and C. Give me the whole thing. But maybe I watch you, and I use psychology, and I observe what you need. And it may not be something that you're even able to tell me. That's going to be the winning combination.

Ian Krietzberg:
And that's a really interesting point. The conversation about re-skilling, we hear that a lot. And I think that's often referring specifically to technology: giving people the understanding to use these systems in a better way, to know more about these systems. You have a lot of companies that are engaging in efforts to do that; I know Microsoft and IBM are two of them. But there's the idea of re-skilling not just on that side, but on the human stuff. We've lost that, you know, as you said, due to social media and these other aspects, and younger generations growing up with a phone in hand but being, you know, not inclined to answer the phone, right?

Nada Sanders:
Absolutely.

Ian Krietzberg:
These kinds of human elements, and then, you know, the idea of the future of jobs. That's one we hear a lot: we reskill and there will be jobs. We've had job displacement in the past. In the 1800s and early 1900s, there used to be a career of lamplighters, who would go around and light lamps. We don't have that anymore. And I think when we think about future jobs, to me, there are kind of two paths, I suppose. And what we don't know, as you and I said before we got started, there's a lot here that no one really knows. And that might be disconcerting, but we just don't know the answers to things yet. To me, there are two paths, and we just don't know how they're going to converge yet or where they're going to separate. On the one hand, you have, I guess, the normal sci-fi future where the technology becomes like the internet, right? Everyone has it, everyone uses it, it's grounded, it's a tool, and then the human aspect is what matters. In that environment, I'm of the belief that any creative jobs, at least, if we're talking about those industries that are lost, will come back. Because humans crave connection. They crave movies that are made by someone. They crave music that is made by someone. There have been studies showing that if a person is looking at two images, both generated by AI, but one is labeled as generated by AI and the other as made by a human, they prefer the one labeled as made by a human, even if it was actually generated by AI. That label gives the idea of greater connection. And so I think those jobs would come back, right? And that kind of supports these ideas that people want people. Now, the other path, right, is the path that you hear OpenAI and Sam Altman talk about a lot, and Elon Musk sometimes, which is: is the end goal to erase human jobs, to create this kind of system where you don't have workers?
And these guys have talked about a universal basic income to accommodate for what they claim is an imminent reality where jobs will not exist. Now, again, we don't know. There's no evidence to suggest that what they're saying is true or will happen, and I think there are a lot of limitations there. But I'm wondering about those two paths: the kind of normal path versus the UBI, job-replacement, very dramatic path that would put these companies in positions of very significant power, and to me would have massive implications for society. What do we do? How do people have money to engage in these economies?

Nada Sanders:
So I know I can say a lot; I have a lot of thoughts. I think it's a very important question. I very strongly believe that we do need to have some regulation. I am extremely troubled by the fact that we don't. Look at how many industries have it: look at medical innovations, look at, you know, the pharmaceutical products we produce. Look at aviation. We need to have some regulation; there's no question about it. Right now, it's kind of a free-for-all. And, you know, I actually get quite upset when I see something like, oh, look what AI created. Well, AI didn't create anything. It didn't get motivated. It didn't look at the sunset or the sunrise, it didn't fall in love. It just replicated in this great, fantastic way. But basically, as far as I am concerned, personally, I think what has happened is the greatest heist in the history of humanity. Everything that humanity has created through time, from Dante and Voltaire through the Beatles, has been, you know, pillaged. And now we say AI created it. Well, AI did not create anything. It just replicated and combined. It is plagiarism on steroids. And I'm hoping two things. One, I am very much hoping that we're able to get some kind of regulation in place. How that plays out, I don't know. I'm really hoping citizens at some point do get angry enough, because we're going to lose art. We are going to lose it. I mean, look at what happened with the writers' strike, right? What was at the core of that? Do you want movies? We're basically going to wipe out entire industries of creative writing and creation if we don't do that. I want art that is created by humans who are moved and who are imperfect, who are perfect through their imperfection. That is what we are looking for. Do we want a reality where there are no jobs? Well, as humans, we need motivation.
We need, you know, it's funny, well, not funny, but I was at a conference last year, and there was this probabilistic algorithm for predicting survival in the medical area. They had all these variables, and I asked them, I said, what about the will to live? And they paused, and they said, well, you know, we didn't include that. Well, I can tell you, the will to live is a massive factor. And they were like, wow, that's really a good idea. As humans, how many humans do you know who don't want to do anything? We're naturally creative. We want to innovate. We don't want to sit around and just stare. And so what I'm looking for is a reality where we can automate things that are rote and mundane but enable people to create, and create a better future. It may be in services, and I do think, again, in the marketplace the companies that are able to tap into that are going to win. In The Humachine book, we dissect not only how these two are combined, but how the actual integration would take place, and how the structure of the enterprise has to be different: it has to be more agile, it has to be more loose in terms of how the functionality takes place. And of course we're going to automate certain jobs. I do not see a future where everything is automated. I do hope that we get regulation in place where we are able to control the kind of free fall that I think we are in right now. We are marveling at something that I think is a lie to us. I mean, you know, every time we talk to any of the many, many versions of ChatGPT, obviously it's wonderful and sounds so smart, but it's a stochastic parrot.
I will add one other thing, and I don't know how this is going to play out, but I know how forecasting models work. With forecasting models, you've got the training data set, and then you've got your forecasts. Well, the training data sets for Gen AI have been everything that humanity has created, but now, as we move into synthetic data, synthetically generated data, are we going to see more hallucinations and more mistakes? I think it's quite likely. Because we know in forecasting, one of the basic rules is that you cannot run a mathematical model on its own forecasts. You start getting this cascade of errors. So I think that's going to be really interesting. And as you pointed out a little while ago, well, maybe in some areas it doesn't matter. But if you're talking about medicine, aeronautics, aviation, pharma, you'd better make sure; even a 0.001% matters. You don't want a mistake. So I think it's going to be a balance, and we're not going to be in a world where there are no humans doing anything.
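The rule Sanders cites, that you cannot run a forecasting model on its own forecasts, can be illustrated with a toy simulation (all numbers here are invented for illustration, not any real model): a fitted model whose coefficient is off by about one percent is fed its own output, and the small one-step error compounds as the recursion deepens.

```python
# Toy illustration of the error cascade when a model forecasts from its
# own forecasts. True process: x[t+1] = 0.95 * x[t] + 5 (mean-reverting).
# The fitted model is almost right: its coefficient is off by ~1%.

TRUE_A, TRUE_B = 0.95, 5.0
EST_A, EST_B = 0.9595, 5.0   # small estimation error in the coefficient

def true_step(x):
    """How reality actually evolves one period ahead."""
    return TRUE_A * x + TRUE_B

def model_step(x):
    """How the slightly-wrong model thinks reality evolves."""
    return EST_A * x + EST_B

x0 = 50.0
actual, forecast = x0, x0
errors = []
for step in range(1, 21):
    actual = true_step(actual)        # reality evolves from reality
    forecast = model_step(forecast)   # model feeds on its own output
    errors.append(abs(forecast - actual))

print(f"error after 1 step:   {errors[0]:.3f}")
print(f"error after 20 steps: {errors[-1]:.3f}")
```

Each pass through the loop applies the model to its own previous forecast, so the tiny per-step error never gets corrected against reality and grows monotonically, which is the cascade she describes.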

Ian Krietzberg:
Right. Yeah. We definitely want to make sure aviation is safe. You know, please keep AI away from aviation.

Nada Sanders:
Absolutely. Absolutely. And also the pharmaceuticals. I mean, we forget. When you think about oncology and diabetes medication, we all take it for granted. We're in this era where we follow social media, and I think it's deluded a lot of people into believing that we don't need experts. What is expertise anyway? And it does frustrate me, because we rely on experts. If you're having a heart attack, Ian, you want to get to an expert.

Ian Krietzberg:
I'm going to want a doctor.

Nada Sanders:
You want a doctor. You want to go to the ER. You want a cardiologist. You don't want to go on YouTube and go, well, what do I do now? You need a root canal? You're not giving one to yourself. You want to go to a specialist. We need experts, and we still have to value expertise, and the nuances of what that is and what that means. So that's where the regulation part, I think, is really important. It comes in with what it means for children, what it means for the generations that are coming forth, and then for enterprises: creating a path of apprenticeship to take, you know, young graduates, young talent coming in, and have them become the experts of their domain as they move on through the organization. That is what's going to be the difference. Allowing AI and technology to run the assembly lines, to do what needs to be done, but knowing what you do when there's a cyberattack. What are you going to do when there's a breakdown? And it happens. You know, it happens all the time, and it's going to happen increasingly, because as those technologies evolve, obviously the issues with cyber become much more acute, and we need to have humans who can intervene and know what to do. We all need that.

Ian Krietzberg:
Expertise, and to that I would add evidence also, right? Those two things are huge. I want to get into the regulations, but before we get into that in a little more detail, I want to circle back to a point that you made a couple minutes ago about the will to live, and how those researchers hadn't thought to include that in their data set. They hadn't thought to include that variable. And I just think it's such a compelling example of what we're dealing with. You mentioned stochastic parrots, right? These systems are very good at providing the illusion of intelligence. And to people who maybe have not dug very deeply into the construction of these models, they might look like magic; they might read as magic. But as you were saying, they are trained on the corpus of all human creation. And at the end of the day, AI progress, AI technology, isn't some self-improving inevitability. It's grounded in and based on what human researchers choose to do. And so if a human researcher doesn't think about including a variable, then the algorithm won't be trained to predict based on that variable. If a human researcher is using data that is not comprehensive, not inclusive of different ethnicities and countries and races and languages, then you have systems that maybe are good in English but are really bad in other languages, right? And so there are very clear, direct lines between what humans are putting into the machine and what the machine is spitting out. Now, at the same time, large language models are black boxes. No one really knows what's happening inside of them. But we do know the link between the training data and the output. That's very clear. And that link is vital in researching and understanding the capabilities of these systems.
And this is why opacity is so bad in this industry. Because these models are opaque, because the companies haven't said what they've trained on, researchers are limited in what they can study. If we knew that ChatGPT was only trained on one book but was able to write books in all these different ways, well, that would be quite an achievement, and maybe it would suggest higher capabilities. But being trained on every book ever written and being able to write facsimiles of the things it was trained on, these are not the same thing, right? Human intelligence is so unique because we can do a lot from a comparatively small number of inputs. These models need everything, and with everything, they're still so limited. And so I just thought that was a really interesting point about the will to live and the intentionality and comprehensiveness of algorithmic design and variables.

Nada Sanders:
It is. And I think it's also a really great example of how, coming out of STEM, we have for decades tried to become more like machines, to think like machines and look at those variables. Something like the will to live is a very human feeling, a human aspect that is difficult to measure; it's difficult to even think about including it. They didn't even think about including it. And now we get into what other human traits become important that are not the traditional things they have followed in medicine and in this particular model, which obviously would be things like age and blood pressure, the usual metrics. Understanding more of what makes us human is an area that I'm really interested in. This is, again, coming from an engineer, and really delving more into the psychology and neuroscience and behavior of humans. One of the books that I was reading recently is Notes on Complexity by the liver pathologist Neil Theise. He has studied everything from ant colonies and how entities come together, how much complexity we need, what is too much, what is not enough, to how these natural signals that we give to one another, in this case ants, work: how they signal, and how behavior signals. I write a little bit about him in one of the chapters of the book, which, by the way, he doesn't even know; I've been so busy trying to grow and learn, I didn't even contact Neil to say, hey, do you know that I quoted you in the book? But it's absolutely fascinating, because I'm trying to mirror from nature, to see if there's something that enterprises can learn from as they create teams of people, in how teams and members of teams interact and provide feedback to one another in order to be able to create and to respond. Because right now, every enterprise needs teams that are agile, that know how to signal one another.
You know, great examples in the human world are in the ER, emergency rooms. So I'm just reading his material and trying to learn more, because tapping into those human skills and human aspects is going to be more and more the differentiator, the differentiating factor, as I think there are right now at least diminishing returns on some of the technologies. Look, this technology is going to grow. It's amazing. I'm not knocking it by any means. But I can tell you, even I have used it only from the standpoint that I know what to ask it. What if somebody doesn't? And of course, the hallucinations. I've put my name in ChatGPT many times, and it's interesting. It's like the game two truths and a lie: most of the stuff is true, but then, buried deep in there, there'll be something. And if somebody doesn't know, they get lulled into thinking, oh, this is all correct. And then it'll say, oh, she addressed the United Nations. Well, I've consulted with hundreds of companies, but I've never addressed the United Nations, and it's buried in there. And the second round it'll be something different: all this true stuff, and then, well, she's a founder of this company, and that is completely false. But one doesn't know, unless one has that domain expertise that I already spoke of, where you really know your stuff. Again, you can use it as a fantastic input; it can help dialogue with you when you know what to query. Ultimately, you as the expert are responsible for that final output, that final outcome. You have to know your domain, whether you're a marketing manager at CVS, or an ER doc, or a researcher at Pfizer. You have to know how to use it, what to ask.

Ian Krietzberg:
In my own messing around with these systems, I've had the same kind of scenario. Often when they release a new model, I test it on common use cases, right? Article summarization, I think, is a big one. So I'll feed it articles that I've written and ask it to summarize, and, two truths and a lie, right? It'll invent a quote from a person who didn't say that, or who didn't exist. And I know I didn't talk to XYZ person, because they're not a real person, and if you read the article, they're not in it. You know, even today, I'm working on this series of prediction pieces for 2025, which is messy because prediction is hard, but we're going to try anyway. And I have all these comments from all these people; it's a 20-page document, and I asked Claude to synthesize it based on some categories. And it did so accurately, but so much was cut out, right? And so what are we missing here? We don't know. If all you go on is that summarization, you're missing so much, and it's just important for me, at least in the work that I do, to find the original source. I need access to the whole thing. A summary, even if it were trustworthy, is just not enough, because what are we missing? What if there's one sentence that it chose to cut? We don't know why it chose to cut it, but it might be vitally important. And also, you know, your point about the ant colonies, right, and the beauty and kind of amazingness of nature. I hope a point that is not lost on anyone, and the biggest takeaway that I've had in starting to study these systems and learn about them, is that the human mind is a miracle. It's just so fascinating. We don't understand most of how it operates. It is miraculous that we're able to do what we're able to do. The links between consciousness and intelligence and cognition and creativity and emotion.
I have so much more respect for what I have in my head, right, by studying these things that attempt to mimic it.

Nada Sanders:
I do too. And you and I both know, and everybody listening: how many times have you met someone and you just vibe with them? You just vibe. Well, what is that, right? And I'm sure psychologists and neuroscientists have answers for us, but we still don't really fully understand the human mind, what it means to be human. And I think it is disrespectful to us as humans to assume that we've just been able to replicate this and we don't need humans anymore. We don't need the human creativity, the innovation. One comment I do want to come back to, you had mentioned some of the work you're doing, that I think is important as we think about prediction and forecasting of the future. More than ever, rather than relying on any one model, I think risk assessment and probabilities: so basically not one future, but likelihoods of futures. Which futures are possible, and what is the likelihood of each, with two caveats. One, whether you're an enterprise, a supply chain, or an individual with a career path in mind, the same rules apply. And two, how do you prepare? What do you do? How do you remain limber, keeping in mind that this is a dynamic process? The probabilities I associate with various outcomes are going to vary from moment to moment, from day to day. As new information becomes available, I reassess those probabilities, and then I redesign and redeploy. That has everything to do with every enterprise, whether you're talking about war, the medical supply chain, or one's career. And so when I talk to companies, I'm always saying, more than ever, scenario planning. And the way you do it is you assess the environment, associate risks or probabilities with it, create a structure that is very dynamic and agile, and then keep reassessing in order to respond to what might happen.
Right now, I've been called a lot, with the new administration coming in and the tariffs that could be coming, and I've been asked, well, what's going to happen with them? Well, the reality of it is, tariffs are not good or bad. It's all probabilities and how something is used. Once we look at what could happen, we can associate different probabilities and then prepare for the different outcomes. The same is true, obviously, with AI technology and everything that we do, but never forgetting about human judgment.
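The scenario-planning loop Sanders describes, assigning probabilities to possible futures and reassessing them as new information arrives, is essentially Bayes' rule applied repeatedly. A minimal sketch (the scenario names and every probability below are hypothetical, invented purely for illustration):

```python
# Minimal Bayesian reassessment over hypothetical scenarios.
# All scenario names and numbers are illustrative, not real forecasts.

# Step 1: prior beliefs about which future we are in.
priors = {"tariffs_broad": 0.3, "tariffs_targeted": 0.5, "no_tariffs": 0.2}

# Step 2: P(today's signal | scenario): how likely the new information
# (say, a policy announcement) would be under each possible future.
likelihood_of_signal = {
    "tariffs_broad": 0.8,
    "tariffs_targeted": 0.4,
    "no_tariffs": 0.1,
}

def reassess(priors, likelihoods):
    """One round of Bayes' rule: posterior is proportional to prior * likelihood."""
    unnorm = {s: priors[s] * likelihoods[s] for s in priors}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Step 3: redeploy around the updated probabilities; repeat as news arrives.
posteriors = reassess(priors, likelihood_of_signal)
for scenario, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{scenario}: {p:.2f}")
```

The `reassess` step is the "keep reassessing" part of her loop: each new signal reweights the scenarios, and the planner re-prepares for whichever futures now carry the most probability mass.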

Ian Krietzberg:
Yeah. Risk assessment. And that seems to be a cornerstone of early attempts at governance. And so that bridge is as good as any to take us a little deeper into the regulation question, which is messy. Right now, some, like Gary Marcus, who we had on the podcast a few weeks ago, are advocating for the idea of a global governance initiative to oversee this, kind of outside of any one government's approach. The European Union has its AI Act, and that's something that's coming into force over the next year and a half to two years. The U.S. has no federal regulation, and there are no signs that they're really working towards any massive frameworks that would mirror what Europe is doing with its AI Act. So now it's coming down to the states. And we have this kind of messy combination, state to state, of California's doing this, and Texas is not doing that, but they're going to do this instead. And Tennessee cares a lot about deepfake protections for their artists because of all the artists in Nashville.

Nada Sanders:
Absolutely, Nashville.

Ian Krietzberg:
But other people don't necessarily care about that. And so that makes compliance hard. But also, when we talk about governance informed by risk assessment, when it's that uneven, there's a lot of added uncertainty: will these risks that you and I have spoken about be addressed? Will they be reined in, or will we kind of remain in this free fall? And then at the same time, as you mentioned, we are at a transition point with a new administration coming in. President-elect Trump has talked a little bit about AI, not a ton. We know that Elon Musk is closely connected to his potential administration. We don't know how that will shake out at all. And we know that Elon has his own views on regulation, which is: don't regulate. And we might assume that this administration would kind of herald a rollback on regulations, which we don't really have much of to speak of at this point anyway. And so when we're thinking about that regulatory environment, A, I guess I'm wondering what you think is most necessary, you know, to winnow it down to maybe a couple of specific points, and B, how realistic you think it is to achieve those things. What levers will be, you know, the most accessible to achieve that kind of legislation, to rein stuff in? Is it Congress? Is it regulatory agencies? Or is this going to be a grassroots thing where people have to make very clear what is acceptable and what is not acceptable, you know, the kind of vote-with-your-dollar, vote-with-your-action kind of thing? It's such a complex environment, right, and I'm throwing a lot at you, but that's where we are.

Nada Sanders:
You are, and it's really complex. So let me just preface by saying that I absolutely, 100% agree with what Gary has said, Gary Marcus. We're in such a dangerous place, I think, because we see what's going on right now with the information silos, with the bubbles; whoever controls the data and access to the populace can change and control everything. So I very much agree with what's happening in Europe. We talked earlier, I think, about two different paths, right, when we talked about humanity and so forth. And I think in this case there are a couple of different realities. One, I would love to see, and I think it's essential if we're going to save humanity, some kind of a global regulatory agreement, a regulation of something that is hurting everyone. There are only very few winners coming out of it. And it bothers me, because ultimately, you know, I go back to, I think I mentioned to you before we actually recorded that one of my favorite books was The Four by Scott Galloway, in terms of the emotional aspect that taps into what drives humans, and unfortunately greed is one of them. And so right now I think that greed, unfortunately, is at the centerpiece of so much of this, with a few actors, a few players, that stand to gain at the expense of all of us, of the populace. So I think having regulation of something that is controlling potentially every aspect of our behavior, optimizing what we listen to based upon, you know, the response of the amygdala, without allowing, you know, artists to just create their own works, it definitely triggers a lot of emotions for me in terms of the need for protection. But to me, there's always what's going on, what is the need, and what's the reality. The reality is, I do not think it's going to happen, at least in the near future.
And so what I'm hoping is twofold. One, I'm very much hoping that there is some grassroots movement. Because ultimately, as humans, we do have those human needs and those human emotions in terms of what we want to purchase, what we want to listen to, what we want to buy. I'm seeing, even from my colleagues at various universities, that students want to feel the presence of other humans. They don't want to just continuously be on Zoom or in a virtual environment. They want to be in physical teams. I'm seeing it more and more. I actually was just talking to somebody at the Boston Ballet who said to me how the body responds differently when there is a real-life symphony, as opposed to a recording, right? Isn't that interesting? I thought it was interesting. It's some work I want to do. I don't know; I'm going to research it. Is there evidence that we actually respond differently? I'm sure that there's a vibrational aspect, but that there's a human response that is different when we are in the presence of a symphony as opposed to just the recorded sound of it. Two, if we look at business, and I'm a business professor, I'm hoping that businesses realize consumers are voting with their dollars. And as I had already mentioned to you, I think that as AI and technological capability becomes much more standardized, you know, the kind of Coke-versus-Pepsi analogy that I gave you at the onset, there are other kinds of offerings that connect with humans, that end up bringing consumers to vote with their dollars, or money in, you know, whatever form, and that is what ultimately drives things.
So if we're not able to get the regulation, then one, I'm hoping it's businesses that do the pushing, because they want to have a presence in the marketplace, and it is consumers who are voting with their dollars; and two, that it is grassroots in some kind of a way that is demanding this: that parents see what's happening to their children, that it's, you know, in every facet. Artists, kind of what we saw with the writers' strike. Because right now, at this moment, I don't see the regulation happening in this country. I'm just being realistic. So there is what I wish, and then there is the reality. And if we don't have that regulation that is sorely needed, then I'm hoping that we end up getting businesses that have the financial power to say, hey, we're winning in the marketplace, and we're doing it this way, because what humanity is and what humans want cannot be fully automated. What it means to be human cannot be completely automated, and we have those other needs, coming full circle, as we talked about: the will to live, the desire to live, and the desire to socialize, for companionship, for real things. And I'm kind of hoping that those two, the grassroots and enterprises that have clout and financial backing, are able to demand it. That is what I'm hoping for. And then it comes to Congress, and with that kind of push, hopefully then we'll get regulation. But that's not going to happen right away.

Ian Krietzberg:
Definitely not right away. The last point that I want to leave off on: you end the book by saying that, in spite of all the potential negatives, all the risks, you remain optimistic. And I just wanted to leave off on that point of optimism and kind of dig into it. I know, for me, I am highly skeptical of this technology; I think there's a lot to be skeptical of. What I will tell people is that that skepticism comes from a place of optimism, right? We are critical because we are hopeful that it can be better. And if we're not critical, it's not going to get better. And so I'm just wondering, we talked about so many potentially bad implications. It's heavy. I think a lot of times people are afraid, but you're feeling optimistic. Tell me why.

Nada Sanders:
So I'm feeling somewhat optimistic, cautiously optimistic. I love being a professor, and what I do is interact with students, Gen Zs, and they're different from us. They truly care. They care about the environment. Unlike, I think, most of us, most adults, you know, millennials, Gen Xers and on, they're used to technology, but they're also, because of it, I don't want to say skeptical, but a little bit numb to it. They know what it can and cannot do. They're not as fascinated by it as I think many of us are. And they really want to make a difference. They are the ones that really excite me. They make me hopeful. And it makes me want to continue doing what I'm doing even more, because I see where their energy is. They care. They want to truly make the world a better place, because it's the world they are inheriting. And again, they are not as starry-eyed about all of this as we are. They're not at all. And they care about the environment; they care about what's happening with climate change, and all of it. So my hope is with them, and they are really giving me so much hope that they are going to have an impact on the world. Because ultimately we are human, and we forget that. And can we just please stop admiring and saying, you know, Gen AI created this? Again, that's something that makes my hair stand on end, because it's a replication; it didn't create anything. So I'm redefining in my own mind, and doing ongoing work on, what it means to be human, what creation really is. And it's looking at the Gen Zs and seeing what they're about that really gives me hope.

Ian Krietzberg:
Fascinating. Well, Nada, this conversation was as incredible as I anticipated it would be. Thank you so much for your time. This was a pleasure.

Nada Sanders:
Ian, thank you so much for having me on. It really was a pleasure. Thank you.
