How Healthcare Can Harness the Potential of AI - Dr. Karim Lakhani, Professor of Business Administration at Harvard Business School


In this super insightful conversation with host Shiv Gaglani, Dr. Karim Lakhani breaks down the difference between “strong” and “weak” artificial intelligence, and how the healthcare world can not only adapt to it, but harness its full potential. But, he stresses, the system has some important groundwork to do before that can happen. “Process change is the biggest work that has to happen in healthcare, from discovery to the clinic and beyond. Otherwise, we're basically pouring digital and artificial intelligence asphalt over old cow paths.” As a Professor of Business Administration at Harvard Business School, founding director of the Laboratory for Innovation Science at Harvard, and Principal Investigator of the NASA Tournament Lab at the Harvard Institute for Quantitative Social Science, Lakhani is a powerful intellectual force in understanding AI, open-source software, and crowdsourcing. He’s also the author of the book Competing in the Age of AI. If you’re curious about how artificial intelligence might transform the healthcare system, this is a can’t-miss opportunity to hear from a leading expert in the field.




Shiv Gaglani: Hi, I'm Shiv Gaglani. We've spoken to many guests on Raise the Line about how artificial intelligence is enabling specific improvements in healthcare administration and the delivery of care, but today we're going to widen our lens to look at the broader implications of AI for transforming entire industries, including healthcare. We could not have a better guide than Dr. Karim Lakhani, a Professor of Business Administration at the Harvard Business School. For one thing, he literally wrote the book on this topic, Competing in the Age of AI, with his HBS colleague, Marco Iansiti. He's also one of the foremost academic experts on open-source software and crowdsourcing, and pursues all of these interests in his roles as Principal Investigator of the Crowd Innovation Lab, NASA Tournament Laboratory at the Harvard Institute for Quantitative Social Science, and Co-director of the Harvard Business Analytics Program, among many others. It was my pleasure to see Dr. Lakhani present last month at my five-year HBS reunion where I went up to him and talked to him about AI in healthcare right after his talk. So, Dr. Lakhani, thanks for taking the time to be with us today.


Dr. Karim Lakhani: Shiv, glad to be here, and, you know, you can call me Karim. You don't need to call me Dr. Lakhani. I look behind me - like there's some uncles and aunts that are also Dr. Lakhanis.


Shiv Gaglani: My med school hat has me calling everyone doctor and professor, but my business school hat, everyone's first name. 


Dr. Karim Lakhani: Yes, let's stick with our first names.


Shiv Gaglani: So obviously, Karim, I know a lot about you and your background, but for our audience -- which typically hears from people who are healthcare specific and then broader later -- can you tell us about what got you interested in technology and innovation in the first place?


Dr. Karim Lakhani: I did my undergrad in Canada, at McMaster University, in electrical engineering and management, and then my first job actually was at General Electric, at that time in their Medical Systems Division. I was in the technology leadership program, which took me through rotations across GE's healthcare business. So I remember my first assignment was to go hang out with radiologists at Toronto General Hospital, bring doughnuts to the staff, and understand how they were using their various X-ray equipment, CT scanners, MRIs, portable X-ray machines, and so on and so forth. 


For four years, I spent time in new product development, sales and marketing roles at GE. So that really got me onto the front lines of one aspect of healthcare delivery and technology, because radiology is so technology focused. It was in that experience where I first encountered open source software. I was like, 'whoa, this makes no sense,’ you know, 'amazing software for free,' like, 'what?' I sort of had that in the back of my mind. I ended up at MIT to do my master's, and trying to integrate and understand how new technologies were being used and were changing our models of how they were being developed and adopted led me down a path: pursuing a master's, then spending a few years at Boston Consulting Group, and then coming back to do my PhD at MIT Sloan on understanding the rise of open innovation systems. Healthcare and technology was sort of the start of my career, and since that time, this model of distributed development and innovation has sort of taken over the rest of the world. 


So you see these platform companies show up and do amazing things in a range of industries, and that's where -- as I started working at Harvard Business School, continuing my research but also writing cases -- I saw the convergence of technology inside of businesses even more so. So, the Andreessen Horowitz thesis on software eating the world really became an organizing principle for a lot of the research that I was doing.


Shiv Gaglani: It's incredible how much has changed since he first wrote that piece, I think, a decade ago. I don't know if you saw, but Ben Horowitz just posted that Andreessen Horowitz is moving fully to the cloud. So, for a lot of people, AI is a very charged term, right? A lot of people think they understand it, but they often aren't talking about the same exact thing. What is the current definition of artificial intelligence? If you're explaining it to a future doctor, what should they know right now?


Dr. Karim Lakhani: I'm probably getting into some kind of flame war or firestorm with people about what the definition of AI is. I think what I like is actually what the computer scientists have said: there's weak AI and there's strong AI. Weak AI is basically algorithms that do what humans once did, and maybe do them better -- narrow tasks. For example, image recognition: anything from what your iPhone or Android phone does when it unlocks itself by looking at your face, to other fancy things that humans do. 


Then there's strong AI, which is the stuff of science fiction -- autonomous machines doing their own thing. You read about strong AI in a lot of science fiction stories, but that's what it is: science fiction. Weak AI, really, for me, is statistics at scale with lots of data and automation built in. You take computer science, you take statistics, you mash them up, and then you get it going, and then we can start to apply it to a range of things, from book recommendations and movie recommendations, to directions, to lung cancer detection, and so on. We're seeing lots of applications of weak AI across the economy, and that's where all the action is today.


Shiv Gaglani: I like how you've broken it down into those two, because, in the realm of science fiction, there have been a lot of movies that have popularized AI, like Her, which was about everyone having a personal assistant. Even around the time Marc Andreessen wrote Software Is Eating the World, I heard a talk from Vinod Khosla at Future Med where he said he would advise any of his friends not to go into radiology or dermatology, because the diagnostic parts of those fields, and other fields, would disappear. However, there's this concept of Amara's law, which I know you're familiar with, which is that people tend to overestimate the impact of technology within a year, but underestimate the impact on the scale of ten years. I feel like we've kind of gone through a trough of disillusionment in AI and what it can do, but I'm curious, how would you think about those healthcare applications? Radiology is where you started with GE, your lab has done work on lung cancer detection…where are we right now with those applications for healthcare? Where are the strongest applications that are currently live for healthcare AI?


Dr. Karim Lakhani: Before I give you some specifics -- where I'm actually not going to be the expert, though in healthcare we see lots of this -- let me give a general view. I heard one very famous computer scientist talk about this. He said people ask him, "Are machines going to replace humans?" Right? He said, "No, machines aren't going to replace humans, but humans with machines are going to replace humans without machines," which I thought was quite good. Then I added an asterisk, which is the economics interpretation: if you have humans with machines, maybe we'll need fewer of them. You could have superpowers that allow you to do more. 


My view of AI today is that in some settings there has been complete substitution, right? Think about auctions: we don't need human auctioneers anymore -- that's all happening at scale at Google. You do a search on Google, Google throws you an ad, and there's an ad auction going on in the background, fully automated; you don't need to touch it. Machines have taken over auctions, and machines have taken over music recommendations. I used to listen to a lot of DJs and radio growing up in Toronto, but now it's Spotify. The algorithm just keeps telling me what to do. 


Similarly with movie recommendations: Netflix basically gives me the movie recommendations I need. So certain parts of our lives have already seen replacement, but much of the action is going to be augmentation, where machines will help humans make some decisions and, in some cases, take them over. Exactly where they will help humans and where in the process they will take over is TBD. We're still playing around with those kinds of things.


Specifically in medicine, I think what we're going to start thinking about is: where is there grunt work being done by doctors? Where are there settings where the performance of doctors collapses for a bunch of reasons, either fatigue or time of day? Some of my colleagues have done research on judges, essentially showing that if the judge hasn't had a meal, then you're more likely to go to jail or not get a bail decision in your favor. So what are those cases where doctors' performance fails? We assume constant performance and that all doctors are incredible, but I take a statistical approach.


There's a distribution -- of course, you're not supposed to talk about a scale amongst doctors…everybody lives in Lake Wobegon and is at the high end of the distribution -- but realistically, there's a distribution of skill among doctors, and that skill is stochastic, depending on time of day and other things. We know what happens in the July effect amongst doctors when the hand-offs happen, right? So we know all these things, and the question is, how can machines help us there? That's where I think we want to start taking this complementarity approach. We have been pretty much in this substitution story, and we’re going to start thinking about complementarity. Then that can hopefully help us go after those failures. 


My view overall, Shiv, before I shut up, is to think about three specific places where AI works really well, then think about how that translates into medicine. One is predictions. You take a training data set, and you learn from a bunch of folks making predictions, and then you say, can I make better predictions? Can the machine get better at making predictions? Imagine all the activities in medicine that are predictive activities that doctors do, and here machines can be helpful. 


The second, of course, is pattern recognition. This is the thing around what pathologists, ophthalmologists, and radiologists do all the time. They get a bunch of data and images and make sense of it and do some pattern recognition. Well, machines can get really good at that. 

The third thing is automation. How do you take manual processes and automate them? If you can start to think about those three things as activities that are also happening in medicine -- prediction, pattern recognition, process automation…what I call the three P's of AI -- then we can imagine where these kinds of things will start to augment medicine.


Shiv Gaglani: I love that framework, and I’d encourage our listeners -- many of whom are actively going through school and engaging with patients in different settings for the first time -- to think about those three that Karim just shared, because there may be some very compelling business ideas and ways to make clinical medicine more efficient that come right out of that. Obviously, there are people who have thought about a lot of these things already, but ultimately, ideas are cheap and it's really about execution and how good that is. 


One thing I'd like to respond to as well is the famous judge studies. I think it was Israeli judges. It was found that people who were sentenced earlier in the morning tended to get more lenient sentences, and then by the time lunch was rolling around, and the judges were a little more hypoglycemic -- a little hangry, maybe -- people got less lenient sentences. A lot of people focus on how AI can, depending on your training data sets, be very biased. The same is true of clinical medicine and clinical trials: there are a lot of arguments that it's biased, that it's very white male-centric. But this is interesting, because by having a machine that doesn't get tired and standardizes the diagnosis or the pattern recognition, you could potentially have an argument against the biases we find when fallible humans are diagnosing or doing other clinical procedures.


Dr. Karim Lakhani: Yeah, absolutely. I think that the question of bias is massive and huge. My sense is that today in 2022 -- compared to 2012, when these systems were just coming on board -- there was a naivete. A computer scientist saying, “I got a data set, so it must be good. Let me go train it.” And then, “Oh, my God, we have all of these problems.” The bias was coming from how representative the data were, from who was labeling the data, whether it be physicians or other people, and from how you trained the algorithms on top, right? There were three sources of bias. Now that most cutting-edge organizations are thinking a lot about that kind of bias, we hope that over time we can start to reduce it. 


The hope is that over time, we can in fact reduce the data science pipeline biases that are happening in all of the development, and hopefully that helps humans, who may have additional biases based on your glucose level. Nobody thought that glucose level was actually going to be the factor that drives a judge's decisions, but in fact, that's what it looks like. So maybe we should also be wearing continuous glucose monitors. All the doctors should be wearing CGMs, then we should be looking at how good you are, and then maybe the algorithm will come in and say, “Shiv, you're declining. You better get some sugar in you and get some food in you to help you make better decisions.” 


Shiv Gaglani: Very interesting, and that's also one of the challenges. I know we've had guests like Eric Topol and Daniel Kraft on the podcast talk about centralizing these data sources -- obviously, garbage in, garbage out when it comes to data -- so that we have more data and better algorithms that are more representative of the entire population. 


We're recording this episode a couple of days after Amazon announced its acquisition of One Medical, and Apple released a sixty-page report about how big they're going to be in healthcare, so it's really exciting to see some of these big tech companies with tremendous data sets get into this space. For example, there's a lot of excitement around Amazon. They already have patents around Alexa, and say you're coughing more than normal when you're talking to Alexa -- it could automatically start recommending a prescription from PillPack, or seeing a primary care doc, now with One Medical. So, it's kind of exciting…the convenience that some of these AI or recommendation systems will provide. We've already seen it, as you've shared, in music and other fields, but in healthcare it seems like we're just on the cusp of some really novel things.


Dr. Karim Lakhani: I think so. I mean, I don't know how true it is -- other folks will know -- but, for example, can you check for depression when someone is talking on the phone? Your phone company arguably has better access to you than anybody else, and maybe based on your texting patterns or your talking patterns it could come up with measures of depression that could be early warnings for you. So, again, I do think the digital exhaust and the digital footprints of all of us have massively increased. Can we use those data to then drive some inference? I think it becomes quite interesting and exciting, and kind of scary, right? Because then, all of a sudden, we need to be in this world of data privacy. Do I really want Amazon and Google and Facebook to have all my data? Well, I guess they kind of do already. So we have to figure out a new contract with these types of large companies around our data.


Shiv Gaglani: On that specific point about voice patterns being able to predict depression or anxiety or other mental health conditions, two of our guests -- Mainul Mondal at Ellipsis Health and Punit Singh Soni at Suki -- are both tackling that problem, and our listeners can go check out those episodes. One of the case studies you've written is on Moderna, and how they are very much winning the AI-in-biotech race. Can you describe that case for our audience? Why is Moderna so far ahead, and how did that maybe help with their work on the COVID vaccine?


Dr. Karim Lakhani: Sure, absolutely. First, a disclosure: I've been a key opinion leader for Moderna, I wrote a case on them, and I'm also now spending a substantial amount of time thinking about AI and biology at Flagship Pioneering, the company behind Moderna. I mean, look, I think the Moderna story is very interesting, and in many ways, for me, epitomizes how to think about AI in healthcare in general -- both in discovery and drugs, and in delivery. I’ll give you three big ah-ha’s, and then we can get into some detail. 


The big ah-ha’s came from conversations with Stéphane Bancel, the CEO of Moderna, and from having spent time at Moderna thinking a lot about what the company was doing, which was different from both the traditional biotech model and the traditional pharma model. 


One was what Bancel talks about a lot: that discovery is all about data and experiment. How do you think about your data? How do you think about running the experiment? How do you extract the data from the experiment? How do you analyze it? The faster you can do those cycles of data, experiment, data, experiment, the better off you're going to be. In most discovery settings, we're still in the world of lab notebooks, Excel spreadsheets, and email, with people coordinating that way, and you can start to imagine errors upfront in hypothesis development and sequence development. Let's say for biotech you're doing some fancy Excel spreadsheet work and then you have an error. That can lead to massive problems downstream. The more manual your processes are, the more likely you're going to have errors upfront, and then also in your experiment and data collection -- and the timeline between the error being made, because of manual processes, and you realizing it can be on the order of months, if not years, depending on what's going on. 


So, Bancel's view was: let's automate and digitize the process by which discovery happens. First, digitize the process, so that the bozo errors that we all make don't get made. They're corrected for. And so he worked quite a bit to make sure that the company's infrastructure for discovery was going to be digitally native. Now, of course, he's working with an information molecule in mRNA, so that was certainly amenable to that mindset, but he took that view and made it a core part of the company. Then he and his teams added a set of folks to say: we digitize discovery, we automate the processes by which discovery gets done, so that, again, we reduce human error that could be introduced along the way. 


So that's the second thing: digitize, automate, and bring in AI along the way. You're constantly extracting data from the experiments you're running, from your lab systems, and so on and so forth. Then finally, my biggest insight hanging out with them -- and I think this probably has the most profound implications for all of us, not just in discovery, but in healthcare in general -- is the emphasis on process. Change the process. You now have this new technology; make sure you change the process so that you can take advantage of new technologies instead of doing it the old way. 


My biggest complaint about all these EHR implementations in hospitals is that we've taken these old, crappy processes that got invented in the '50s, '60s, and '70s for how to run large-scale hospitals, and we put EHR on top of them. We basically digitized crappy processes instead of doing the work that needed to be done, which is: if I'm going to bring in modern computation and modern data analytics, let's rethink the way we actually run our hospitals, the ways in which we run our various clinics, and the ways in which we do those things. Bancel applied that relentlessly, and the view has always been, “Let's make sure that we are actually pushing hard on process change and process improvement.” 


The example I use is comparing New York to Boston. If you go back to when the modern cities were being developed and asphalt got invented, New York said, “We have these old cow paths, and we're going to in fact stop and create a new grid.” Philadelphia did that. Paris did that, and so on and so forth. Boston said, "Eh. Just pour asphalt over the old cow paths. It’s fine.” Pre-Uber, pre-GPS -- even now, with Uber and GPS -- it's very easy to get lost in downtown Boston, because it's just a bunch of crazy cow paths that got asphalt poured over them. New York or Philadelphia is at least sane. You say, ‘East 15th, and blah,’ and you know exactly where that is in your mind, and you can get there. So, the grid system was invented. 


Essentially, I think what's happened with digital and with AI in most healthcare settings is that we're basically pouring digital and artificial intelligence asphalt on old cow paths, and we haven't rethought processes. Process change, for me, is the biggest work that has to happen in healthcare across the board, from discovery to the clinic and beyond. That's why I think Amazon and Apple and Google will be interesting players, because there are two options: they could decide to take the existing processes and be happy with them, or they might start to actually change processes, and I think that's where we will see this stuff happening in front of us.


Shiv Gaglani: That's an incredible analogy, and having lived in Boston for six years and in Philly, and being in New York a lot, it’s totally relatable. I definitely think my healthcare, and most people's healthcare experiences, are more like Boston, and we want to make them more like New York. 


Dr. Karim Lakhani: It's crazy. Because you read all these encounters of, ‘I got this particular disease,’ or ‘my aunt or my uncle, or my parents got this, and I'm a PhD and I spent months trying to navigate the healthcare system,’ because it's so bloody Byzantine, and it's full of these cow paths that we have. It's just nutty. 


So, anyway, back to the Moderna story…those three things really enabled a company with no drugs on the market at the time of the pandemic, and just 800 people, to scale within months, run massive clinical trials, go into production, make billions of doses, and show up. I think the comparison between BioNTech-Pfizer and Moderna is very interesting, because here you have a startup working against one of the leaders in the pharmaceutical space and being able to hold its own. The reason is their investments in technology. That, and their process focus, enabled them to pull this off.


Shiv Gaglani: Yeah. Now we’re at the point of figuring out what the next five or ten years look like for companies that are AI-first, as you say, and make AI a core thing.


Dr. Karim Lakhani: More generally -- and this is some of the work of Flagship -- the era of programmable medicine is finally in front of us. If you look at the research on mRNA, and then all the other RNA-type work that is being thought through, we may be entering this new era of biotech finally merging with the revolution in technology, right? We’re going to see some very interesting outcomes come together. If you see the investments that Moderna is making, and every other company making their own mRNA bets, you start to think, “Oh, we could be at the cusp.” I mean, again, we don't know if all this is going to work or not, but certainly there's lots of promise that this may be an interesting shift for us to start thinking about programmable medicines.


Shiv Gaglani: Very, very exciting, and especially now, because we talked about data -- not just data privacy, but being able to get more data on people. Back when some of these companies were starting, the cost to sequence the human genome was prohibitive, and now, I think, it's under $1,000 at this point. You can do it with next-gen sequencing. So, it'll be very exciting to see that. 


I'm aware of your time, so I had just two and a half other questions, for you. The first is, as you know, Osmosis is an education company. We love teaching and we love simplifying things. If you could snap your fingers and teach the next generation of healthcare professionals anything, what would it be and why? It could be, again, some of the things we were discussing -- the role of AI or technology -- but just more broadly, what are you thinking?


Dr. Karim Lakhani: I think healthcare professionals have to practice their craft, learn their craft, and also be researchers, because one way or the other, patients are relying on you to help them interpret the research and/or participate in the research process. And guess what…more and more research is being driven by data science. So, first I would say, improve your grounding in data science. I don't want you to become a data scientist, but you've got to actually understand this stuff. What I say to MBAs is: when you come to HBS, you take a course in accounting. If we made accounting an optional course, nobody would take it, but you need to know accounting to be a good business person. It's similar right now in medicine, where data science courses are optional and nobody takes them -- but to be a leader in medicine, you actually need to understand how both come together. 


My view is that data science is going to become a required part of the medical career, and you need to get good at it as a consumer of knowledge that is heavily data-science driven. Hopefully, some of you will become producers of knowledge. In both cases, data science training will matter. 


The second thing I would say is, all of you are managers. Whether you're running a practice, whether you're in a hospital, wherever you are, you're managers. So, appreciating the managerial roles you have, above and beyond your patient care, is actually going to be important as well, because with a managerial viewpoint, you would go and change the damn process, right? You would say, “I'm not happy with the way my department is set up. Let's go fix it instead of taking it for granted.” So, I think the managerial leader role is as important as the data science role, and I would add both of those to the med school curriculum. 


Shiv Gaglani: That's great. That's really helpful advice, and actually, that transitions to the second-to-last question, which is: what other advice would you give to anyone starting their career right now? Not just healthcare professionals, but given all the turbulence of the last few years and where AI may be in five years versus fifteen years, what advice would you give? 


Dr. Karim Lakhani: I'm not sure what the average age is for your podcast listener.


Shiv Gaglani: Mid-20s, most likely.


Dr. Karim Lakhani: Mid-20s, okay. What I say to people is, just think back twenty years. Some of you were very young, so you may not even remember twenty years ago, but in 2002, we never imagined a world where there'd be companies serving billions of customers effortlessly. Or that there’d be companies with trillions of dollars of market cap. Or that we'd have this magical world of AI that we're living in. Or that we would have a pandemic where the whole world would get shut down, and even with the world shut down, we actually got our work done. Schools ran as best they could. We taught in the MBA program during the pandemic, and we figured it out. Those things seemed improbable, but became true in just twenty years. 


The rate of change in technology, the rate of change in AI, is approaching exponential levels, and so now, what are the next twenty years going to be like, when you're going to be at the prime of your career? There are lots of changes to expect, and one thing for sure is that we're going to be living around these exponential technologies. I really encourage people to read Azeem Azhar's book, The Exponential Age, after reading my book. It really highlights a few technologies that are growing exponentially and what they mean for us, because most of us, including the healthcare system, live in a linear world where we grow linearly -- our capacity grows linearly. COVID showed us what happens when we live through an exponential process: our healthcare capacity, which grows linearly, collides directly with an exponential disease process, and chaos happens. 


But those things are in fact happening across the economy. Getting familiar with these trends, and then seeing what your role is in that, is going to be important. I think we're in for an amazing, incredible ride, both for the good and, I'm afraid, for the ill, and healthcare professionals will need to not push the technology away with an “oh, this is technology, I don't care about that. I just care about patient care.” Guess what…patient care is all going to be about technology, and so you have to embrace it and make it part of your identity as you go forward.


Shiv Gaglani: That's great advice. There’s a whole conference called Exponential Medicine that's built around some of those trends and has been around for ten years. That's actually where I heard Vinod Khosla speak about this a decade ago. Last question: anything else you want to share with the audience before we let you get on with your day?


Dr. Karim Lakhani: I think what you're doing is fantastic. I wish everybody good luck, as they’re embarking on their careers, in figuring out how to navigate these complex times. We need leaders in all fields to solve these tough problems, and there's a bunch of slow pandemics. We're in the middle of climate change. The climate change-induced healthcare crisis is going to be massive. Nutrition, you name it…lots of interesting, slow pandemics are already in front of us and so we've got to find a way to solve them. Healthcare is going to be critical. So, I wish everybody the best of luck as they take on these challenges. We need this generation to also become the leaders that the world needs.


Shiv Gaglani: Absolutely. Karim, thank you so much. This has been an absolute pleasure. I really loved your talk last month at HBS, and I encourage our audience to check out your book, watch some of your videos online and just get familiar with AI and exponential trends, as you said.


Dr. Karim Lakhani: Thank you, Shiv. 


Shiv Gaglani: Thank you again, and with that, I'm Shiv Gaglani. Thanks to our audience for tuning in to this week's show, and remember, do your part to flatten the curve and Raise the Line, we're all in this together. Take care.