EPISODE 407

What AI’s Rapid Progress Means for Healthcare and Health Information - Dr. Michael Howell, Chief Clinical Officer at Google

08-23-2023

“When my dad gets sick, he has a Harvard-trained physician looking over his shoulder, helping him know what to type in and what queries to ask. I just want that for the world,” says Dr. Michael Howell, who is in a position to advance that vision as chief clinical officer at Google. In that role, Howell leads the team of experts who provide guidance for the tech giant’s health-related products, research, and services. It's a natural extension of a career that's been devoted to improving the quality, safety and science of how care is delivered and to helping people get the best information across their health journey. Of course, in recent months, artificial intelligence has dominated conversations about the future of healthcare, and Howell acknowledges the pace of change has been startling. “It has felt like we've had more progress in AI over the past ten months than over the past ten years in some ways, and it’s getting better very fast,” he tells host Shiv Gaglani. That means it’s high time for educators to develop curricular standards for what future physicians need to know about the technology as one way to prepare the healthcare system for its disruptive potential. “I don't think AI is going to replace doctors, but I do think doctors who use AI are going to replace doctors who don't,” he cautions. This is a great opportunity to gain insight from an extremely well-placed source at the leading edge of healthcare and artificial intelligence.

Transcript

Shiv Gaglani: Hi, I'm Shiv Gaglani, and today on Raise the Line, I'm really happy to welcome Dr. Michael Howell, who's the chief clinical officer at Google, where he leads the team of experts who provide guidance for the company's health-related products, research, and services. It's a natural extension of a career that's been devoted to improving the quality, safety, and science of how care is delivered and helping people get the best information across their health journey. 

 

Dr. Howell previously served as the University of Chicago Medicine's chief quality officer and was an associate professor of medicine at the University of Chicago and at Harvard Medical School. He has also practiced pulmonary and critical care medicine for many years. 

 

Dr. Howell has published more than 100 research articles, editorials, and book chapters and is the author of Understanding Healthcare Delivery Science, one of the foundational textbooks in the field. He has also served as an advisor for the CDC, for the Centers for Medicare and Medicaid Services, and for the National Academy of Medicine. 

 

Today's conversation will build on what we learned recently about Google's work in the healthcare space from Dr. Kapil Parakh, who is a senior medical lead at Google. I had the pleasure of meeting Dr. Howell in person at the AI in Health conference in San Diego over the summer.

 

So, Dr. Howell, thanks for taking the time to be with us today.

 

Dr. Michael Howell: Yeah, thanks for having me. It's good to see you again.

 

Shiv: So, we always like to ask our guests to, in their own words, tell us what first got them interested in a career in medicine.

 

Dr. Howell: Yeah, I have no idea. I was one of those kids who, for some reason, never wanted to be anything besides a doctor. My parents tell me that the first toy I ever asked for was one of those Fisher-Price doctor sets, but my mom was a secretary and then became an accountant. My dad went to trade school as an electrician and then became an electrical engineer. We didn't have anyone in health or healthcare in the family. Just never wanted to be anything besides a doctor.

 

Shiv: Wow, that's awesome. I would love to go back and read your personal statement. Go ahead, sorry.

 

Dr. Howell: No, it's a gift, I think. Like, I feel very lucky to have known that. When I went to residency, I thought I wanted to be a primary care doc and became mostly a critical care doc, pulmonary and critical care, which was a big surprise to me. I'm not sure I'd even been in an adult ICU in medical school. But, you know, in the fullness of time, I've come to believe that there are many kinds of generalists and that the part that I always loved was taking care of the whole person and the family.

Shiv: Well, you preempted my next question, which was: okay, you're in med school...how did you decide on pulmonary medicine and critical care? A lot of our audience go into med school wanting to be a neurosurgeon and leave as a psychiatrist, as an example. Any advice for them, based on your trajectory, on how you chose that specialty and made the most of it?

 

Dr. Howell: I learned I love taking care of sick people and very sick people. That happened during residency, and the part that I loved about critical care was a few things: one is that often what you're doing is helping families and patients understand what's going on and many times they haven't had all the communication that they should about the underlying condition that they have. So, ICU docs and nurses and chaplains and all of us are in this position of getting to prevent a huge amount of suffering by talking to people, and so you do that for part of your time. 

 

Most things are routine and you see them a lot, but there are some true Sherlock Holmes kind of cases that come to the ICU and you get to sit there and think in the deep internist way. And then, a third of the time you're using your hands to do stuff and if you can't get it done, the person's really in trouble. So, for me, it was in this nice mix of things that were procedural, things that were cognitive and things where the most important thing was to be able to understand what the patient and family needed and to give them the information they needed to make decisions.

 

Shiv: That's very interesting and a good way to break down what that career path would entail and also kind of foreshadow some of the work I think you've been doing since then, both at Chicago and now at Google with regards to making information accessible to folks and improving quality. 

 

Let's start with University of Chicago and your decision to get into leadership positions, including ultimately being chief quality officer. Can you talk to us a bit about that transition from being a clinician to being a clinical leader at a renowned healthcare system?

 

Dr. Howell: So, I was in Boston for fourteen years and had the good luck very early, as a PGY-2, I think, to get randomly assigned into this experimental rotation in quality and safety -- which is very common now, but was really odd then. Just as you would rotate onto nephrology or cardiology, you'd rotate onto the quality and safety service. 

 

I'd had one real job before medical school where I did workflow analysis and automation for Rockwell when they were the prime contractor for the space shuttle, but in like the most boring way possible. I was in the materials management group, which is the group of ‘how do you buy things and stay compliant with federal contracting.’ I did forms routing, but it turns out to be sort of Lean process improvement stuff. I remember being an intern and getting called in for my first crash central line in the CCU, and the resident is like, “Go get all this stuff for a central line.” It was before they put all the stuff in a bag. 

 

So, I went into the supply room and it was organized by part number, not by the job you needed to do. I remember thinking, “God, this is the same problem as Rockwell. Hospitals should have people who work on this.” And then I went back and did my workarounds as an intern, and then later got randomly assigned to that rotation. I was like, “Wait a minute, this is a job that people can have.” 

 

My first job out of fellowship was being responsible for the quality and safety of the nine adult ICUs at the Beth Israel Deaconess, and so I got this chance early in my career to be responsible for a fairly substantive but contained chunk of quality and safety there. So, the role at University of Chicago was a natural extension of that, outside of critical care into the full suite of what the health system was responsible for.

 

Shiv: Wow, that's cool and again, that sort of speaks to the importance of being open-minded when you get randomly assigned on rotations or you get to choose electives throughout medical school or residency or fellowship because it could really influence what you do. 

 

I recently read the book by Dr. Marc Harrison, the former CEO of Intermountain, called Possibility Unleashed, and it was really fascinating because it goes deep into what it's like to run a health system and some of the lessons he learned along the way. I would love to pose that question to you too: what are some of the things you learned being so high up in a health system that those in our audience who are interested in taking leadership roles at their institutions might learn from?

 

Dr. Howell: I learned how much things that seem mundane really matter to patient outcomes, and I learned how cognitively draining it is to do a good job running line operations in a health system. I'll tell a couple of stories about that. 

 

So, again, I'm an intensive care doc. One of the things about patients who commonly come to the ICU is they have a bad infection called sepsis, and there's a bunch of debate about what helps in sepsis, but it's pretty clear that delaying antibiotics in somebody with overwhelming bacterial infection is bad. So, when I was at the Beth Israel, I really wanted to reduce how long it took to get antibiotics once we were sure a patient had septic shock. We told people to do things and thought of all the things, and we just couldn't get any movement in the median time to antibiotics. I had a new project manager, the first person I ever hired actually, and I was like, “Go to the ICU and watch from the moment that somebody says, ‘we need to give them antibiotics,’ and just follow it through until there are antibiotics in the vein.”

 

What this person found was a consensus that the problem was the time it took pharmacy to approve them. It's like a two-minute step. Then she followed the order down into the basement and found the delay came down to when a label was printed. It turns out the printer was turned ninety degrees off, so the pharmacy tech couldn't see when a label had been printed. 

 

Her intervention was two things: turn the printer ninety degrees, and for the nine most commonly used chemically stable antibiotics, store them in the ICU instead of down in the basement. Dramatic improvements. So, that project manager, by turning a printer ninety degrees, probably saved more lives than I ever did in my entire career. Anyone who's been in quality and safety for a long time has some story that's exactly like that, where there's no grand unifying theory of it. It's just that that's how the real world is, and the real world is messy. 

So, ‘how are printers oriented’ sounds unbelievably boring, but it was totally lifesaving in this context. 

 

Then the other example, from fairly early in my career, is that I spent a year as the interim lead for pharmacy at one of the Harvard hospitals, because we were missing a senior leader and they needed somebody. I went from random quality and safety guy to the second-largest line item in the budget after salaries, a P&L, and a pretty big team. We delivered 10,000 doses of medications into human beings' bodies every day, and if we broke, the entire hospital broke. And as somebody who has a research background and academic background, that experience and a bunch of my subsequent ones at UChicago and elsewhere made me deeply respect the care that my administrative colleagues brought to the table and the expertise and professionalism that they brought. 

 

Someone one time told me about healthcare administration, “We're the people who take care of the people who actually take care of the people.” So, those are a couple of things I learned.

 

Shiv: Wow, those are incredible, incredible stories and in particular, the printer one reminds me of how often things are a game of inches and just continuous improvement...you can get 1% better every day in different things -- whether it's your personal life or a process or a large health system -- and those can make bigger impacts over time with compounding than the Hail Marys that oftentimes we glorify, and frankly make for better TV or movies, but may not be as impactful as what you shared. 

 

So, switching gears to Google. Google's had a habit of hiring some very impressive physician leaders like yourself and our mutual friends, Dr. Garth Graham and Dr. Karen DeSalvo. We'll get to the paper you and she wrote for the New England Journal. Another Raise the Line guest is Dr. Vivian Lee, who we talked to after she wrote the book, The Long Fix, and who was at Verily and, before that, the University of Utah. Can you just tell us, what prompted you to go to Google, and what's it been like since you joined?

 

Dr. Howell: I joined Google in October of 2017, so I've been here now longer than I was in med school, which is hard to believe. Maybe it's worth giving a little bit of corporate architecture. Alphabet is the overall company. There are a number of companies in Alphabet. Verily is one, Calico is another and Google is another and there are a number of others. So, Verily has its own board, has its own leadership and we know each other and work together, but the place that I've worked is Google.  

 

When I joined, there were a few folks around. Kapil is a good example. But I was hired into the group in North America that at the time was doing the most healthcare work, which was the group that invents new kinds of machine learning and artificial intelligence for Google to use, and they had a health-focused team. So, I had this chance to come in originally as a singleton, as an individual contributor, and there are two things to note. One is that when I didn't understand something about machine learning or AI, I could usually find the person who invented it and ask them about it. For instance, if I didn't understand an embedding space: “Oh, let me grab this person and put them in a room,” and I'd ask them to explain it. It's unbelievable. 

 

The second is that I got the chance to help grow the team and build it over time, and it's just been amazing to see, with folks like Karen joining, how far we've been able to come in -- for healthcare -- a really short period of time.

 

Shiv: Yeah, certainly we've been watching it for some time. And it's funny, when you joined in 2017, that was actually the year the seminal paper that's led to this AI craze came out. The Attention Is All You Need paper was published in 2017. So, I imagine a lot of those authors from Google or DeepMind were maybe even in the same office you were working out of. 

 

Since you mentioned machine learning -- it was kind of your original mandate -- I would love to get your thoughts on this craziness that has happened since ChatGPT was released. Obviously, you guys have been working on this for many, many years at Google, and the space has evolved quite a bit. We just recently had Dr. Nigam Shah from Stanford on the podcast, who you may know, and he and his colleagues published a paper in JAMA about LLMs for healthcare and what health systems should know about creating or fine-tuning LLMs. What has been kind of surprising to you over the past year? And then, I'd love to dive deep into where you see AI in healthcare going.

 

Dr. Howell: I think it's important to nest that kind of conversation in...there's a little bit of an arc here. The arc is that before 2010 or 2011, there was a category of AI around symbolic AI or good old-fashioned AI. Think about, you know, IBM's Deep Blue beating the world champion in chess, right? Really amazing. Roughly, you know, a gazillion if-then statements and a bunch of ways to search through a possibility space. Amazing things. But then in around 2010, 2011, there were big technical advances in deep learning, the idea of back propagation and convolutional networks, and roughly from 2011 until 2022 is this era of deep learning. 

 

We've seen unbelievable things there, right? The fact that I don't have to go through all of the thousands of photos I have and tag them with my cat or my daughter's name... the fact that it just works is amazing and in healthcare, we've done a whole lot of work in that. Our group had one of JAMA's ten most influential papers of the 2010s about deep learning for diabetic retinopathy and we've done work in lung cancer and skin disease and a whole bunch of areas like that. 

 

About six months after I joined, the Attention Is All You Need paper came out, and people were like, “Oh, that's different.” I'm not sure anyone really understood -- maybe Jeff Dean did, or some of the other folks -- but it was really different. It's the transformer architecture, which is the huge contribution from that paper. It builds on this history of Word2Vec in the 2010s, which let us figure out how to do math on words and led to big improvements in translation. It's this sort of arc of things. 

 

The Word2Vec papers have been cited about 80,000 times. The Attention Is All You Need paper, also 80,000-plus times. If you look at Google in 2018, we put out our AI principles, and at the time, certainly from healthcare, people were like, “Why are you doing that? I understand convolutional networks are great, but...” I think we were starting to see some of the possibilities there when you added more to the transformer architecture.

 

Fast forward to today, and it has felt like we've had more progress in AI over the past ten months than over the past ten years in some ways. I do think that people use different names for these, right? I may flip back and forth between generative AI and foundation models and large language models and multimodal models; I just lump them together for the purposes of this. It does feel like this is an important technical step change. 

 

It feels like it may be the most important technical step change we've seen since the emergence of mobile and Android and iPhone, maybe even further back to the internet. It's a big deal. We're starting to see that both on the consumer side and in, I would say, some early work and research around health and healthcare.

 

Shiv: Thanks for sharing that context. That's obviously really, really helpful for understanding what that arc has looked like, especially because there have been other times when people got very excited about AI in healthcare -- IBM Watson being one of the main ones -- that ultimately led us to a trough of disillusionment about AI in healthcare for a period. Most of our listeners will be familiar with the Gartner technology hype cycle -- we talk about that on this podcast -- and I'm wondering, you know, it seems like there's a ‘there there’ right now. There are applications we're seeing that are coming out. There are very prestigious journals that are publishing evidence-based papers about how some of these models are performing in a whole host of different applications.

 

Maybe you can comment a bit on the things you're most excited about that your team or teams at Alphabet/Google are working on. I know there's the Med-PaLM model, which our audience will have heard of, but maybe you could enlighten them on why that's so exciting. The second thing I'm curious about is the work with Mayo, one of the best health systems in the world. Anything you can comment on? As far as I can tell, they're the premier health system partner you're working with to deploy some of these applications. So again, whatever you're willing to share with us, our audience would love to hear.

 

Dr. Howell: Yeah, there's a lot in that question, so maybe let me take it in parts. First, why am I excited about these things and what do I think people should know? I think rather than getting enmeshed in the details of like how many parameters and which model and all the things, it's worth thinking about the capabilities that these models bring that didn't exist before. 

 

So, anybody who's gone and played with Bard or another chatbot gets the sense that these models are able to act like they understand very complex questions and respond in a way that makes sense. That's a new capability, right? If you're thinking about product, like building a tool for people, you'd think about the capabilities. They're able to understand context among many unrelated things without hand engineering. That's a new capability for them. The multimodal models are able to generate music, pictures, also text, right? A new capability. And the most generic new capability that they have is that you can adapt them to different circumstances without retraining them on very large numbers of examples. 

 

So, you may have a foundation model -- which has read lots and lots and lots and lots and lots of things -- and you can give it a prompt to say, “Act like you are hosting a TV show” and it will act like it's hosting a TV show. “Act like you are a podcast interviewer” and it will generate a script for a podcast. You can do those kinds of things. In the deep learning era, you would have had to retrain on just huge numbers of examples for that. And so, that ability to be adapted to new contexts without huge amounts of retraining data is a big deal. So, those are reasons to be excited about it. 

 

I'll tell you about what we're doing in the healthcare-specific domain. We had a paper in Nature a couple of weeks ago, but I'll tell you the arc of papers. So, there's a December 2022 paper on arXiv, and then there's a May 2023 paper on arXiv. What the teams did was take a foundation model, PaLM in December and PaLM 2 in May, and then fine-tune it for healthcare. So, doing things like, ‘here are a bunch of questions that might appear on a medical licensing exam...learn how to do them really well,’ and some other things that went with it. 

 

They did two main things -- and I think the reason to focus on these is that the results themselves are interesting -- but if there's one thing for listeners to take away, it is that this stuff is getting better very fast. So, remember December and May. The first thing they did in these papers was answer a set of multiple-choice questions that are kind of like what you would have if you were taking a medical licensing exam -- one in the US, one in the UK. People have been working on this set of questions -- there's a benchmark -- for a number of years, getting better a few percent at a time, and had gotten it up to about 50% correct. Our paper in December was about 67% correct, and people say 60% is about passing.

 

By May, it was about 85% correct, right? Roughly the equivalent of top-quartile or expert test takers. Great, that's really fast, like really fast. Is it interesting? It's kind of interesting because it's an externally benchmarked question set. Would you ever let a medical student who had passed their USMLEs out to practice autonomously? No, right?  

 

Okay, so then the next thing they did is really interesting. They took a bunch of questions off of Google Search that real people ask. They open-sourced these questions so that anybody could use them and study them, sort of helping make an evaluation dataset. These are things like, ‘can incontinence be cured,’ or ‘if I have rosacea, what's the best diet?’ They asked the model to give a long-form answer, like, write a couple of paragraphs. Then they asked doctors that we had hired to write an answer like they were answering a patient. Then they took those responses and gave them to another physician, who was asked which one is better on a number of dimensions. Is this consistent with medical consensus? If the person followed these instructions, how likely is it that they would be harmed? If so, how bad? Is there evidence of bias? Things like that. 

 

In December, physicians preferred the answers of other physicians on many dimensions, by a little bit. By May, on eight of nine dimensions, they overwhelmingly preferred the answers from the model. December and May. So, that's an example of how quickly things are getting better. Then a couple of weeks ago, we shared three new papers on arXiv that all have the theme of ‘how do we move to multimodal models in healthcare?’ The idea is that healthcare isn't just text, which is what the first two had been. It's, ‘here's an X-ray,’ and you're asking some questions about it. You know, like when we would all go to the reading room and ask radiologists questions. 

 

In each of these -- they're all slightly different -- the teams have been able to show the ability to add multimodal capabilities to these models. It's still early on the multimodal stuff, but it seems likely to work. So, again, all of those are not fully ready for clinical care, for sure, but they're amazingly promising and getting better very fast.

 

Shiv: Wow. Yeah, from December to May, those statistics are incredible. Let me put this in a personal context. As a reminder, I did two years of med school, left, started Osmosis, and grew that for a decade. Now I'm back in med school at Hopkins, which, hopefully, will let me graduate in May 2025 -- that's the goal, and it's two years from this last paper you mentioned. What should I be thinking about? What should my classmates be thinking about as far as how quickly these capabilities are compounding and what career decisions we should be making? For example, radiology...should we just not go into radiology now, or maybe do interventional but not diagnostic? I'd just love to hear any opinions you may have on that subject.

 

Dr. Howell: I don't think AI is going to replace doctors, but I do think that doctors who use AI are going to replace doctors who don't. I was talking about my mom before. When I was in high school, I worked for her for a couple of summers doing bookkeeping for a couple of her clients as a CPA. I'm old enough that the way you did that was there was a big sheet of paper and a ledger, and you had a calculator with some tape that came off of it. You keyed and you added up all these things, right? And then, you know, somebody invented Lotus 1-2-3, and eventually, QuickBooks. Accountants didn't go away, but the work changed quite fundamentally. 

 

I think that we're likely to see the work change in really meaningful ways over the next some number of years. Exactly what that's going to look like, I don't know, and I think that there are likely to be two things that educators need to think about. One is, you know, in the way that there are standards for what you need to know about the kidney -- like, what does your curriculum need to have about the kidney in order to call yourself a medical school -- you are probably going to want to be thinking about AI in that way. 

 

What do you need to know about lab testing, right? There was a period of time after the Flexner report when lab testing was not a common thing in most physicians' practices, but it's core to many clinicians' practice today. Also, I think that it's likely that AI will change the learning process over time. It's going to be an interesting few years.

 

Shiv: Absolutely. I love that analogy that accountants didn't go away, and in fact, probably there are more accountants now than there were back then largely because the demand has gone up. Now, a lot of businesses that may not have kept books because it was too overwhelming or too expensive can do so because we have spreadsheets and QuickBooks and the internet. On mobile phones now, you can take pictures of your receipt and machine learning can categorize those receipts. So, there’s more demand. I know this happened with bank tellers. When ATMs came out, people were like, “Oh, we're going to get rid of all these bank tellers.” But in fact, more branches got set up and the tellers graduated to doing more advanced things than an ATM could do. So I like that. 

 

I think that hopefully helps a lot of people who are listening to this -- who are incurring hundreds of thousands of dollars of debt to finish their clinical training -- rest assured that as long as they continue putting the patient first as the North Star and learn how to work with AI, they'll be fine. 

 

I think so much healthcare is tertiary, it's reactive, it's sick care. But so much of, I think ‘medicine 3.0’ as Dr. Peter Attia calls it, will be proactive and preventative, where people can take care of themselves and their family members before they even have to see a specialist. I often say, we'll never have enough endocrinologists to treat everyone with diabetes, so we need to figure out how to flatten the curve of diabetes, not just raise the line and strengthen the healthcare system. 

 

So, you mentioned there are some things to look forward to right now based on these advances over the past ten months and the papers you've published. When the rubber hits the road, what are some of the applications that you think are most likely -- or that maybe you are already seeing with the Mayo collaboration -- to be the killer applications we'll see in the next couple of months or years?

 

Dr. Howell: I think it's worth distinguishing this from the deep learning era of AI, where the FDA has approved hundreds of medical devices related to those kinds of techniques. At this point, they're in practice in a number of places. Those tend to have a very specific thing they do -- it's task-centered AI instead of generalist AI. Mayo has been an amazing partner of ours for a number of years, and they're working with our cloud team's enterprise search now. What that does is provide very private, secure cloud buckets that people can search through using generative AI, so that people at the point of care can get improved results -- both around general kinds of things and around summarizing things that are in the protected space. 

 

I think that what we're likely to see is that there are a lot of things that we're learning about these models. That's why we're investing. I often get asked, “Why do you publish in journals like Nature and JAMA?” And I'm sure that it's not because journal editors move at the speed to which Google is accustomed. I think it's because it's important to show our work, number one. Many of these things have never been used in healthcare before, and so we're first through the gate. We want to show our work, and it's important to get the math right, and we think peer review helps with getting the math right. 

 

So, I think that what we're likely to see is that there are many areas of opportunity that aren't right at the point of clinical care that we're likely to see organizations use generative AI for. Those will be things like helping clinicians and frontline providers search through all the things, helping with things that we think of today as administrative tasks. If you remember the example I gave of turning the printer, that's an administrative task, but because healthcare is so complex, improving administrative tasks at the system level may really improve patient outcomes without directly intervening in the room that the doctor and the patient are in. I think we're likely to see that first and I think we're likely to see extension into clinical areas later.

 

Shiv: I think that sounds right and I know -- talking to a lot of physicians and med students and others -- one of the things we're most excited about is the reduction in documentation and the administrative burden that research has shown has led to more burnout and moral injury... systemic issues that I know you are well aware of. I want to be respectful of your time because we only have a few minutes left, so I only have two final questions for you. 

 

The first is just general advice you'd like to give to our listeners about approaching their careers. You've had such an interesting one at the intersection of healthcare, technology and leadership. Any advice you want to leave our listeners with?

 

Dr. Howell: It's a great, hard question. You know, when you find things that make you angry in clinical practice, that's often an area for improvement, right? So as a quality and safety professional, I talk about areas for improvement. That's part of the reason I got into quality and safety. I was really angry about going into a room and finding that all the stuff I needed for an emergent, life-saving procedure wasn't in one spot. 

 

In the early-to-mid 2000s, there was a lot of attention on this idea of 100,000 people being killed by accidents in US hospitals every year, based on the Institute of Medicine report. In the ICU, you see when something really bad happens to someone. If they don't immediately die, they come to the ICU. So, you're like, “Oh, these numbers seem plausible to me.” We'd also had a family member grievously injured by a really obvious medical error. It struck me that human suffering was being caused through inattention to detail, and that as a health system, as an industry, we should be able to do better than that. So, it's not to say that everyone should go into quality and safety -- definitely not. But when you find something that you're passionate about, that's a good area to work on. 

 

I'll tell one more story. I was a relatively young, new-ish ICU attending, and we had this patient flown in from another part of the state. They were really sick -- like, once-in-a-career, Sherlock Holmes kind of stuff -- coughing up blood through an endotracheal tube. We thought this patient had microscopic polyangiitis. It's super rare; you miss it like half the time on the boards. His spouse came in -- we were doing a family meeting at the very beginning -- and we asked something along the lines of, “We don't want to tell you things that you already know, and we don't want to assume that you know things that the other doctors haven't told you, so can you tell us what you understand about what's going on?”

 

And his wife goes, “I think he has microscopic polyangiitis.” And we're like, “Wow, are you a pulmonologist? That's amazing, because that's what we think he has too.” And she's like, “No.” “Do you have a lot of pulmonologists in the family?” She goes, “No.” We're like, “What do you do?” And she's like, “I teach second grade.” And we're like, “Wow, how did you figure it out?” She goes, “Well, I listened to these other doctors, and they kept talking about how he was coughing up blood, and he had respiratory failure, and his kidneys were okay. I put all that into Google, and seven of the top ten hits were microscopic polyangiitis.” 

 

I felt like I was seeing the future, right? This is fifteen years ago or something at this point. We know that that's not the majority of the time, but it's a great example of leveling information asymmetry and it's something we think about every day. How do we help people get this world-class information that they need for their own health, for the health of their loved ones? You know, when my dad gets sick, he has a Harvard-trained physician looking over his shoulder, helping him know what to type in, what queries to ask, which link is the best one, and all these things. I just want that for the world. That's another example. I'm not angry about that. I'm excited about that. 

 

So, my advice, to go back to your question, is when you find things you really care about, they're fun to work on. They're important to work on and at least for me, so far, it's been a great career getting to work on things I really care about.

 

Shiv: I love that. What a great story. Thanks for sharing that. Just two quick observations based on what you shared there. One is, there's this quote from Peter Drucker: “The best way to predict the future is to create it.” It seems like fifteen years ago, you were seeing the future, and now you've been at Google for six years, creating that future with all the work you and your team have been doing. So that's awesome. 

 

The second is, when I made my decision to go back to med school, I wrote an article for Forbes and put out a video saying, “These are the six reasons I've gone back to med school” -- mostly for myself, but also for students and others who contact me asking about these different paths. One of the things I said was that I have this two-“F” framework. One F is fear, and if you can push through fear, on the other side is generally growth. The second F is frustration, which you mentioned. There are tons of things to be frustrated about, and you kind of have to selectively choose which frustrations you turn into opportunity. It sounds like that's what you've been able to do. I've been working on stoicism to reduce how angry I get about these frustrating things, but then again, getting angry at them and having a high bias toward action is a superpower when it comes to making change, I think, too. 

 

So, my last question is, is there anything else that we haven't been able to ask you about today that you'd like to leave our audience with about you, about healthcare, Google, or just anything -- your favorite hobby -- whatever you'd like to share.

 

Dr. Howell: I'm excited for people who are going into the field today. I don't know what it's going to look like, but the chance to help take care of people at important times in their lives, whether with AI or without, is an unbelievable gift. So I'm excited for people who have that in front of them, like you.

 

Shiv: Thank you very much. Well, this has been a real pleasure, Dr. Howell. Thanks for taking the time to be with us on the podcast and more importantly, for the work you've been doing over the past several decades to improve healthcare -- not just for your patients -- but many millions of people who you'll probably never meet.

 

Dr. Howell: Thank you.

 

Shiv: And with that, I'm Shiv Gaglani. Thank you to our audience for checking out today's show and remember to do your part to raise the line and strengthen our healthcare system. We're all in this together. Take care.