Smarter health: How AI could change the relationship between you and your doctor
Stacy Hurt is a cancer survivor and patient advocate.
She says AI could transform health care for the better, as long as it doesn’t transform the sacred patient-doctor relationship for the worse.
“Any technology that occurs should only enhance that. It should not put any distance between that,” she says.
But what about the other side of that relationship?
“We wanted to eventually make patients’ lives better using AI.”
That’s Dr. Sumeet Chugh. He’s a cardiologist working on the development and deployment of AI in American health systems.
“There are brilliant people in the network and some of them might come up with an amazing idea,” he says. “But our hope is the amazing idea comes by keeping the patient front and center.”
Today, On Point: We’ll talk about AI and your care — in the fourth and final episode of our series “Smarter health: Artificial intelligence and the future of American health care.”
MEGHNA CHAKRABARTI: I’m Meghna Chakrabarti. Welcome to an On Point special series, Smarter health. Episode four: AI and your care.
What is health care? At its most elemental level, I’d say it’s a profoundly human act. Patients are people who need help. Doctors are people who want to help. So that makes health care a relationship. One of the most sacred relationships in our lives.
DR. VINDELL WASHINGTON: You know, I remember even as a young physician, just feeling how intimate the act of caring for an individual is.
CHAKRABARTI: Dr. Vindell Washington began his career in emergency medicine.
WASHINGTON: The trust that you have to have because you’re in trouble or you’re bringing your child, your most precious individual to you in the world … in for this care.
CHAKRABARTI: That intimate, sacred relationship is why we still call it health care, not a health transaction. Even as the American health care system has become aggressively transactional, increasingly impersonal, more expensive and less effective.
Which is why today, for the final episode in this series, we’re going to focus on how artificial intelligence could change your experience of health care and your relationship with your doctor.
Dr. Vindell Washington is now chief clinical officer at Verily Life Sciences, which is owned by Google’s parent company, Alphabet. Dr. Washington also served as national coordinator for health care information technology in the Obama administration.
So he has experience in hospitals and insurance, in government and now in the tech sector. At Verily, Dr. Washington leads the development of a new care platform called Onduo. It’s a virtual care tool that combines medical information with data that patients put in themselves, to create continuous customized treatments for chronic conditions.
WASHINGTON: So there would be a series of both passive and active data elements that might come across. You might have a blood pressure cuff reading, you may have a blood sugar reading, you may have some logging that you’ve done. So there’s mood logging that you can do with sort of a voice diary, etc., and they would all be sort of analyzed.
CHAKRABARTI: Practically speaking, how does it work? Well, patients see an app on their phone. Behind that app is machine learning technology that combines the patient’s medical and pharmaceutical data, information about who they are and where they live, insurance information, data from past doctor’s visits. And one more important thing.
WASHINGTON: I want to know things that happen, say, in between physician visits. And so for many years that’s been absent.
CHAKRABARTI: Think about it. To a doctor, the entirety of your life outside the five or 10 minutes she sees you in an appointment is really kind of a black box. The Onduo system proposes creating a window into that black box by analyzing those multiple data streams. Dr. Washington says the technology does that by incorporating another important information source.
WASHINGTON: It’s super important where I live and what circumstances I live in and whether or not I’m in a stressful environment. Whether or not I’m in a safe environment. Because those data feeds are often underappreciated in the direct care space, but they’re absolutely critical for the work that we’re trying to do.
CHAKRABARTI: Onduo is particularly focused on chronic diseases, the conditions that account for the vast majority of health care costs in the United States. One in every four health care dollars spent in this country is spent on care for people with diabetes, according to the CDC. Americans also frequently suffer from more than one chronic disease.
For example, a patient could have high blood pressure and also suffer from diabetes and depression due to the stress of managing those chronic diseases. So Dr. Washington says while doctors know that such diseases are interconnected, they’re often treated separately. He says Onduo’s real time machine learning analysis takes a different approach.
WASHINGTON: Maybe talking to you about your blood pressure when you’ve expressed moderate anxiety or moderate depression is not the first step that you should take. Maybe we should get you to your licensed clinical social worker first. And even though you still have other conditions, it’s an order of march, and it’s an intensity that’s driven by machine learning kind of approaches.
As opposed to saying six things are done for a diabetic, ten things are done for a moderately depressed person, and these seven things are done for somebody with uncontrolled hypertension. So if a patient comes in and they have eight items on their problem list, the ordering of the problem list is a bit of an art. And I would never be one that says that there should be no art in medicine. But I would like to be in a position to offer helpful suggestions to the providers delivering the care.
CHAKRABARTI: The Onduo system adds another layer. It’s also analyzing data across entire groups of patients, people with similar conditions living in similar environments. It’s taking a look at how they’re responding to various treatments. The tool then incorporates that information into the individual guidance it gives to doctors.
WASHINGTON: That kind of learning cycle is pretty rare in medicine.
CHAKRABARTI: Washington says the doctor isn’t the only one getting insights from the AI. Patients also get the advantage of provider feedback and advice in between regular medical appointments. So that takes us back to our main question. What impact could a tool like this have on that sacred doctor-patient relationship? On this, Dr. Vindell Washington is adamant.
WASHINGTON: We should not attribute some degree of intelligence to the machine itself.
CHAKRABARTI: He says AI should assist or enhance, but never replace, a physician’s judgment.
WASHINGTON: If things don’t seem right, they’re probably not right. If the outcomes are suspicious, you should apply as high a degree of scrutiny, particularly to the training aspects of this, as you would to any complex endeavor that you’re undertaking.
CHAKRABARTI: And how could it change the patient experience? Dr. Washington says that depends on one thing: a patient’s trust. Not necessarily trust in the technology, but trust in their doctor.
WASHINGTON: There’s no shortcut on the trust side. I mean, trust, and I’ve said this in my own life, in lots of different areas. Trust is really based on a series of promises that have been kept. And so when I think about what’s going to break down the barriers between people and AI for some of their chronic disease conditions: if the combination of predictions and interventions is improving over the course of time, and I get something that’s better for me than just the gestalt of my well-meaning and smart physician, then I’m all in for that. And I think that’s the way the public is going to react in the long term.
CHAKRABARTI: Dr. Vindell Washington is Chief Executive Officer at Onduo. That’s at Verily Life Sciences. Well, let’s turn now to Stacy Hurt. She’s a patient and caregiver advocate, a consultant, and has spent 20 years working in health care and physician practice management. And she joins us from Pittsburgh, Pennsylvania. Stacy, welcome to On Point.
STACY HURT: Hi, Meghna. Thank you for having me.
CHAKRABARTI: So, first of all, do you share Dr. Washington’s overall optimism that he had there at the end, that in the long run he sees AI as enhancing both health care and, potentially, the doctor-patient relationship?
HURT: I definitely do. But I share the same reservations about technology replacing that relationship, which is absolutely sacred. I’m a stage four cancer survivor, and I just think back to the times when I was interacting with my oncologist. And he had a saying: I can tell if a patient’s doing well just by seeing them. I can walk into the room and know if they’re doing well, if they’re not doing well. And that’s not something that artificial intelligence can do.
I think about, you know, going through treatment and going through scans and how stressful that was. And I remember celebrating the good results and hugging my oncologist. And you can’t hug AI to celebrate those kinds of victories. So those are the concerns I have. But the optimism that I share is, as Dr. Washington said, AI aggregating this data and picking up on disease patterns, so that we can identify those patterns, identify disease sooner, and hopefully get to faster cures and improve outcomes and ultimately save lives.
CHAKRABARTI: But about that fundamental relationship, sacred is the word that you used. And we’re incorporating that into this conversation. And we’ve got about 30 seconds before our first break. How much did the relationship you have with your doctors contribute to your overall sort of path towards beating the cancer, you think?
HURT: That relationship was everything, not only between me and my oncologist, but me and my nurse. And they were wholly invested in my survival. And I wanted to fight cancer and beat it for them. Because I knew that they believed in me, and that they were doing all that they could to give me the best treatment possible. So that relationship kept me going, gave me hope, gave me faith, and ultimately led me to survival and victory.
CHAKRABARTI: Well, you know, when we come back, we’re going to talk more about how AI could have an impact on that, especially given how technology already has changed, in a sense, some fundamentals about the doctor-patient relationship. So it’s part four of our special series, Smarter health. We’ll be right back. This is On Point.
CHAKRABARTI: Welcome back. This is On Point. I’m Meghna Chakrabarti and it’s our fourth and final episode in our special series, Smarter health. And today, we’re sort of bringing it all together, and talking about how the rapid advances of artificial intelligence in health care can have an impact on your care. And specifically that all-important doctor-patient relationship. I’m joined today by Stacy Hurt.
She’s spent 20 years in health care and physician practice management. She is a patient advocate as well, and she joins us from Pittsburgh, Pennsylvania. And, you know, we’ve been receiving lots of calls and emails about this series over the past many weeks. And in a sense, they all culminate in a series of questions that are very appropriate for this final episode, such as this one from Sandy, who called us from Seattle, Washington.
SANDY: I am hearing from medical doctors that as AI becomes more prevalent, physicians are losing skills and they are also losing sensitivity to the human beings that they are having to deal with. People lean on their physicians, they trust them. They expect a connection and understanding from them, which they should get.
CHAKRABARTI: So let’s bring in Dr. Sumeet Chugh from Los Angeles now. He’s director of the Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center and director of the Center for Cardiac Arrest Prevention there as well. Dr. Chugh, welcome to On Point.
DR. SUMEET CHUGH: Thank you for inviting me.
CHAKRABARTI: So you’ve already heard this word trust mentioned a half dozen times just in the beginning of this show. What is your view on the impact that the advancing of AI in health care could have on that fundamental trust between a doctor and a patient?
CHUGH: I hear the same thing from my patients, that the relationship is sacred and we have to do anything that we can do to preserve trust. But especially in response to the question we had from Seattle, Washington, I would take a slightly different approach. I would say that artificial intelligence could actually help us improve the doctor-patient relationship and increase the trust. We just have to give it a chance.
CHAKRABARTI: And how could it do that?
CHUGH: So there’s no question that over time, this relationship, the human aspect of medicine, has degraded. And largely it’s due to the fact that there are pressures on physicians that result in spending less time with the patient. There are other aspects where physicians today are interacting with a keyboard and a computer when they should actually be looking at the patient. So I think that there are ways that AI can help mitigate those issues and especially buy more time to spend with the patient.
CHAKRABARTI: Right. So about that, looking at the keyboard instead of interacting, I mean, that’s one that always comes to my mind in thinking of doctor’s appointments I’ve had. And in the series we’ve covered how, you know, a major area of AI could be just in record keeping that would free physicians from having to focus their attention on that and could refocus on patients. But Stacy Hurt, jump back in here, please, because you’re the voice of our patient advocate here. What are your thoughts about what Dr. Chugh said?
HURT: I agree with Dr. Chugh. I think that there’s tremendous promise. And I know there’s a company working on natural language processing that captures what the patient is saying during the appointment, and puts it into the notes and into the electronic medical record. So in that way, if artificial intelligence and technology can help take that burden off of the physician of, as you said, having their face down in that computer, but more focused on the patient and what the patient is saying and the patient care, I think that’s a total win-win for everyone.
CHAKRABARTI: But insofar as the patient experience goes already, Stacy, I mean, you can understand how some people, and a lot of our listeners, quite frankly, are rather cynical about the prospect of AI. Because, you know, as Dr. Chugh said, they’ve already seen how any new introduction of technology, in their view, has actually done nothing but further degrade that relationship. I mean, is that cynicism well-placed, Stacy?
HURT: Well, I think that that cynicism comes from a lack of knowledge or education. So I think this is where we really need to focus on health literacy and educating patients as to the advantage of technology and artificial intelligence. I don’t think that we’re doing nearly enough to talk to patients about how technology can help. And we’re all partners in this health care ecosystem, and we all have a right to know what’s going on.
And in terms of, you know, this isn’t artificial intelligence handling our banking or flying a plane. This is AI handling our most precious asset, our health care and our data that we own. So we absolutely have a right to know as much as possible about what’s happening. And when you do that, and when you increase that education and close those sort of knowledge and trust gaps, you’re going to have better adoption, better trust, better utilization, and the patients will be on board with that technology.
CHAKRABARTI: Okay. So I want to talk with you about that more a little bit later in the show, about how to close that gap. But, you know, here’s something for both of you, because over the course of the months that we researched and reported this series, there’s this phrase that kept coming up over and over again. And Dr. Chugh, I’m going to start with you and your thoughts on this. It has to do with patient-centered care.
That phrase. And to me, as a patient, I wonder: Is there any other type of care? Like, why does it deserve this special phrase? The reason I ask is that it’s really central to developing new technologies, and particularly artificial intelligence tools, in health care. So honestly, how often are patients, and their needs and their views, actually at the center, or even a major part of the beginning, of the development of new AI tools?
CHUGH: I think that’s a very important point. And in fact, I think the term needs to be repeated. And the reason is that over time, a lot of the developments, especially for AI, have been coming from big technology. They haven’t been born. They haven’t been conceived within health systems. And so just reminding ourselves that as we think of AI, and as we think of how AI can improve the doctor-patient relationship, we must include the patient even as we think of developments, even before we get to putting them in processes.
CHAKRABARTI: What does include the patient mean, though?
CHUGH: I think it means that they’re the most important stakeholders. So let’s take an example of how we function. As a division of AI in medicine, we function from within a health system. So instead of the technology coming from somewhere else, it’s coming from within the health system. So each question that we come up with comes from the patient and the doctor-patient relationship. It comes from gaps in knowledge that we have identified. So actually, the awareness we are developing of what is important is coming from the doctor-patient relationship.
CHAKRABARTI: Okay. So Stacy Hurt, do you think there’s enough of that patient-centered thinking early enough in the process of developing technologies, particularly AI, to lead down the road that Dr. Chugh is describing?
HURT: Absolutely not. Our patients need to be part of the co-design process from the very beginning. So a tech developer or technology company from the very inception of that idea should have a patient involved in co-design of that product. Unless technology is solving a problem for patients, it’s a complete waste of money. And then taking that a step further. If the technology then isn’t adopted by the insurance company, it’s not going to be in the mainstream and it’s never going to be utilized. So with all due respect, it’s kind of pointless.
But circling back, in terms of patient-centered design, I think what we’ve seen to this point, unfortunately, instead of patient centricity, it’s been profit centricity. And that’s where we need to shift. We’re talking about the shift to value based care and that’s around the patient. Will they use it? Will it solve a problem for them? Will it improve their care and ultimately extend their life?
But that begins with co-design with patients at the beginning. We are here to help. There are a ton of us who are high-level advocates who, through our lived experience, know about this technology, want to use technology and want to know more. So, tech developers, ask us, and we will be side by side with you from the beginning to make it work.
CHAKRABARTI: So, Stacy, you just took a baseball bat swing at the wasps’ nest that I was hoping you would, and mentioned the profit centricity that’s at the core of particularly American health care. Because this is the thing that most of our listeners who responded to this series, this is what’s resonating with them: their concerns that the system’s already out of whack in terms of its incentives, in terms of how insurance works. They feel it in the pressures that doctors are under.
And so they just think that something as sophisticated and difficult to understand as AI is going to exacerbate all of those problems. Like, for example, we heard from a listener named Hikari who listens to us from Southern California, following episode two of our series, where we talked about ethical issues around AI through the example of software in place at Stanford that predicts a patient’s mortality, a death predictor, really, to put it roughly. And Hikari called us, telling us she was worried that AI will simply keep patients with terminal diagnoses from getting the kind of care that they actually need.
HIKARI: As someone who just recently lost her father to cancer, I am just shocked at how some doctors and our insurance basically disregard people who are maybe, quote-unquote, at the end of their life. He was completely disregarded by his health insurance and his first oncologist. Thankfully, we were able to find an oncologist who deeply cared about my father’s survival. But this video technology horrifies me.
CHAKRABARTI: Now, Stacy, if I understand correctly, when you first got your diagnosis, your cancer diagnosis, you were given, what, an 8% chance of survival?
HURT: 8%, that’s correct.
CHAKRABARTI: So Hikari’s concern must resonate with you?
HURT: Yeah. It hits really close to home. And we have a saying in the cancer community, for those of us with advanced metastatic disease: stage four needs more. And that is absolutely what Hikari is saying here, that, you know, the last thing that I want to see as a patient advocate is AI detecting certain disease patterns or behaviors that are trending towards mortality, and exactly what happened here, that a patient is disregarded.
So that’s where we need to sort of set up these guardrails and we need to sort of check ourselves on AI. And you need this hybrid approach and this human intervention to sort of look at that and say, Well, wait a minute, let’s talk to the family, let’s talk to the patient and the care partners. What do you want to do? Do you want to continue to fight? Do you want to continue to pursue medical intervention? And if they do, we absolutely owe that to them to keep going and keep fighting. We shouldn’t have an algorithm making those decisions. So I offer that.
But on the flip side, I was asked to comment on an article about AI detecting end of life and prompting conversations about the end of life, which I said I’m all in favor of. Because too often in my advocacy I talk to patients who don’t have advance directives and don’t have living wills in place. And as we saw in COVID, we had people who were dying alone, unfortunately, because of the shutdown, and care partners were unable to be with them.
So those final wishes, those end-of-life decisions weren’t in place, and nobody was there to make them. Ultimately, medical teams were making them for them. So if AI can help and aid in that decision of end of life, and reduce the burden for the family, that would be a scenario I’m in favor of.
CHAKRABARTI: So, Dr. Chugh, let me turn back to you here, because this is where your experience is particularly illuminating in this conversation. Because, I mean, you’re doing work in developing AI tools, specifically around cardiac arrest. And from what I understand, there seems to be a lot of promise here that there are some AI tools that can significantly help physicians.
Because in a certain sense, they’re better at the diagnoses than doctors themselves. So you’re an optimist, but you’ve also told us that you’re a realist here, because as those tools are being developed, they should be appropriately vetted so that they avoid ending up with the nightmare scenario that Hikari, our listener, talked about. Can you tell me more about that?
CHUGH: Well, that is correct. And by the way, I completely agree with the sentiment that Hikari is expressing, and I think Stacy has put this in words beautifully. So the AI discoveries that are mushrooming today are made from existing datasets, and these can sometimes, you know, represent a somewhat rosy best case scenario.
So what I mean by these discoveries being vetted is that whether it’s for cardiac arrest or it’s for a heart attack or it’s for cancer, we need forward looking, randomized clinical trials done in real time, or we need real world prospective studies to confirm the utility of AI tools before these are ready for prime time.
CHAKRABARTI: We need them. Do you see systems being put in place to do the vetting that we need to happen?
CHUGH: Well, traditional science within clinical medicine does dictate that. Certainly, I listened with great interest to the first three segments as well, and as you and many speakers made the point, some of these are in place. But a lot of development, for regulations, for how science should be conducted, for how these discoveries should be approved by the FDA, needs to happen in parallel as these developments happen. So we have a lot of structures that still need to be put into place.
CHAKRABARTI: Okay, Stacy, we’ve got 30 seconds before we have to take the second break here. What’s another main thing, another structure in terms of the development of AI that you would like to see put into place?
HURT: Well, in terms of structures, I just think about health equity. I just want to make sure that underserved populations have as much access to technology and artificial intelligence as I did as a white, privileged woman living 30 minutes away from a nationally acclaimed cancer center. So in terms of systems, I think that we need to break down these upstream social determinants of health factors that are preventing everyone from having access to the best technology possible.
CHAKRABARTI: Well, that’s Stacy Hurt. She’s spent 20 years as a patient advocate and consultant, also in physician practice management. She’s a cancer survivor herself. You’re also listening to Dr. Sumeet Chugh this hour. He’s director of the Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center. And we’re talking about how AI could change, for the worse or for the better, that fundamental physician-patient relationship. More in a moment. This is On Point.
CHAKRABARTI: This is On Point. I’m Meghna Chakrabarti. And today it’s our fourth and final episode in our special series, Smarter health. And for this final series, we’re really focusing on your experience and that fundamental patient-doctor relationship that should be at the heart of all health care in this country. And I’m joined by Stacy Hurt. She spent 20 years in health care and physician practice management. She’s a patient advocate as well and a cancer survivor. She joins us from Pittsburgh, Pennsylvania. Dr. Sumeet Chugh is with us as well. He’s director of the Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center and director of the Center for Cardiac Arrest Prevention there, with us from Los Angeles.
Now, I want to just share with both of you. Again, we heard such a wide variety of examples of the impact that AI could have on health care. And here’s another one. This is Dr. Ryan Lee. He’s a radiologist at Philadelphia’s Einstein Health Network, and he says he thinks of AI in health care in sort of three buckets: logistical algorithms, workflow-based algorithms and diagnostic algorithms. And he says logistical algorithms have the potential to make a big difference in ensuring that patients get the follow-up care they need.
DR. RYAN LEE: For example, a patient notification using natural language processing to mine reports that, for example, might have a recommendation for a follow up study or a follow up for another physician, and using that natural language processing to identify those reports and then automatically send notification to patients. I think this is a real opportunity to close the loop, so to speak, in which we’re able to directly notify and know when a patient has actually done the appropriate follow up.
CHAKRABARTI: Think about what the system is like right now. Oftentimes a doctor, but more often a nurse or physician’s assistant or office manager at your doctor’s office, is the one that has to do the follow-up call with you. They don’t have the time to do that as effectively or efficiently as we might like. So AI could potentially be very powerful here to help keep a continuity of care, let’s put it that way, between you and your doctor. But on the flip side, again, we’re hearing concerns from our listeners. Because Timothy Smith, an On Point listener, was actually listening to an episode of this series while struggling with technology at his doctor’s office. And here’s what he’s concerned about.
TIMOTHY SMITH: I want to know if there are studies being done about whether AI is actually harming patients, because I’m literally working off a lot of anger from having to deal with AI this morning on a medical appointment I had.
And I thought, boy, this is actually contributing to bad health. I’m having a bad experience at a medical provider, and I haven’t even met the doctor yet.
CHAKRABARTI: Stacy Hurt, what do you think about that?
HURT: I mean, I’m not going to disagree with Timothy. You know, we look at COVID and the accelerations in technology, and what was asked of patients in the middle of COVID. To log on to your patient portal to figure out if you had access to broadband Internet and then to do virtual care. I mean, that’s a lot for your average patient or somebody who doesn’t understand technology or is in a remote underserved area. So I do get the anger aspect. He’s not wrong.
But on the flip side of that, to the opening example, you can have the best plan in the world, but if a patient doesn’t follow up with it, it’s completely useless. So these reminders, you know, that are triggered, you know, to go and get your screenings done. I mean, you know, I’m a colorectal cancer survivor. Screenings were down 80% in COVID, and we know that only 40% of patients now are up to date with their screenings. So if AI can aggregate the data in a patient’s chart to remind them, hey, you need to go get screened, and help them avoid what I had to go through, stage four colorectal cancer.
CHAKRABARTI: Well, so, Dr. Chugh, I think there’s an opportunity here, within the very legitimate set of concerns that listeners are having about their struggles already with technology. And there’s always an opportunity. And one of them comes up. It actually came up in our previous episode about regulation. And our guest in that episode, Finale Doshi-Velez, a computer science professor at Harvard, said that unlike chemical medications, AI is different, in that it might give us a different way to measure and act more quickly when it comes to adverse events. But right now we don’t really have a system set up to monitor adverse events with AI. What do you think about that?
CHUGH: So I’m an optimist, but I’ll start with a realistic note. And in response to what I’m hearing in your question, I would say that AI is not going to fix all the problems that ail modern health care. However, I think that there are some real opportunities that we have. And on a more optimistic note, the power of AI is already being used to help save lives. So, you know, you can make diagnoses of genetic diseases better. And the workflow is one aspect. But there are many other aspects of AI, like computer vision and robotic surgery, that benefit health care on a daily basis.
CHAKRABARTI: And so this is actually why I mentioned the question of adverse events. Because, you know, theoretically, with medications, a certain number of people have to have adverse events or bad reactions, and then big studies are done to figure out how and why. But again, theoretically with AI, the moment something goes wrong, I mean, we could go back and look at every decision or every piece of data that the AI looked at to make whatever recommendation it did.
So again, theoretically, we could have only one adverse event before some changes are made to improve the technology. Does that sound realistic?
CHUGH: Yes, it is. In fact, we are in some early phases of AI, where we’re doing very well with narrow AI. But let’s take the example of medical mistakes. We know that as much as we try to prevent them, these happen commonly. And there are studies in progress that are trying to get in front of these medical mistakes by analyzing existing datasets. And absolutely, whether a mistake happens at a technical level, at a physician’s prescribing level, at a nursing level, at a pharmacist level, we have lots of opportunities to use AI to help us do better.
CHAKRABARTI: Again, though, just to quote Stacy, we need to set up some guardrails around that. And I think this is where we as patients, as Americans, maybe there are standards for AI that we can ask our regulators to set up. But we have to refocus for a moment on what you mentioned earlier, Stacy, about profit in the American health care system. So what do you see as the main cost drivers, Stacy, in health care over the next 20 or 30 years? And how do you see AI playing into that?
HURT: Well, I’m going to be very candid here. I know that cancer is big business, and I know that by the year 2030, there are going to be 22 million cancer survivors in the United States, which will be up 34% from where we are now. So more people are, you know, walking around with cancer as a chronic illness than ever before. And I can tell you that when I was in active treatment, being poked and prodded, with chemotherapy going through my veins, I saw a couple of those bills. And it was tens and hundreds of thousands of dollars, you know, for treating cancer patients. There is profitability there.
And then I went into survivorship, and I feel like the system kind of left me behind. I’m still managing several chronic conditions as a result of my treatment, yet I don’t feel like I’m getting the support that I need in my survivorship to be a fully functioning American consumer. So in terms of managing cost, I definitely think we need to work together on this: all stakeholders in the system, the payers, meaning the insurance companies, the clinicians, the patients, the regulators, the pharmaceutical industry.
I think that we need to come together and look at how we can do better for patients and cut down some of these costs by being more efficient. And certainly, AI, as you said, Meghna, can play a huge part in that, in terms of reducing diagnostic error and eliminating unnecessary tests and unnecessary procedures.
CHAKRABARTI: So let’s talk for a moment about one of those stakeholders, and that is the payers, the insurance companies. Dr. Kedar Mate is CEO of the nonprofit Institute for Healthcare Improvement. And he told us that payers already have a lot of information about us, about patients. So he says there is an opportunity for AI to improve coverage, just as much as there’s a risk of insurance companies making coverage less secure or less safe.
DR. KEDAR MATE: Might artificial intelligence or machine learning, big data type of activities, increase the stream of information that’s flowing towards payers? Potentially. There can be safeguards put in place against that, but certainly that’s a possibility. And might the insurance company use that information to deny claims or to deny coverage, potentially down the line? Again, they have a lot of that information already, and it’s in the interest of insurance companies to cover lives, but to do so in a way that is supportive of their health in the longer term.
CHAKRABARTI: Dr. Chugh, this gets us back to what I think is one of the fundamental tensions in this hour. Because of the way the U.S. health care system is run and financed, is there a tension between what benefits health systems and what benefits patients? The insurers seem to be one of the groups at the center of that point of tension. And if so, how does AI change that?
CHUGH: So, Meghna, I think it’s a very important fundamental question that extends in part beyond AI, but let’s take using AI in health care as a tool. There are certainly lots of opportunities within health systems to improve workflows and to make aspects of care more efficient.
So, for example, if a patient is in an MRI machine and I can help capture those images, analyze them, and have the test done in 10 minutes instead of 30, then the patient doesn’t have to stay as long in a noisy MRI, the health system does a better job, and costs are saved. So I think, as Stacy said, all of us as stakeholders have to work together. And yes, those tensions exist. But I personally feel that I can make a difference in reducing them.
CHAKRABARTI: Stacy, I’d love to hear your view on insurers here.
HURT: Oh, boy. So in addition to being a stage four cancer survivor, I have a severely intellectually and developmentally disabled son. And he has a lot of services that are covered by the insurance company. And routinely in the past, we have been denied for services. And I’m convinced it’s because of just a machine going through and automatically denying him either nursing services or medications, etc., just because, you know, the AI is set up to automatically deny when it sees something going on.
And then I have to come in and fight for my son, and fight for his nursing, and fight for his medications. So, you know, again, this gets back to those sorts of checks and balances: You can have a machine going through and looking for certain things. But you have to have a human being coming in with the oversight to correct it and say, oh, no, Stacy’s son does need these things. And he’s had these things for six years.
My son has a genetic condition. It’s never going to change. He’s probably never going to walk, or talk, or do anything for himself. So by now, the machine should be trained to know what he needs. And I shouldn’t have to fight this hard. So, you know, again, I’m optimistic about AI. I think the biggest potential for AI is capturing those we’re missing, reaching out and getting in the patients who are being missed. But these nuances, this personalization of care, need to be preserved for patients.
CHAKRABARTI: So we have less than a minute left with both of you. The time has gone by far too quickly. But, you know, our job here on this show is to be the patients’, the listeners’ advocate. So my last question is for both of you, and Dr. Chugh, I’ll start with you. What’s the one thing that you want listeners to come away with in terms of understanding AI and health care? What should they think the next time they go into their doctor’s office?
CHUGH: So I think that AI holds tremendous potential to improve health care and the lives of patients. Some aspects are already benefiting patients, but we’re still early in the evolutionary process. And as they visit the doctor, they need to ask for complete transparency, which I think is the right way to do this. But they should also work actively to improve their awareness, so they can participate in the development of AI in medicine.
CHAKRABARTI: Well, Dr. Sumeet Chugh is director of the Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center, joining us from Los Angeles. Thank you so much, Dr. Chugh.
CHUGH: Thank you for inviting me.
CHAKRABARTI: And Stacy Hurt, patient advocate who spent two decades in health care and physician practice management. I also understand, Stacy, that you recently saw your son graduate, so congratulations on that. And thank you for joining us.
HURT: Thank you so much. It was a great milestone to be alive to witness. And it was great being here today. Thank you.
CHAKRABARTI: Well, that’s it for this special series, where we talked about where AI is going in health care, the ethical questions around that, the new world of regulation that has to be built in a hurry. And today you heard about how all of these changes could impact your health care.
As we’ve mentioned, we researched AI and health care for four months, and we came back with a lot more stories than we could fit into just these four parts. So we’ve dropped another story into our podcast feed. It’s a special one about how a specific type of AI was used for the first time ever to screen travelers to Greece during the pandemic. You’re going to want to check that out. You can also listen to all the episodes of Smarter health here.
This series is supported in part by Vertex, The Science of Possibility.
This article was originally published on WBUR.org.
Copyright 2022 NPR. To see more, visit https://www.npr.org.