
Leveraging Generative AI to Solve Healthcare’s Workforce Challenges
Date: Fri, Jan 19, 2024, 06:00 AM
Introduction
As resource constraints persist and patient demands increase, digital healthcare leaders are beginning to invest in the new frontier of generative AI. The explosive growth of foundation and large language models, such as GPT-4, points to a future in which care teams are empowered with AI and generative-AI automation tools to optimize workflows, reduce administrative burden, and alleviate clinician burnout.
At this unique time in healthcare, the potential benefits and disruptions of AI tools are poised to transform many established paradigms of what is possible. Health providers are being asked to manage entire cohorts of patients, with significant demands for positive quality outcomes and seamless patient experiences, while realizing maximum economic benefits for their organizations.
In this context, how can organizations relieve overworked clinicians and staff by combining existing capabilities of AI, which has been used for things like speech recognition and data modeling for many years, with the potential of reliable, generative AI tools? How can they do this in the healthcare context with all its regulatory and ethical constraints? How can potential risks be identified and mitigated through emerging governance models?
To find answers to some of these questions, CHIME joined Nuance, a Microsoft company, for an invited roundtable of clinician thought leaders from organizations serving a wide range of diverse communities across the Southeastern U.S. The group included physicians, nurses, and a physical therapist, all of whom served in executive roles at the cutting edge of innovation. The roundtable discussion explored the value and limitations of generative AI, as well as real-world cases in use today in which AI models are generating solutions to some of healthcare’s biggest workforce challenges.
“I know that healthcare delivery organizations are desperate to find solutions for their workforce challenges,” said Nicole Kerkenbush, MHA, MN, BSN, RN, CDH-E, Vice President of Education for CHIME, who moderated the discussion. “Artificial intelligence (AI) has been a buzzword for the last decade. However, as we delve deeper into the AI landscape, we should acknowledge and understand its distinct forms. Among the emerging trends, generative AI, a subset of AI, has shown immense potential in reshaping multiple industries, including healthcare.”
Kerkenbush was joined by Nuance’s CMIO, Reid Conant, MD, FACEP, and its Director of Global Customer Engagement, Sara Huntley.
CHIME Participants included:
* Deborah Backus, PT, PhD, FACRM, VP, Research & Innovation, Shepherd Center
* A Senior Nursing Executive at a Multi-Regional Health System
* Jeff Cleveland, MD, Ambulatory CMIO for the Southeast Region at Advocate Health
* Terri Couts, RN-BC, MHA, CHCIO, SVP, CIO, The Guthrie Clinic
* Matt Sullivan, MD, Chief Medical Information Officer, Atrium Health
Together, these experts shared their ideas about emerging AI use cases in the modern healthcare environment, and how generative AI might help relieve some of these workforce challenges.
Surveying the AI landscape in real-world clinical settings
With the media raising the profile of generative AI in the public consciousness, many consumers may assume it can do almost anything, like the sentient computers depicted in movies and television shows. But in actual clinical settings, with high stakes and heavy regulation, innovators are taking a measured, scientific approach to defining the scope of what is possible.
“What we’re really interested in is how it could shift some of the work for our clinicians,” explained Terri Couts, SVP and CIO at The Guthrie Clinic. “We’re going to explore use cases to prove that it actually can generate efficiencies.” One such use case is creating better workflows in hospital operating rooms, targeting check-in processes and other functions that impact the organization both financially and in terms of patient satisfaction. Another experiment involves using AI for tele-case management across a large regional healthcare system, to relieve some of the travel burden on existing staff. “We just don’t have the staff to go out and do that case management like we used to,” she added.
Representing post-acute care, Deborah Backus, VP of Research and Innovation at The Shepherd Center, is looking at uses for AI in the world of rehabilitation from catastrophic spinal cord and brain injuries, and degenerative diseases like multiple sclerosis (MS). “We're using AI to support staff and operations, but also to direct precision rehab,” Backus said. “We are trying to use AI to optimize our data for improved care decisions to the point where our clinicians could say, ‘Okay, based on all of this data and what you're presenting with today, as well as how you look relative to your physical abilities, or where you're going home to, all the social determinants of health considerations, we think you need this path.’ If we can figure out how to get them into that path sooner, we will have a lot less waste of time and resources.”
For Jeff Cleveland, MD, Ambulatory CMIO for the Southeast Region at Advocate Health, the goal is two-fold: delivering the best care for patients while also being the best place for caregivers to work. “We started a full-service semi-automated clinical documentation pilot last year in Orthopedics.” The project uses ambient listening to capture physician-patient interactions, then recognizes and transcribes the clinical conversations. He explained that in this iteration, the captured conversations must be reviewed by quality documentation specialists before draft summaries are presented to clinicians.
However, Dr. Cleveland noted that his partner in another pilot, Nuance, has combined its voice-recognition technology with generative AI in a fully automated ambient solution that captures and then summarizes clinical notes using standard medical terminology, without the need for quality reviewers. “I hit the microphone and as soon as I’m walking in the room, I hit it again to stop the recording; by the time I sit down at my desk, I’ve got a draft note,” he said. “It literally takes a few seconds for the computer to do its thing and present that draft to me.”
How quickly generative AI has carved out a new niche and is changing medicine and healthcare “is mind blowing” to Reid Conant, MD, CMIO at Nuance and an emergency room physician. “What probably excites me the most about this kind of ambient listening platform is that we’re just at step one, right?”
As far as success in this early phase, removing burdens from clinicians appears to have an energizing effect. “I’ve had glowing reports back from my colleagues,” Dr. Cleveland reported. “They have told me, for example, ‘I’m going to practice medicine for another decade because of this; thank you for saving my career, literally;’ and, ‘I don’t go to the office two hours early to get ready for my day anymore because I know this [AI-based technology] is going to capture the information for me.’”
On the other hand, not all clinicians adopted the technology, according to Dr. Cleveland, who estimated that about 30% dropped out of the pilot due to what he concluded was misalignment with their personal workflows. “I’m looking forward to pushing it out and giving it to my late adopters, and then blowing their minds because it’s writing the note for them,” he added.
Some people argue these generative AI tools take the clinician out of the medicine, while others argue the technology gives them time back, Kerkenbush noted. Matt Sullivan, MD, Chief Medical Information Officer at Atrium Health, responded, “It may be that the really good physicians who can use AI to their maximum, are able to get the data that other physicians can’t get, by spending that extra time asking the extra questions.”
The human-computer partnership requires human judgment, perspective, and experience, but the human factor has limitations. Dr. Sullivan referenced William Osler, the famous historical clinician who could seemingly diagnose any illness, a feat sometimes called the Osler Phenomenon: “I think there’s not that one guy who knows all those things and who is the best diagnostician.” If there ever again will be such a master diagnostician, “they won’t be human,” Dr. Conant predicted.
For anyone considering possible use cases for generative AI, Dr. Sullivan suggested several areas for exploration. “As far as efficiency components, we’ve hit on that,” he said. “I would add a quality-safety component; a finance component which obviously keeps the wheels working, keeps me employed; and then there’s a research side, exploring how to train it, how to build it, and how to do something useful with it.”
Concerns and risks surrounding generative AI
While clinicians like Dr. Cleveland say they are impressed with the current state of AI, they acknowledge it isn’t perfect, even if it is improving all the time. “As an industry, we will need to get comfortable with AI that’s imperfect to some degree as we move ahead,” Dr. Conant suggested.
Using another technology, self-driving vehicles, as a metaphor, Dr. Cleveland said, “If we all drove self-driving cars, thousands of lives would be saved. It’s the same kind of pushing-the-edge-of-your-expectations challenge: Does it have to be perfect, or can it be just ‘better’ and provide a homogeneous level of care that will continue to improve as we broadly adopt it? This is going to be our experience with this kind of technology. It’s good now, but it’s going to be really good in a couple of years, and it’s going to blow our minds in a decade.”
In this vein of accepting current limitations with the promise of major future advancements, Dr. Sullivan said, “Generative AI can actually find data, put it together, and right there in the real world provide clinicians with decision-making guidance.” He foresees a scenario in which AI could use the data from billions of cases, as Epic has already started doing with its Cosmos platform, and combine it with patient risk factors to suggest possible courses of action, or at least generate a checklist clinicians could use to better diagnose cases based on common comorbidities.
Dr. Cleveland characterized this scenario as a “metanarrative” in which all the available data, the smaller narratives, are woven into the big-picture guidance a clinician needs to make care decisions. Dr. Sullivan agreed, envisioning a future workflow in which a Gen-AI suggestion would be evaluated by the clinician based on their training and experience. “That’s probably the right approach: ‘Here’s what we think all the components of care need to be based on what we’ve gathered for you. Do you agree?’ And then as a clinician, you have the option to disagree,” he concluded.
The ability to disagree may make sense and provide clinicians additional control, but the thought leaders were concerned about legal liability if a patient had a bad outcome after a doctor chose not to accept an AI-generated diagnostic profile or suggested treatment options. “In the future, if we have AI-generated recommendations and people follow them, we will probably protect them if there is a bad outcome,” Dr. Sullivan said.
Current implementations of AI governance in healthcare
Acknowledging the need for protection from the risks of such AI-assisted decision-making features, participants shared their experiences with current iterations of governance around the use of AI, and especially Gen-AI. One of the primary questions around governance is: which stakeholders should be at the table?
“We haven’t said to people, you have the permission to disagree with the computer,” Dr. Cleveland said. “We haven’t gotten to that point. Once it’s become the standard of care, am I going to get sued because I disagree with the AI-generated recommendation? Yes, and it’s terrifying. And I think that’s going to be an ethical issue for our physicians soon.”
Dr. Conant agreed it’s a likely scenario: “Will there be times when you override the recommendation of AI? Of course, right. It’s a reference, just like other guidelines.”
Responding to the sudden burst of Gen-AI in healthcare, many providers are in the early stages of developing governance. A senior nursing executive at a multi-regional health system suggested that one key challenge has been identifying stakeholders who represent a wide, relevant cross-section of the enterprise for inclusion on AI governance committees, groups dedicated specifically to managing all aspects of how AI is used within the organization.
AI is now included in data governance at The Guthrie Clinic; Couts explained that the governance body includes compliance and legal stakeholders, as well as CFOs, but is less IT-focused. “I’m the only IT person,” Couts said. “The governance focuses on, ‘Is this the right technology that we want to give decision support in our system?’ It’s more around the ethical aspect. Once the security aspect and the technology standards are done, the focus shifts to resources.”
Suggested pathways for generative AI technology development
One of the looming questions, in both the short and long term, is where AI could make real impacts in clinical settings. “Variations in care are probably one of the biggest challenges that, no matter where you sit, we struggle with across medicine,” Couts said. “If AI could collate that data meaningfully into a reliable summary, I think that would take the variation away and would allow us to deliver higher-quality care.”
Interoperability is another top concern. “I want an executive summary of the data, because it is becoming impossible to get your head around this growing volume of information,” Dr. Cleveland said. “I want the computer to know everything about the patient and then give it to me in the executive summary. I think we’re close.”
The senior nursing executive agreed, saying it is imperative for developers to extend these Gen-AI data insight capabilities as soon as possible. “I would like to focus on how to expand such tools to the entire care team. How do we provide the entire care team that level of data and information to help take better care of patients, have better outcomes?”
Kerkenbush added, “I would include others in the care team that we don’t think about all the time like the family that’s taking care of the person after they’ve come home from rehab, the person at Safeway that’s selling them the food that they eat, the person who is trying to help them with their housing or paying their water bill.”
Pulling on this thread, Backus noted that patients dealing with chronic conditions and disabilities have care teams that go beyond nurses and providers to include physical, occupational, and speech therapists. She said in an ideal world, governance would include some aspect of compliance with AI-generated guidance. “If I was Queen for the day, I would come up with some tools [to ensure] everybody does the right thing, that we would have a system in place to make sure that people are using it safely and ethically.”
Conclusion
The merger of ambient listening and generative AI signals a transformative shift in healthcare documentation.
As Dr. Conant stated, “If we can continue to connect the dots to empower all those providing care to the patient such that all would have the right data at the right time to affect that patient’s care, we will make a tremendous difference.”
While hurdles and limitations exist, ongoing collaboration among tech experts, healthcare professionals, and regulatory bodies means the industry is poised for significant advancement. Embracing this paradigm promises not only streamlined operations but also a healthcare ecosystem with more sustainable and attractive workforce environments, improving quality of life and outcomes for everyone.