
As health systems nationwide adopt artificial intelligence (AI), many questions remain about its reliability, ethical implications, and impact on patient care. UC San Diego Health’s inaugural Chief Artificial Intelligence Officer Karandeep Singh, MD, MMSc, spoke with America’s Essential Hospitals about the role of AI within hospitals and his outlook on its future in the health care space.
Can you talk about your role as chief AI officer and what it entails?
There is an explosion of AI technology all around us, and we’re seeing AI seep into almost every aspect of life and into the tools we use every day. It’s easy to see this growth and get excited about the possibilities, or to become skeptical because it comes with a layer of excessive hype. Anyone who understands the realities of modern health care knows that technology by itself doesn’t fix problems.
My responsibility is to see through the hype, help us identify the real opportunities, and make sure our teams are equipped to responsibly use AI to support their work. Being a chief AI officer within a health system is about making sure that AI is used in a way that serves the needs of our patients, community, and our team members, including our faculty, staff, and learners.
On a day-to-day basis, this means I am involved in strategic discussions at the health system level, in close collaboration with operational leaders who are poised to drive changes. [We discuss] the intake and assessment of AI technologies; engaging industry and research partners; AI implementation work; governance to ensure that the AI tools we adopt are safe, secure, and transparent; and helping our health system both inform and navigate the regulatory landscape of AI.
I also help to run classes and office hours to bring our teams up to speed on ways in which they can use our secure AI platforms to support their everyday work. At the end of the day, AI is a means to an end. My goal is to make sure we get the most value out of this tool and that we do so responsibly and thoughtfully.
Alongside my chief health AI officer role, I also lead an AI lab within our Jacobs Center for Health Innovation, which gives our team the ability to study the effectiveness of AI tools and to publish our results in scientific journals. This ensures that the lessons from our operational work in implementing AI extend beyond the walls of our health system.
What is UC San Diego Health’s overarching AI strategy? How is AI used within the health system?
One way to think about AI strategy is to consider the different types of work that health systems do when they roll out AI.
Health AI projects fall into one of four buckets:
- Supporting infrastructure and platforms
- Implementing AI enhancements on top of non-AI tools
- Implementing new classes of AI tools that didn’t previously exist
- Developing new tools where current products don’t exist
All four are important to a leading AI strategy. However, finding the right match between the priorities of the health system and the types of AI projects is not always straightforward. And there are situations where an AI approach is not the right one.
A key part of our AI strategy is identifying the right type of AI approach for the important challenges facing our health system and pairing it with an evaluation plan that allows us and others to learn from our experiences.
What kind of data was used to train your models?
When we talk about AI in general, there are two common types of models. Predictive AI models are tools that are trained on health care data and use that data to generate predictions that inform patient-level or health system-level decision-making. We and others have found that a predictive model trained at one health system often doesn’t work well when used within another. For predictive AI models, the best practice is either to retrain models using your health system’s data to ensure that the predictions are accurate, or to “fine-tune” the models on your health system’s data. Fine-tuning refers to tweaking an existing model so that it works better on your patients. Any time models are trained on health system data, this work is done with appropriate ethical review and approval.
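To make the retraining-versus-fine-tuning distinction concrete, here is a minimal sketch in Python using scikit-learn’s `SGDClassifier` on synthetic data. This is an illustration under stated assumptions, not UC San Diego Health’s actual workflow; the data, features, and model choice are all hypothetical.

```python
# Minimal sketch: retraining a predictive model from scratch vs. fine-tuning
# an existing one on a local health system's data. All data here are synthetic;
# nothing below reflects a real clinical model or real patient data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic "source" health system data: 3 features, binary outcome.
X_source = rng.normal(size=(1000, 3))
y_source = (X_source @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=1000) > 0).astype(int)

# Synthetic "local" data where the feature-outcome relationship has shifted,
# mimicking why a model trained elsewhere may not transfer well.
X_local = rng.normal(size=(200, 3))
y_local = (X_local @ np.array([0.6, -0.9, 0.5]) + rng.normal(size=200) > 0).astype(int)

# Model trained at the source health system.
model = SGDClassifier(loss="log_loss", random_state=0)
model.fit(X_source, y_source)

# Option 1: retrain from scratch on the local system's data.
retrained = SGDClassifier(loss="log_loss", random_state=0)
retrained.fit(X_local, y_local)

# Option 2: "fine-tune" -- continue training the existing model on local data,
# tweaking its learned weights rather than starting over.
model.partial_fit(X_local, y_local)

print("fine-tuned accuracy on local data:", model.score(X_local, y_local))
print("retrained accuracy on local data:", retrained.score(X_local, y_local))
```

In practice, the choice between the two approaches depends on how much local data is available and how different the local patient population is from the one the model was originally trained on.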
Generative AI tools (like ChatGPT or Copilot) are usually trained on data from the internet and are not typically trained on patients’ data. However, there is a risk that private information may be entered into these tools, especially when using publicly available tools on a personal account. For generative AI models approved for use by our health system, we ensure that any private data entered into the tools is protected and remains private. We also disclose the use of generative AI to our patients to ensure that we are transparent anytime AI is used in communication or clinical documentation.
How has AI improved patient care quality and outcomes? Do you have a favorite story that illustrates this?
At UC San Diego Health, our implementation of sepsis AI has been shown to save lives and to help us more efficiently measure the quality of our sepsis care. We have studies underway looking at whether AI can save time and reduce burnout for our clinical teams.
One of my favorite AI stories from UC San Diego Health is from before I joined the organization. It involves the implementation of an AI algorithm that was designed to identify COVID-19 on chest X-rays early in the pandemic, when access to more definitive testing was very limited. [The AI tool identified] a patient in our emergency department who possibly [had] COVID-19, [and the patient was] put in isolation to prevent others from being infected. When the definitive test was finally conducted and [we] confirmed the diagnosis several days later, we had already taken measures to treat the person and prevent the spread of COVID-19. In this way, UC San Diego Health was among the first health systems in the world to use AI to support clinical care during the pandemic.
AI is a hot topic on the legislative and policy fronts. Do you have any concerns on how changes to AI regulations could impact your hospital?
The AI regulatory landscape is changing fast in various directions at both the state and federal levels. We have been engaged in policy discussions at all levels, including a public comment that we have shared with the [administration] in response to a call for ideas to shape their AI action plan. Our public comment called for more transparency of AI solutions to make it easier for health systems to understand their value and risks, streamlining of AI regulations to clarify responsibilities, and regulatory innovation that supports the safe trial of new AI use cases. Transparency helps us better assess and protect our patients from the use of ineffective AI. Separately, I worry that health system leaders will try to bolt AI onto broken systems rather than trying to reinvent those systems to work differently in the era of AI.
Are there any concerns regarding AI taking away from clinical jobs?
No. I understand why clinicians are concerned about AI taking away jobs, but we have a national shortage of physicians, nurses, and allied health professionals. Patients continue to struggle with getting timely access to care when they need it. I view AI as a technology that will optimize our current system to get more value out of it while also expanding our ability to support the care of people beyond the walls of our health system, where we currently have limited resources.
Have there been any challenges implementing AI in a hospital setting?
AI implementation moves at the speed of trust. We build that trust by listening, learning, and teaching, and by solving the problems that matter to our teams.
What is your vision for AI integration at UC San Diego and within health care in general in the coming years?
Our team’s vision is that AI will help us to create the safest and most efficient health system in the world. To get there, we need to make AI accessible across our teams and to educate them so that they can use it effectively. We also need to reimagine how we serve our patients, and the steps each of us carries out every day to get our patients the right care at the right place. And with our Jacobs Center Mission Control as the emerging “operating system” for our health system, we need to have the best system-level intelligence on hand to make the best possible day-to-day operational decisions for the patients we serve.
What do you want patients to know about AI use?
We know that our patients increasingly use AI to help them answer questions about their health. One tip I have for patients is that if you are using an AI tool to ask a medical question, always include the phrase “look it up” in your instructions. This will help ensure that the AI tool searches the web for information before it answers your question. While no method is foolproof in ensuring the accuracy of AI output, this is a simple tip that I’ve found helpful for getting higher-quality answers.
America’s Essential Hospitals is launching its Artificial Intelligence Interest Group on Sept. 25. Visit our website or contact Tanya Alteras, MPP, director of research and innovation, at talteras@essentialhospitals.org for more information.