Dana-Farber Cancer Institute shares lessons on using secure LLMs

Reading Time: 4 minutes

Dana-Farber Cancer Institute built a secure, private exploratory environment to evaluate, test and deploy large language models for applications outside direct patient care, such as clinical and basic research and operations.

The provider organization overcame governance, ethical, regulatory and technical challenges, and deployed a secure API to enable its developers to embed AI into their software applications. The organization also trained its workforce on proper and secure LLM use, reskilled and upskilled where necessary, and worked on increasing adoption.
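As a rough illustration of what embedding AI through such a secure API might look like, here is a minimal Python sketch. The gateway URL, header and payload shape are hypothetical assumptions for illustration; the article does not describe Dana-Farber's actual interface.

```python
# Hypothetical sketch: calling an institution-hosted LLM gateway instead of a
# public endpoint, so requests stay inside the secure environment. The URL,
# header and payload shape below are illustrative assumptions.
import os
import requests

GATEWAY_URL = "https://llm-gateway.example.org/v1/chat"  # hypothetical internal endpoint

def ask_llm(prompt: str) -> str:
    """Send a prompt through the governed gateway and return the reply text."""
    resp = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {os.environ['LLM_GATEWAY_TOKEN']}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]

if __name__ == "__main__":
    print(ask_llm("Draft a one-paragraph summary of our onboarding checklist."))
```

Routing every call through one audited gateway, rather than letting each application hit a public endpoint directly, is what makes workforce-wide governance tractable.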

Renato Umeton is director of AI operations and data science services at Dana-Farber Cancer Institute. He holds a doctorate in mathematics and informatics. Healthcare IT News spoke with Umeton about his AI work and got a sneak preview of his case study session on the subject at the HIMSS AI in Healthcare Forum, scheduled for Sept. 5-6 in Boston. The session will focus on mitigating the risks of LLMs in healthcare.

Q. What are some of the biggest opportunities – and challenges – for large language models in healthcare today?

A. The focus of the session is the private, secure and HIPAA-compliant implementation of large language models in healthcare, specifically for Dana-Farber Cancer Institute’s workforce. The main point of the session is to discuss the challenges and lessons learned in integrating these advanced AI tools into research and operational tasks, while explicitly excluding direct clinical care (for example, using them to treat, diagnose, or drive or inform clinical management).

This is highly relevant in today’s healthcare landscape as AI permeates a growing number of healthcare software products, and everyone – from clinicians to patients and staff – can benefit from understanding how to capture that potential safely and effectively.

In the short term, we are aiming for use cases that improve efficiency. In the long term, it is our hope that better data and AI will lead to improved practices and patient outcomes.

Our journey to operationalize GPT-4 came with significant ethical, legal, regulatory and technical challenges.

By sharing our experiences and the framework we developed for implementation of AI, we aim to provide insights for other healthcare organizations considering similar deployments. This is particularly pertinent as the industry grapples with the dual imperatives of innovation and patient safety, making it crucial to establish robust governance and guidelines for AI use.

Q. What is an example of your work in action at your organization?

A. The primary technology discussed in our session is GPT4DFCI, a private, secure, HIPAA-compliant generative AI tool based on GPT-4 models. You can think of GPT-4o as the central layer of this application. The next layers are supporting AI models that analyze all data flowing into and out of the model to filter dangerous content, such as harmful language or copyrighted software code.

Outside of that is a layer that logs everything our users do with this technology and allows auditing. Finally, the outermost layer is a simple, ChatGPT-like user interface with links to training materials, a ticketing system for user support and a dedicated wiki page where users can read more.
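To make the layering concrete, the sketch below wraps a central model call with a content check on the way in and out, plus an audit log, mirroring the structure described above. The function names and the keyword blocklist are toy assumptions; in GPT4DFCI the filtering is done by supporting AI models, not a word list.

```python
# Illustrative sketch of the layered design: filter -> model -> filter -> audit.
# All names and the blocklist check are assumptions for illustration only.
import datetime
import json

BLOCKLIST = ("harmful-term",)  # toy stand-in; the real filters are AI models

def content_filter(text: str) -> str:
    """Raise if the (toy) safety check trips; otherwise pass the text through."""
    if any(term in text.lower() for term in BLOCKLIST):
        raise ValueError("blocked by content filter")
    return text

def audit_log(user: str, prompt: str, reply: str) -> None:
    """Append an auditable record of who asked what, and what came back."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "reply": reply,
    }
    with open("audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

def governed_completion(user: str, prompt: str, model_call) -> str:
    """Central model call wrapped by filtering (in and out) plus audit logging."""
    reply = content_filter(model_call(content_filter(prompt)))
    audit_log(user, prompt, reply)
    return reply
```

Because the filtering layer sits on both the input and the output paths, unsafe content is caught whether it originates with the user or with the model.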

This technology is being used to assist in various non-clinical tasks, such as extracting and searching for information in notes, reports and other documents, as well as automating repetitive tasks and streamlining administrative documentation.
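For instance, here is a minimal sketch of the document-extraction pattern, with a stand-in for the model call so it runs end to end. The prompt format and JSON-keyed output are assumptions for illustration, not GPT4DFCI specifics.

```python
# Hypothetical example of the non-clinical extraction use case: prompting the
# model to return structured JSON from an administrative note.
import json

NOTE = ("Meeting notes, 2024-05-01: Facilities requested three additional "
        "badge readers for Building B; estimated cost $4,200; owner: J. Smith.")

prompt = (
    "Return only JSON with keys 'request', 'cost' and 'owner', "
    "extracted from this note:\n\n" + NOTE
)

def fake_model(_: str) -> str:
    """Stand-in for the governed gateway call, so the sketch runs end to end."""
    return '{"request": "three badge readers", "cost": "$4,200", "owner": "J. Smith"}'

fields = json.loads(fake_model(prompt))
print(fields["owner"])  # -> J. Smith
```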

Q. What are some takeaways you hope session attendees will learn and be able to apply back at their provider organizations?

A. First, we hope attendees will understand the importance of establishing a comprehensive AI governance framework for the careful deployment of AI technologies in healthcare. This includes forming a multidisciplinary governance committee, like our AI Governance Committee, to oversee the implementation, address ethical concerns and ensure compliance with evolving regulations.

By involving diverse stakeholders, including legal, clinical, research, technical and bioethics experts, as well as patients, organizations can create policies that balance innovation with patient safety and data privacy.

Second, we aim for attendees to recognize the value of a phased and controlled rollout of AI technologies. Our experience with GPT4DFCI highlights the potential benefits of limiting clinical AI use to IRB-approved clinical trials and institute-sanctioned pilots.

This approach allows for iterative improvements based on learnings from controlled studies and helps identify and address potential issues early on. As far as non-clinical use cases are concerned, there is significant value in providing comprehensive training and support so users can learn from one another and use the technology effectively and responsibly.

By adopting a cautious and phased AI strategy, we believe other organizations can maximize the benefits of AI while minimizing associated risks.

Attend this session at the HIMSS AI in Healthcare Forum, scheduled to take place Sept. 5-6 in Boston. Learn more and register.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.
