Why HCOs Should Deploy Private, Secure AI Chat Now
By Jay Sultan, Chief Data + Analytics Officer
As 2024 draws to a close, many healthcare organizations (HCOs) and payers still lack a solid plan for adopting AI and for protecting themselves from the unintended consequences of modern AI solutions.
Whether leaders choose to address it or not, healthcare staff are increasingly using these tools in their everyday work. New research shows that 40 percent of clinical staff and over 50 percent of non-clinical staff use generative AI on a weekly basis. Setting aside AI governance, controls on your vendors' use of AI and of your data, and many other topics, let's focus on one simple step that would benefit most HCOs: deploying a secure instance of ChatGPT or a similar generative AI tool.
This can serve as your HCO’s introductory use case for AI, offering a relatively quick, simple, and inexpensive AI solution with multiple benefits:
1. HCOs without this capability are probably experiencing constant HIPAA violations today.
- Clinical use of public chat tools involves feeding protected health information (PHI) into the tool’s servers, a problem highlighted in numerous recent articles.
- HCOs that neither provide staff with a safe version of the tool nor block access to all external tools should assume that users are repeatedly and unknowingly violating HIPAA, for example by asking the public instance of a tool to summarize a note without realizing the note contains PHI.
- The solution is to deploy an instance of the AI tool behind your own firewall. Not only does that deliver HIPAA-compliant access, it also lets you add all kinds of organizational data and business rules to the tool (see the sketch below).
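One common pattern is to stand up a model endpoint inside your own cloud tenant and route every request through it. The sketch below is one illustrative way to do that, assuming an Azure OpenAI private deployment reached through the openai Python SDK; the endpoint, deployment name, and system prompt are placeholders, not a prescription.

```python
import os
from openai import AzureOpenAI  # pip install openai

# Point the client at a privately provisioned endpoint in your own tenant,
# not the public ChatGPT service, so prompts never leave your environment.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g., https://your-hco.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="internal-gpt4o",  # hypothetical name of your private deployment
    messages=[
        # Organizational business rules ride along with every request.
        {
            "role": "system",
            "content": "You are our health system's internal assistant. "
                       "Follow our documentation standards and do not repeat "
                       "PHI beyond what the user has supplied.",
        },
        {"role": "user", "content": "Summarize the attached visit note."},
    ],
)
print(response.choices[0].message.content)
```

Because the endpoint lives inside your network boundary and is covered by your cloud provider's business associate agreement, the same controls that govern your other clinical systems can be applied to these requests.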
2. An internal ChatGPT-style tool is the perfect first use case to get your organization started with AI.
- It quickly delivers a proven, significant improvement in staff productivity at modest expense and with minimal execution risk. It is the low-hanging fruit of AI in healthcare.
- It can be the use case that lets you start building the organizational support you need, such as AI governance and a process for evaluating other AI use cases to ensure they can scale to production with a positive ROI.
3. You will benefit from the efficiency gains such a tool delivers to your workforce.
Tegria has been hired to help measure the use and impact of such a tool at a major healthcare delivery system. Here is what we have found:
- People using this cost-effective solution have increased their productivity by 5 percent.
- The single biggest group of users was clinicians, who used the tool to cut the time spent rewriting case notes, summarizing documentation, and handling other tasks that contribute to provider burnout.
- Responses can be tailored to draw on your organization's policies, procedures, and internal references, and even to use contextualized data from your core systems.
- And because the solution is deployed internally, conversations can be analyzed to identify topics where staff need additional training or to flag potential compliance violations (a brief sketch follows this list).
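To make that last point concrete, here is a minimal sketch of the kind of automated review an internal deployment enables, assuming chat transcripts are retained in an audit log. The patterns and names (PHI_PATTERNS, scan_conversation) are illustrative only; a production system would use a vetted de-identification or data-loss-prevention service rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for PHI-like strings in chat transcripts.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scan_conversation(conversation_id: str, messages: list[str]) -> list[dict]:
    """Flag messages in a stored transcript that appear to contain PHI."""
    findings = []
    for i, text in enumerate(messages):
        for label, pattern in PHI_PATTERNS.items():
            if pattern.search(text):
                findings.append({
                    "conversation": conversation_id,
                    "message_index": i,
                    "type": label,
                })
    return findings

# Example: one transcript pulled from the internal tool's audit log.
transcript = [
    "Can you summarize this note for MRN: 00482913?",
    "Sure -- here is a summary of the visit.",
]
for finding in scan_conversation("conv-001", transcript):
    print(finding)  # route to compliance review or a training dashboard
```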
Before long, patients will be shocked to learn that a public AI engine knows all about their medical history. Organizations that fail to safeguard PHI from public chat tools will lose patients' goodwill and trust.
HCOs have an important choice to make: risk continuing to unknowingly violate patient and member confidentiality while failing to realize the efficiency gains of AI, or deploy your own private, internal generative AI tool as a powerful first step toward safe, efficient, enterprise-wide use of AI.