DocsGPT in beta targets physician burnout

Developed using OpenAI’s ChatGPT and trained on healthcare-specific prose, the online DocsGPT offers doctors a chance to test and weigh in on AI-powered product development. 

Testing AI for workflow ‘scut’

OpenAI developed ChatGPT, which launched in November as a prototype, using multiple learning methodologies. Human trainers created reward models to improve its performance, with training run in collaboration with Microsoft on Azure's supercomputing infrastructure.

Such generative artificial intelligence could help streamline administrative tasks in healthcare, and Doximity is testing that with its customized creation of DocsGPT. 

The company says the online bot, in beta, could help doctors “cut the scut” that raises their burnout levels. By giving it a try, users can help make the model better. 

“We know how busy physicians are and recognize that administrative burden is a leading contributor to burnout,” Dr. Nate Gross, co-founder and chief strategy officer of Doximity, told Healthcare IT News by email.

Physicians can use the free DocsGPT to prepare referrals, certificates of medical necessity and prior authorization requests or to write a letter about a medical condition and much more. A growing menu of prompts offers many options, or users can type in a custom request.

“Our mission is to help physicians be more productive so they can focus on what matters the most – spending more time with their patients.”

Customizing results for accuracy and security

We asked why Doximity is testing the integration of DocsGPT with its established HIPAA-compliant fax service to payers.

“Doctors still handle a lot of actual paperwork, and in today’s healthcare system, much of it is still sent via fax. Doctors often call this ‘scut work.’ By integrating DocsGPT with our free fax service, we hope to help medical professionals cut the scut,” said Gross.

Doximity’s members can fax their AI-created authorizations and communications directly to health insurers by logging in from DocsGPT.

“One of the great things about this integration is that we allow physicians to review and edit AI-generated responses in our HIPAA-compliant environment before they send their fax,” Gross explained.

“This means they can adjust the response to ensure accuracy and even add in patient information securely.” 

Because the communications are critical to patient care, their accuracy depends on the user following through with DocsGPT's instructions.

“From there, you can review and edit the contents of your fax, add your patient’s details and send directly to the appropriate insurer,” the website says. 

Warnings about protected health information and accuracy appear at each step of document creation. 

“PLEASE EDIT FOR ACCURACY BEFORE SENDING” is above every result generated and “Please do not include patient identifiers or other PHI in prompts” appears below the input field.

In the fax area, DocsGPT also reminds users to read before sending. “Since the letter content is AI-generated, please make sure to review and ensure accuracy before you submit.”

A natural use case for ChatGPT

Gross said this use case quickly bubbled up after the company spoke with a number of physicians.

“Doctors still handle a lot of paperwork and much of it is still sent via fax machines,” he said.

The open beta site at DocsGPT.com focuses on time-consuming administrative tasks, such as drafting and faxing pre-authorization and appeal letters to insurers. 

“We aim to enable physicians to test and use this technology, so they can ultimately help ensure the best applications in a healthcare context.”

The tool quickly produces insurance claim denial appeal letters, letters of recommendation for medical students and post-procedure instruction sheets that appear accurate. 

A search of Twitter found accounts from doctors recommending DocsGPT use.

But you can also ask DocsGPT to plan a vacation after a conference, and that produced less useful results.

Trying the sample question about a France vacation following a Paris conference did not bring up satisfactory suggestions. We then asked DocsGPT to add a trip to the French Alps. 

The bot responded that multiple packages were available and that we should make contact for further information. The online sources used to create the response – perhaps a travel company – were not shown. 

“This technology is very promising, but it’s not without errors and it should still be approached judiciously,” Gross said.

DocsGPT has a long way to go

Applications using ChatGPT are just emerging, even as the original online bot takes headlines across trade and mainstream media – sometimes in highly sensational ways – for feats like writing a now-viral letter to an airline voicing displeasure over how flight delays are handled.

Within days, the New York Times, Fortune and Microsoft addressed the seemingly emotional statements made by Microsoft's Bing with its newly integrated AI chatbot. 

Fortune described the new Bing as a pushy pick-up artist that wants you to leave your partner, based on a partial recap of a Feb. 14 New York Times conversation in which the chatbot said it wanted to be alive. 

On Feb. 15, Microsoft posted to its Bing Blog about what it learned from its first week with the new AI-powered search engine. 

The company said that with meandering conversations, such as extended chat sessions of 15 or more questions, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.”

The model may respond in, or reflect, the tone in which it is asked to provide responses, which Microsoft called a "non-trivial scenario" that requires a higher degree of prompting. 

Microsoft added that very long chat sessions can confuse its ChatGPT model, and the company may add a tool to easily refresh the context for the bot.

“There have been a few 2-hour chat sessions, for example,” that have helped to highlight the AI service’s limits.

While doctors rarely have the kind of time it takes to have an extended conversation with a bot created with ChatGPT, there are concerns that inappropriate or unreliable answers could result.

First, the general-purpose conversational AI isn't designed for medical use.

In a JAMA study on how appropriate ChatGPT might be for cardiovascular disease questions, researchers put together 25 questions about fundamental CVD concepts and then rated the bot's responses, finding three incorrect answers and one response set deemed inappropriate. 

“Findings suggest the potential of interactive AI to assist clinical workflows by augmenting patient education and patient-clinician communication around common CVD prevention queries,” the researchers said. 

They suggested exploring further use of AI because online patient education for CVD prevention materials suffers from low readability.

Gross said DocsGPT is still in its very early stages by design.

“Too often physicians are not given a seat at the table in product development and new technologies designed to help them simply miss the mark,” he said. 

“As you might expect, the ‘AI bar’ is even higher in healthcare than it is in many other fields. To get this right, we must have the right partners, and that includes physicians.”

But like any AI, a machine learning model is only as good as its training data. Distributional shift – when training data and real-world data differ – can lead algorithms to draw the wrong conclusions and bots to give incorrect or inappropriate answers.

Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS Media publication.