With $100 million in investment backing, San Francisco-based telemental health company Brightside Health provides care for people with mild to severe clinical depression, anxiety, and other mood disorders, including those with elevated suicide risk. Mimi Winsberg, M.D., the company’s chief medical officer, recently spoke with Healthcare Innovation about the company’s concept of “precision prescribing,” leveraging data to optimize treatment plans, and using AI to help predict mental health crises.

Healthcare Innovation: I want to ask you about some research published recently in JMIR Mental Health that looks at the performance of large language models in predicting mental health crisis episodes. Before we do that, could you help set the stage by talking a little bit about your background and Brightside Health’s focus?

Winsberg: I’m a Stanford-trained psychiatrist, and my fellowship expertise was in managing bipolar disorder. I’ve been in the digital health space for about 10 years now. What I observed from treating bipolar disorder patients over the years, along with other psychiatric conditions, is that it was very helpful to have patients track their symptoms, and we could have much more success in predicting their episodes if we had a good log of those symptoms. As far back as 25 years ago, we had patients do this with pen and paper, and then with the advent of the digital health movement, it was really important to me that we be able to use some of the tech tools at our disposal to do things like remote symptom monitoring and even treatment prediction based on symptom cluster analysis.


Not all antidepressants are created equal, but oftentimes in mental health, the selection of an antidepressant is really a kind of guess-and-check process for a lot of providers. What I hoped to do with some of the tech tools that we had at our disposal was to create a database and take a more informed approach to treatment selection that takes into account everything from a patient’s current symptom presentation to things like prior medication trials, family history and so forth. So that’s what we built at Brightside, and it’s built into the backbone of our digital health platform that Brad Kittredge, our CEO, and Jeremy Barth, our CTO, created seven years ago now.

HCI: Does that involve looking not just at how this individual patient has responded to, say, different medications, but looking across the whole database and seeing how people respond and symptom clusters and things like that?

Winsberg: That’s right. It’s not based on just the individual. It’s very much based on the published literature and also on a very robust database that is probably unparalleled, in the sense that we’ve treated over 200,000 patients. We can look at patient attributes, symptom presentations, treatments, and outcomes. We can say, ‘Who else do we have that looked a lot like you, and how did they do with this treatment?’ And we can make some predictions accordingly. This is a way to approach treatment selection. We’ve published extensively in peer-reviewed journals about the success of this model. All of this is exciting, because it really helps move the needle in a field that has been, I would say, less data-rigorous than other fields of medicine.
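
The interview does not describe the engine's internals, but the "who else looked like you" idea maps naturally onto a similarity lookup over historical outcomes. Purely as an illustration, and not Brightside's actual system, a minimal sketch might look like this (column names, features, and the neighbor count are all hypothetical):

```python
# Illustrative sketch only: Brightside's precision-prescribing engine is
# proprietary and not described in the interview. This shows the general idea
# of "find similar past patients, then summarize their outcomes per treatment."
# All column names, features, and the neighbor count are hypothetical.
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# Historical records: one row per treated patient, with encoded intake features.
history = pd.DataFrame({
    "phq9_baseline":   [18, 22, 15, 20, 9, 17],
    "anxiety_score":   [10, 14, 6, 12, 4, 9],
    "prior_ssri_fail": [1, 0, 0, 1, 0, 1],
    "treatment":       ["bupropion", "sertraline", "sertraline",
                        "bupropion", "escitalopram", "bupropion"],
    "remission":       [1, 1, 0, 0, 1, 1],
})

feature_cols = ["phq9_baseline", "anxiety_score", "prior_ssri_fail"]
knn = NearestNeighbors(n_neighbors=4).fit(history[feature_cols].to_numpy())

def similar_patient_outcomes(new_patient: dict) -> pd.DataFrame:
    """Remission rates, by treatment, among the most similar past patients."""
    x = np.array([[new_patient[c] for c in feature_cols]])
    _, idx = knn.kneighbors(x)
    neighbors = history.iloc[idx[0]]
    return neighbors.groupby("treatment")["remission"].agg(["mean", "count"])

# Surfaced to the prescriber as decision support, not acted on automatically.
print(similar_patient_outcomes(
    {"phq9_baseline": 19, "anxiety_score": 11, "prior_ssri_fail": 1}))
```

As Winsberg notes below, the output of such a lookup would be surfaced to the treating psychiatrist as decision support rather than acted on automatically.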

HCI: Especially as the pandemic hit, there was a huge growth in the number of telemental health providers. How do you stand out in that field, with patients, payers, and provider groups?

Winsberg: Telemedicine 1.0 is putting a doctor and a patient in a video interface. That can solve a lot of access problems, because you’re no longer dependent on having those two people geographically co-located. It allows you to leverage providers in one area to serve an area that may have a dearth of providers. But that’s just the beginning of what telemedicine can do. As you said, a crop of companies emerged out of the pandemic that were intent on solving the access problem. We very much see that as table stakes at Brightside. We existed before the pandemic, and telemedicine was only one of our goals. What we really tried to do was take a more precise and quality approach to care.

So in terms of differentiators, one is the notion of precision prescribing, which is our proprietary language, if you will, around the data systems that we use to make treatment selection recommendations. It’s clinical decision support, so a machine isn’t deciding what treatment is best. It is surfacing that to your psychiatrist, who then uses that information to better inform their choice. But that precision prescribing engine is proprietary for Brightside and definitely a differentiator, as are many of the other AI tools that we’re implementing and actively publishing on. In terms of health systems that partner with us, we feel it’s important to show our work and to publish in peer-reviewed journals where the data can be scrutinized and objectively evaluated by anyone who’s interested. 

HCI: How does the payment landscape look? Does Brightside have partnerships with health plans or with health system organizations?

Winsberg: We have national contracts with many payer systems and we get those contracts by showing the quality in our work. They have access to data so they’re able to scrutinize our outcomes with a very informed lens, and have obviously determined that our outcomes meet or exceed the quality that they would expect in order to pay for them.

HCI: Do you have any contracts with Medicaid managed care organizations?

Winsberg: We started with commercial payers, then launched with Medicare, and we are now rolling out nationally with Medicaid as well.

HCI: Let me ask about this research published recently in JMIR Mental Health. Could you talk about how it was conducted and what it demonstrated about large language models and the implications?

Winsberg: Large language models can digest a lot of text information rather quickly and synthesize it. So when a patient lands on our website and begins to sign up for services, we have a question for everyone that says: tell us about why you’re here. Tell us what you’re feeling and experiencing. People type in anything from one sentence to many paragraphs about their reason for seeking care. That response is typically reviewed by the provider, along with other structured data.

In this experiment, we took the information typed in by patients and completely stripped it of any identifying details. We surfaced it to a set of experts who reviewed the text, along with information about whether the patient had previously made a suicide attempt. Separately, we fed the same information to a large language model, GPT-4, and asked both parties, the experts and GPT-4, to predict whether they thought the patient was likely, in the course of their care, to have a suicidal crisis.
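
The interview does not reproduce the study's prompt or protocol, so the following is only a hedged reconstruction of the setup described: de-identified intake text plus a prior-attempt flag, sent to GPT-4 for a binary judgment. The function name and prompt wording are invented for illustration.

```python
# Hypothetical reconstruction of the setup described above, not the study's
# actual protocol or prompt. Uses the OpenAI Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def predict_crisis_risk(intake_text: str, prior_attempt: bool) -> str:
    """Ask the model for the same binary judgment the human raters were asked to make."""
    prompt = (
        "You are assisting with a research task. Based on the de-identified "
        "intake statement below and the prior-suicide-attempt flag, answer "
        "with exactly one word, YES or NO: is this patient likely to "
        "experience a suicidal crisis during the course of care?\n\n"
        f"Prior suicide attempt: {'yes' if prior_attempt else 'no'}\n"
        f"Intake statement: {intake_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output for a classification-style task
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```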

What we found was that the language model approached the same accuracy and predictive ability as the trained psychologists and psychiatrists. Now, the caveat in all of this is that providers are far from perfect in their predictions; just because I’m a psychiatrist doesn’t mean I’m going to predict this correctly, but that’s the best we’ve got right now. It raises a bigger philosophical question: when you implement AI, do you expect it to be as good as humans, or to exceed humans? With self-driving cars, for instance, it has to be better than humans for us to want to implement it, right? So we take the same approach in medicine when we start to train these tools. In order to widely implement them, we would need them to be much better than humans, but what we’re seeing, at least in this example, is that we can get them to be as good as humans. And for a human to do this task is very laborious and also very emotionally draining, so having an automatic alert that maybe you wouldn’t have had otherwise can be very useful.
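
The interview does not restate the study's metrics. A claim like "approached the same accuracy" is typically checked by scoring both raters against observed outcomes; a toy illustration with invented labels, not the study's data, might look like this:

```python
# Toy illustration with invented labels, not the study's data: compare the
# model's and the clinicians' binary calls against observed outcomes.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

outcomes  = [1, 0, 0, 1, 0, 1, 0, 0]  # 1 = crisis occurred during care
clinician = [1, 0, 1, 1, 0, 0, 0, 0]  # human raters' YES/NO calls
llm       = [1, 0, 0, 1, 1, 1, 0, 0]  # language model's YES/NO calls

for name, preds in [("clinicians", clinician), ("LLM", llm)]:
    print(name,
          "sensitivity:", recall_score(outcomes, preds),
          "precision:", precision_score(outcomes, preds),
          "AUC:", roc_auc_score(outcomes, preds))
```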

HCI: In this particular use case, if you could get the tool to be really highly accurate and that would trigger an alert, how might that change the care plan? 

Winsberg: We do a lot of triaging of patients based on information we get about them at intake, for treatment selection purposes. For instance, we have a program called Crisis Care, which is intended for patients who have elevated suicide risk; it’s a particular therapy program based on the Collaborative Assessment and Management of Suicidality. When patients are enrolled in this program, they’re having more frequent, longer sessions with their therapists that specifically look at suicide risk and work through reasons for wanting to live, reasons for wanting to die, and so forth. So if a patient were identified as high risk, it would prompt a referral to a higher-acuity program.

Similarly, there are certain pharmacologic strategies that you might employ with higher risk patients. You might progress them to a tier two treatment selection, rather than beginning with a tier one. 

HCI: So, in summary, are you saying the research shows that these tools are promising, but not quite ready for deployment yet?

Winsberg: What I am saying is that we’re still keeping humans in the loop at every step. We think of these tools very much as co-pilots. They’re like a GPS rather than a self-driving car.

Another example of an AI tool that we are deploying is a scribe — a tool that can transcribe a session and then generate a provisional note for a provider. 
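
The underlying models are not named in the interview, so this is only a hedged sketch of the two-step pattern described, transcribe the session and then draft a provisional note for the provider to review; the model choices and note format are assumptions.

```python
# Hypothetical sketch of the scribe pattern described above, not Brightside's
# implementation: transcribe the session audio, then draft a provisional note
# that the provider reviews and edits before signing.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_session_note(audio_path: str) -> str:
    """Transcribe a recorded session and draft a provisional SOAP-style note."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Draft a concise SOAP-format psychotherapy progress note "
                        "from this session transcript. Flag anything uncertain for "
                        "clinician confirmation rather than guessing."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content  # provider reviews before signing
```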

Yet another example is that we offer our providers care insights. There are a lot of elements of the chart that you have to review either before or while talking to the patient. Depending on how extensive a patient’s chart is, it’s nice to have a tool that can summarize various aspects of the care for you, and LLMs are quite good at this. So we are just scratching the surface in terms of the ways that AI can enhance the quality of care delivery, as well as reduce the provider burnout that we’re seeing in spades across the country right now and across specialties.
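
Again, the implementation is not described in the interview; a minimal sketch of the chart-summarization pattern, assuming a generic chat-completion API and hypothetical chart excerpts, might look like this:

```python
# Hypothetical sketch of "care insights": condense selected chart entries into
# a brief pre-session briefing for the provider. Chart fields and the prompt
# are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_chart(chart_excerpts: list[str]) -> str:
    """Summarize chart excerpts into a short, provider-facing briefing."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this patient chart for the treating clinician "
                        "in five bullet points: diagnoses, current medications, "
                        "symptom trajectory, risk flags, and open follow-ups."},
            {"role": "user", "content": "\n\n".join(chart_excerpts)},
        ],
    )
    return response.choices[0].message.content
```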
