A group of informaticians has been helping the American Academy of Pediatrics improve the quality of clinical guidelines and policies that standardize care delivery. Regenstrief Institute Research Scientist Randall Grout, M.D., M.S., and Wake Forest’s Stephen Downs, M.D., M.S., recently sat down with Healthcare Innovation to explain how the Partnership for Policy Implementation (PPI) works to eliminate ambiguity in clinical recommendations, which in turn eases the implementation of guideline recommendations by clinicians and EHR developers.
In addition to his role at Regenstrief, Grout is also the chief health informatics officer at Eskenazi Health and an assistant professor of pediatrics at the Indiana University School of Medicine.
Downs is a professor and Associate Director for Clinical Informatics at the Center for Biomedical Informatics and Vice Chair for Learning Health Systems in the Department of Pediatrics at Wake Forest University. He was the founding director of Children’s Health Services Research at Indiana University where he retains adjunct faculty status. He is co-developer of the Child Health Improvement through Computer Automation System, known as CHICA. The system helps pediatricians maximize the time they have with their patients and address care guidelines by using information gathered from electronic health records and parents to set an agenda for the appointment based on the specific needs of the child.
Healthcare Innovation: Has the role that medical associations like the American Academy of Pediatrics play in the development and implementation of practice guidelines evolved over the last several years, and has the widespread use of EHRs impacted that work?
Grout: I think the role as it applies to electronic health records has definitely changed. These associations have always tried to be an authoritative and clear voice for good evidence-based medicine, so they produced guidelines through various mechanisms, typically via a journal or a paper article. But now, as we switch to electronic health records, more and more practice is done electronically. The orders are done electronically. Much of the decision support is happening electronically. Having that at our electronic fingertips makes it more effective to implement these guidelines. That’s where something like the Partnership for Policy Implementation comes into play, to say, “Let’s take these guidelines that we are building as an expert group of pediatricians and make them so that they can be implemented into the electronic health record.” The primary work of the PPI is to help that implementation process. I think many associations and professional societies are seeing the importance of taking their recommendations and putting them into a translatable format that somebody can use at the point of care.
HCI: Did the PPI grow out of seeing issues with the guidelines not being clear enough or having contradictions?
Downs: The PPI came into being because an informatician colleague of mine, Paul Biondich, had been invited to be on one of these guideline committees. He came back from one of their meetings and said, “I don’t know how to interpret this… Everything is being given in these vague terms, like ‘you should screen children regularly’ or ‘you should give extra attention to the kids who exhibit this problem.’ And nobody knows what that means.” He was thinking in very concrete ways about how you would put this in a computer. You can’t program a computer to remind you to do something regularly. You have to decide what that actually means. So I said, “Well, why don’t we encourage them to use an algorithm, like a real formal flow diagram, to describe the care” — and they actually loved it, right? They thought this was super helpful.
At that point, we said this is probably useful for all of AAP’s guidelines. So we approached the American Academy of Pediatrics, and actually the federal government helped us a little bit. The Maternal and Child Health Bureau gave a small grant to the American Academy of Pediatrics to fund a group of informaticians to get together and start developing processes for making all of their guidelines and clinical reports follow these kinds of recommendations. I will say we were not the only ones thinking about it. An investigator at Yale by the name of Rick Shiffman had spent years thinking about and working on the issue of how you make guidelines unambiguous and easy to interpret.
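Downs’ example of “screen children regularly” illustrates the core problem: a reminder can only be coded once the committee commits to a concrete interval or trigger. The minimal Python sketch below shows that translation; the 12-month interval and the field names are hypothetical choices made for illustration, not drawn from any actual AAP guideline.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical concretization of "screen children regularly":
# the committee must pick an actual interval before a reminder can be coded.
SCREENING_INTERVAL = timedelta(days=365)  # assumed: once every 12 months

def screening_due(last_screen: Optional[date], today: date) -> bool:
    """Return True if the child is due for screening under the chosen interval."""
    if last_screen is None:
        return True  # never screened before
    return today - last_screen >= SCREENING_INTERVAL

# A child last screened 14 months ago would trigger a reminder today.
print(screening_due(date(2022, 1, 10), date(2023, 3, 15)))  # True
```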
HCI: I read in your paper on this topic in the journal Pediatrics that the informaticians assist by using a variety of tools to support guideline authorship. What kind of tools and how does that work?
Downs: One of them is clinical algorithms, as I mentioned. For guidelines that are recommending a specific flow of care, like here’s the diagnostic process or here’s the therapy process, we will produce these standardized flow diagrams. The idea is that if you show a committee a very precise description of what you think they’re recommending for care, it’s extremely useful as a communication device, because people will say, “Oh, yeah, that’s exactly what I meant,” or you’ll uncover lots of hidden disagreements in the committee. So that’s one very useful tool.
Another one was produced by Rick Shiffman at Yale called Bridge-Wiz, which is actually a web-based piece of software that helps you craft language for these key action statements that Randy mentioned so that they are precise and unambiguous. It actually asks you questions; you respond to the questions, and then it proposes different ways you might write this that would be unambiguous.
Grout: All of these tools are augmenting the informaticians’ experience in taking guidelines and putting them into practice. Sometimes it’s just a meticulous, detail-oriented eye, combined with the experience of having programmed things in an electronic health record before, and an understanding of whether what I’m reading is going to translate well. You can look at a sentence and say: if I format it in this structure, with this standardized vocabulary, these are the action commands and these are the decision words. It can help you determine: Is it a must? Is it a shall? Is it a may?
HCI: So in a way, it is kind of a linguistic challenge…
Downs: It’s definitely linguistic. And we actually have some words and phrases that are considered triggers. We don’t like “you should consider doing this,” because considering doing something is not really an action. But you see it all the time in clinical guidance. We also look for use of the passive voice, because passive voice masks who the actor is in the recommendation. So if you say, “the child should receive an antibiotic,” who’s supposed to give them the antibiotic, right? As opposed to saying the physician should prescribe an antibiotic. Anytime we see a document recommending that a physician do something with a patient or family, we want it to say who should do what to whom, and under what circumstances, right?
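To show what “who should do what to whom, and under what circumstances” looks like once it is made explicit, here is a hypothetical sketch of a structured key action statement; the fields and the example recommendation are illustrative only, not the PPI’s or Bridge-Wiz’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    """Obligation level of a recommendation: must / shall vs. may."""
    MUST = "must"
    SHOULD = "should"
    MAY = "may"

@dataclass
class KeyActionStatement:
    circumstance: str  # under what circumstances
    actor: str         # who
    action: str        # should do what
    target: str        # to whom
    strength: Strength

# The passive "the child should receive an antibiotic" hides the actor;
# restated in active, structured form (details hypothetical):
statement = KeyActionStatement(
    circumstance="child diagnosed with acute otitis media",
    actor="the physician",
    action="prescribe an antibiotic",
    target="the child",
    strength=Strength.SHOULD,
)
print(f"When {statement.circumstance}, {statement.actor} "
      f"{statement.strength.value} {statement.action} for {statement.target}.")
```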
HCI: We often write about clinical quality measures coming from CMS and other payers. I know they’re working on digitizing a lot of those measures now, and the provider organizations and ACOs say it is hugely challenging to get those in the EHRs. Are they dealing with some of the same issues as the guideline developers or different ones?
Downs: They’re extremely closely related. In fact, if you have a well-formed key action statement, it should say under this circumstance, this actor should perform this response, and that is essentially the same as a quality measure. The first part of that becomes the denominator, right? What are the circumstances? And the action becomes the numerator of any quality metric. So if you’ve got a well-formed key action statement, and you have it electronically embedded into your EHR, every time that rule fires, something belongs in the denominator, and every time the user responds you’ve got a count in the numerator. So the act of building decision support off of these recommendations automatically creates your quality metric at the same time.
Grout: Yes, I was going to say these quality measures and the recommendations really are just two sides of the same coin. So as we are trying to build a very actionable and unambiguous recommendation, the quality measure should be obvious just from looking at those exact same criteria.
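To illustrate the point that a well-formed, electronically embedded key action statement yields its quality measure for free, here is a minimal Python sketch in which each firing of the rule adds to the denominator and each documented response adds to the numerator; the adolescent depression-screening rule, the patient records, and the field names are all hypothetical.

```python
from typing import Optional

def screening_metric(visits: list) -> Optional[float]:
    """Count the denominator each time the rule's circumstance is met and the
    numerator each time the recommended action is documented."""
    denominator = 0  # visits where the decision-support rule would fire
    numerator = 0    # visits where the recommended screening was done
    for visit in visits:
        if visit["age"] >= 12:        # circumstance: adolescent visit
            denominator += 1
            if visit["screened"]:     # action: depression screen completed
                numerator += 1
    return numerator / denominator if denominator else None

visits = [
    {"age": 14, "screened": True},
    {"age": 16, "screened": False},
    {"age": 9,  "screened": False},  # rule does not fire; not in denominator
]
print(screening_metric(visits))  # 0.5
```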
HCI: But is this shift to electronic clinical quality measures really difficult for the provider groups?
Grout: Absolutely it is difficult. I think the scope and the volume of the CMS measures account for much of that difficulty. For example, in our pediatrics space, our guidelines are often targeted at a certain population, a certain circumstance, a certain disease process or something like that. So we have perhaps a narrower scope, but even within that scope, when you try to account for edge cases in a flow diagram, you can imagine the tree branching out. If the scope is general health in the United States, as it is for CMS, you can imagine so many branches, use cases and edge cases to account for that it becomes immense work to try to program it in. So you either get something very vague and broad, or something so unwieldy that it becomes nearly impossible to program. The sheer complexity of trying to capture something so wide-ranging and so detail-oriented is certainly a monumental task.
Downs: I think one of the other issues that’s probably important is that people are often struggling to find the data that are needed to do these quality metrics. They’ll say, “Well, we don’t measure that.” And that’s why, from the PPI standpoint, what really needs to happen is you have to go upstream. You have to say, “OK, if we have decided that what’s really important is we’re going to screen all teenagers for depression, then we have to go upstream and have a way to capture that information.” And to our earlier point, as long as you’re going to do that, why don’t you build a decision support system that will remind people to do the depression screening at every visit? Then your decision support system is capturing your denominator and numerator.
HCI: So you’re saying that they should have the reminder to do the action first, and then you can measure whether it’s getting done often enough?
Downs: Exactly. This is my whole argument for CHICA. If this is important enough to measure, then it’s an important enough thing to go upstream and work on improving it. Then measuring it is not a big deal because you already built that into your system in order to improve it. That is not the way the system currently works. The way the system currently works is that somebody decides, here’s a quality metric, and the ACOs and clinic people, their hair catches on fire because they say now we have got to work on improving this thing, and then they drop all the other balls that they’re carrying and focus on that thing. We think that if we went upstream, we could simplify things.