By MICHAEL MILLENSON

The latest draft government strategic plan for health information technology pledges to support health information sharing among individuals, health care providers and others “so that they can make informed decisions and create better health outcomes.”

Those good intentions notwithstanding, the current health data landscape is dramatically different from when the plan's organizational author, the Office of the National Coordinator for Health IT, was formed two decades ago. As Price and Cohen have pointed out, entities subject to federal Health Insurance Portability and Accountability Act (HIPAA) requirements represent just the tip of the informational iceberg. Looming larger are health information generated by non-HIPAA-covered entities, user-generated health information, and non-health information used to draw inferences about treatment and health improvement.

Meanwhile, the content of health information, its capabilities, and, crucially, the loci of control are all undergoing radical shifts due to the combined effects of data democratization and artificial intelligence. The increasing sophistication of consumer-facing AI tools such as biometric monitoring and web-based analytics is being seen as a harbinger of “fundamental changes” in interactions between health care professionals and patients.

In that context, a framework of information sharing I’ve called “collaborative health” could help proactively create a therapeutic alliance designed to respond to the emerging realities of the AI age.

The term (not to be confused with the interprofessional coordination known as “collaborative care”) describes a shifting constellation of relationships for health maintenance and sickness care shaped by individuals based on their life circumstances. At a time when people can increasingly find, create, control, and act upon an unprecedented breadth and depth of personalized information, the traditional care system will often remain a part of these relationships, but not always. For example, a review of breast cancer apps found that about one-third now use individualized, patient-reported health data obtained outside traditional care settings.

Collaborative health has three core principles: shared information, shared engagement, and shared accountability. They are meant to enable a framework of mutual trust and obligation with which to address the clinical, ethical, and legal issues AI and data democratization are bringing to the fore. As the white paper AI Rights for Patients noted, digital technologies can be vital tools, but they can also expose patients to privacy breaches, illegal data sharing and other “cyber harms.” Involving patients “is not just a moral imperative; it is foundational to the responsible and effective deployment of AI in health and in care.” (While “responsible” is not defined, one plausible definition might be “defensible to a jury.”)

Below is a brief description of how collaborative health principles might apply in practice.

Shared Information

While the OurNotes initiative represents a model for co-creation of information with clinicians, important non-traditional inputs that should be shared are still generally absent from the record. These might include not just patient-provided data from vetted wearables and sensors, but also information from important non-traditional providers, such as the online fertility companies often accessed through an employee benefit. Whatever is in the record, the 21st Century Cures Act and subsequent regulations addressing interoperability through mechanisms such as Fast Healthcare Interoperability Resources (FHIR) have made much of that information available for patients to access and share electronically with whomever they choose.
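To make that mechanism concrete, here is a minimal sketch of how a patient-authorized app might retrieve records over a FHIR REST API. It is illustrative only: it points at the public HAPI FHIR test server as a stand-in endpoint, and the patient ID and access token are hypothetical placeholders (in practice the token would come from a SMART on FHIR OAuth2 authorization flow).

```python
import requests

# Hypothetical values: a real app gets the base URL from the provider's
# FHIR endpoint and the token from a SMART on FHIR authorization flow.
FHIR_BASE = "https://hapi.fhir.org/baseR4"  # public HAPI test server, for illustration
PATIENT_ID = "example"                      # placeholder patient ID
ACCESS_TOKEN = "..."                        # placeholder bearer token

headers = {
    "Accept": "application/fhir+json",
    "Authorization": f"Bearer {ACCESS_TOKEN}",
}

# Fetch the patient's demographic record.
patient = requests.get(f"{FHIR_BASE}/Patient/{PATIENT_ID}", headers=headers).json()

# Search for that patient's laboratory results (Observation resources).
labs = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "laboratory"},
    headers=headers,
).json()

print(patient.get("name"))
print(f"{labs.get('total', 0)} laboratory observations returned")
```

Because FHIR returns standard JSON resources, the same data the provider's EHR exposes can be passed along by the patient to any app or person they choose.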

Provider sharing of non-traditional information that comes from outside the EHR could be more problematic. So-called “commercially available information,” not protected by HIPAA, is being used to generate inferences about health improvement interventions. Individually identified data can include shopping habits, online searches, living arrangements and many other variables analyzed by proprietary AI algorithms that have undergone no public scrutiny for accuracy or bias. Since providers’ use of such data is often motivated by value-based payment incentives, voluntarily disclosing that use would help distance clinicians from a questionable form of surveillance capitalism.

Shared Engagement

AI engines are being trained to parse the medical literature, outcomes databases, and patient information to make diagnostic and treatment recommendations. The companies controlling these engines intend to market the information for clinician use, but it is hard to imagine, whether from a practical standpoint or under the legal standard of informed consent, that this clinically personalized information will remain closely held. The doctor-patient relationship is inevitably becoming a doctor-patient-AI relationship, with AI necessitating a recognition of patients as “true partners.”

For example, some sophisticated patients are already using generative AI to simplify a lengthy medical record or summarize a complex journal article. (See the #PatientsUseAI hashtag.) Similarly, some clinicians are using these same tools to reduce their workload by summarizing data and discovering patterns from patient encounters. Shared engagement asks patient and doctor not only to engage fully with each other, but also to be transparent about any engagement with AI. This kind of proactive approach to AI could confer a degree of legal protection on practitioners, as well as help clinicians forthrightly confront issues of implicit bias and equity.
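For illustration, here is a minimal sketch of the kind of summarization step described above, written against the OpenAI Python SDK. The model name and the visit note are placeholders, and any real use would need de-identification or patient authorization before records are sent to a third-party service.

```python
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder excerpt; a real record must be de-identified or authorized
# before it is sent to any third-party AI service.
visit_note = """
Patient seen for follow-up of type 2 diabetes. A1c 8.2%, up from 7.4%.
Metformin increased to 1000 mg twice daily. Discussed diet and exercise.
Return in 3 months for repeat labs.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Summarize this clinical note in plain language for a patient."},
        {"role": "user", "content": visit_note},
    ],
)

print(response.choices[0].message.content)
```

The transparency principle suggests that when either party relies on a step like this, the other should know about it.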

Meanwhile, clinicians tempted to dust off their “Please Don’t Confuse Your Google Search With My Medical Degree” mugs should consider that AI may make better diagnoses and also have a better bedside manner.

Shared Accountability

While clinicians increasingly face financial incentives designed to improve the outcomes of care, an important question is the extent to which giving patients more power to manage their health should also be accompanied by financial incentives. Or is the ultimate bottom line – one’s personal health and welfare – adequate? One approach might be to accompany the trust enabled by shared information and engagement with a formal doctor-patient compact based on the enhanced autonomy model suggested by medical ethicists Quill and Brody. Their model envisions an explicit collaboration based on the medical evidence, the patient’s preferences and values, and the physician’s experience.

With the rapid changes occurring in the volume, sophistication and spread of health information, from the inpatient arena to the iPhone, effective sharing will require more than technological tweaks or narrow regulatory responses. It will, instead, require a wholesale reimagination of roles, rules and relationships, particularly regarding the interactions between doctor and patient, but also with other stakeholders, such as insurers, employers and non-traditional health service providers. There are certainly many barriers to be addressed, including information overload and reimbursement issues. Nonetheless, as AI and data democratization undermine old information asymmetries, and as financial incentives increasingly value maintaining health as well as providing treatment, the collaborative health concept can serve as a framework for building a durable new partnership structure.

The potential rewards for embracing this approach go beyond possibly avoiding counterproductive regulation or legal battles. The democratization of information will diminish the “magic, mystery, and power” of medicine, noted one digital health pioneer, but it will “bolster the cognitive and moral” pillars of the profession.

Michael L. Millenson is President of Health Quality Advisors LLC and a regular contributor to THCB. This piece originally appeared on the Bill of Health blog.
