Trust is Healthcare's Most Valuable Commodity
Data liquidity in healthcare is exploding with the introduction of FHIR, this spring's CMS/ONC rules on open EHR APIs, and announcements of partnerships between tech giants such as Google and Microsoft and major providers like Mayo Clinic, Providence, and pharmacy chain Walgreens. Our health records have never been more free to move on our behalf, and they have never been more likely to be lost, used maliciously, or passed from place to place without our knowledge or consent.
Healthcare is moving, and must move, towards more coordinated, more flexible care if costs are to be contained and outcomes maintained and improved. We’re seeing examples of this evolution in value-based payment models, the rise of telehealth within established IDNs, and the myriad chronic disease management programs seeking to help people with chronic conditions lead better (and less costly) lives. All of these expand patients’ care networks from their existing relationship-based interactions with primary care to large, loosely connected care teams. The effectiveness of these teams depends on knowing the patients in context, and knowing means data.
The newfound openness in health data encouraged by these changes may lead to major advances in research and to improved care, but for this data to truly work for us, the patients, we must participate in its collection and curation. We must be confident that we’re in control of our care. And that requires trust.
Defining Trust
As patients, we have a finite amount of trust to give; we can only evaluate so many people, so many companies. Health companies actively compete for this limited resource, and according to recent research, they’re not doing a great job: a trust score of 52 out of 100 for health plans is failing by anyone’s gradebook. COVID-19 has created a shared reluctance to see anyone in person, much less hang out in a clinic waiting room. The long-anticipated transition to digital health has arrived on the back of a pandemic, and our ideas about building a virtual relationship with our providers are still forming.
We want to be assured that the health data we consent to release is well protected, reliable, and complete. We want to be able to take a look: make sure our record is accurate, and correct it if it is not. And we want to be in control of how and when that sensitive information is used, for our benefit or for the benefit of others.
A partner I trust in collecting, curating, and sharing my health data should:
Positively identify (authenticate) me as a patient to ensure that I have control over my own records
Allow me to view and contribute to (e.g. correct or comment on) my record without violating the integrity of what others have contributed.
Collect and act on my consent to use/share my data with particular individuals or companies
... for a particular time which I can revoke on demand
... limited to a particular part or segment of my records such as those related to a diagnosis, to a medication, to a provider, or to a job
... with special attention to data that may make me vulnerable to discrimination such as sexual health, substance abuse, or mental health.
Be completely transparent about who has my data, when it was or will be shared, and what rights I have to end that sharing
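The requirements above can be sketched as a minimal consent record. This is a hypothetical illustration, not a standard data model: the `Consent` class, its field names, and the segment labels are all assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional, Set

# Hypothetical sketch of a patient consent record capturing the
# requirements above: who may use the data, for how long, which
# segments of the record, and whether it has been revoked.
@dataclass
class Consent:
    patient_id: str
    grantee: str                              # individual or company granted access
    segments: Set[str] = field(default_factory=set)  # e.g. {"vital-signs", "allergies"}
    expires: Optional[datetime] = None        # None = open-ended, until revoked
    revoked: bool = False

    def revoke(self) -> None:
        """The patient can end sharing on demand."""
        self.revoked = True

    def permits(self, segment: str, at: datetime) -> bool:
        """Access is allowed only for consented segments, within the
        consented time window, and only while the consent stands."""
        if self.revoked:
            return False
        if self.expires is not None and at > self.expires:
            return False
        return segment in self.segments

c = Consent("patient-1", "care-team-a", segments={"vital-signs"})
now = datetime.now(timezone.utc)
print(c.permits("vital-signs", now))   # True
c.revoke()
print(c.permits("vital-signs", now))   # False
```

A real system would add the transparency piece as well: an audit log of who accessed what and when, visible to the patient.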
Implementing Trusted Data
These goals are generally at the edge of today’s systems’ capabilities. Data platforms going forward must weave the intentions of patients into their DNA if they are to reflect and respect those intentions. This is the bar trust-minded organizations must clear. It requires technologists to:
—> Connect policy to permission:
Users of data can express their data needs in computable terms, and collected consent matches patients to this policy. Mobile app permissions can illustrate this: the app declares what rights it needs to function, and the user agrees or not during install.
In a similar way, a health service might declare that it needs vital signs and allergies to provide its services. Patients might then agree with a consent, perhaps qualified by a time limit or by some exclusions, reviewing or modifying those qualifications as needed, or revoking the whole thing if they desire. Successful implementations by trust-minded organizations present policy in human terms, something the mobile app permissions model achieves with mixed success.
The HEART (Health Relationship Trust) project has done some great work scoping what might be part of a policy.
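The declare-then-consent flow above can be sketched in a few lines. The service name and scope labels here are illustrative, not drawn from HEART or any standard.

```python
# Sketch: a service declares the data scopes it needs (like a mobile
# app manifest), and consent is collected against that declaration.
# The service name and scope names are illustrative assumptions.
SERVICE_POLICY = {
    "service": "remote-monitoring",
    "needs": {"vital-signs", "allergies"},
}

def collect_consent(policy, granted_scopes):
    """Grant only the intersection of what the service asks for and
    what the patient agrees to; surface anything the patient withheld
    so the service can degrade gracefully or re-ask."""
    granted = policy["needs"] & set(granted_scopes)
    missing = policy["needs"] - granted
    return {"granted": granted, "missing": missing}

# The patient consents to vital signs but excludes allergies.
result = collect_consent(SERVICE_POLICY, ["vital-signs"])
print(result["granted"])   # {'vital-signs'}
print(result["missing"])   # {'allergies'}
```

The point of the intersection is that consent can never widen beyond what the policy declared, and the declaration can never widen beyond what the patient granted.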
—> Connect permission to data:
Once permission has been given, the policy and any qualifications added during the consent must be matched to the data itself. The details of this connection lie in how the policy itself is expressed. In a FHIR world, implementation guides may use FHIR's profiling system to communicate policy (e.g. defining the scope of "vital signs"). More generally, medical ontologies like SNOMED-CT could be used to specify categories of diagnoses or types of medication to include.
Executable policy can then be applied:
Directly to records themselves, classifying and filtering out records that don't fall under consented policy, and
To the connections in the record, including records according to the intention of the consent such as those related to a particular provider, a specific visit, or a diagnosis of a condition.
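Both modes of application can be sketched together: filtering directly on a record's category, and filtering on a connection in the record (here, the originating provider). The record shapes and category labels are simplified assumptions; a real system would work over FHIR resources and coded terminologies like SNOMED-CT.

```python
# Sketch: applying executable policy to records, two ways.
# Record fields and category names are illustrative assumptions.
records = [
    {"id": "1", "category": "vital-signs", "provider": "dr-a"},
    {"id": "2", "category": "lab-result",  "provider": "dr-b"},
    {"id": "3", "category": "vital-signs", "provider": "dr-b"},
]

def apply_policy(records, allowed_categories, provider=None):
    """Filter directly by consented category, and optionally by a
    connection in the record (here: the originating provider)."""
    out = []
    for r in records:
        if r["category"] not in allowed_categories:
            continue  # record falls outside the consented policy
        if provider is not None and r["provider"] != provider:
            continue  # consent was scoped to a particular provider
        out.append(r)
    return out

# Direct classification: only vital signs pass.
print([r["id"] for r in apply_policy(records, {"vital-signs"})])          # ['1', '3']
# Connection-based: vital signs from one provider only.
print([r["id"] for r in apply_policy(records, {"vital-signs"}, "dr-b")])  # ['3']
```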
—> Connect data to APIs:
The final step in implementing trust in a more open environment is applying these permissions (and restrictions) to the technical interfaces themselves. While in some cases this is as simple as not sending what is not allowed, clinical safety and complete data privacy demand that systems communicate what is said and what is left unsaid.
This is tricky: if you ask for all lab results and I send some along with a warning that I withheld some STD tests, I have just revealed something I was trusted to keep silent about. Better that I communicate up front (when defining the interface) that I will never talk about STD tests, so don't bother making assumptions about their absence.
This can be applied to individual fields or to categories of records. In FHIR, for instance, publishing an implementation guide for a policy would remove fields from the resources as needed and define the scope (category) of the records under the policy.
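The key design choice in the STD-test example above is that the exclusion lives in the published interface contract rather than in per-response warnings, so absence carries no information. A minimal sketch, with the endpoint and category names as illustrative assumptions:

```python
# Sketch: the interface declaration itself excludes sensitive
# categories at design time, so a consumer cannot infer anything
# from an item's absence in any particular response.
# Endpoint and category names are illustrative assumptions.
INTERFACE_SPEC = {
    "endpoint": "/lab-results",
    "never_returned": {"std-test"},   # declared up front, in the contract
}

def serve_lab_results(records, spec):
    """Return only records the interface declares it serves. No
    warning accompanies the response, because the exclusion is part
    of the published contract, not a per-request decision."""
    return [r for r in records if r["category"] not in spec["never_returned"]]

records = [
    {"id": "a", "category": "cbc"},
    {"id": "b", "category": "std-test"},
]
print([r["id"] for r in serve_lab_results(records, INTERFACE_SPEC)])  # ['a']
```

A consumer of this endpoint sees an identical response shape whether or not sensitive records exist, which is exactly the property the prose asks for.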
From Here to There
Again, most systems today implement only a portion of the controls needed to achieve this level of control and transparency. The extent to which these controls can be applied to an existing system depends on how that system handles data and security.
Leaders are thinking about:
How do we think about trust? Are we actively cultivating a culture and a public image as a trustworthy partner?
Are we leveraging the trust we’ve gathered by creating products and services that really know our customers and act for their benefit? Are we creating opportunities to interact with customers to know them even better?
Do our risk models adequately account for a radical shift in health data liquidity? Are we properly valuing acquired trust against data protection cost?
Implementers are questioning:
Where are the trust boundaries in our systems? Do systems holding health data take responsibility for its protection, or have we spread data across the landscape and delegated trust to far too many systems?
Do we have the health domain capabilities, especially in terminology and vocabulary, to understand and execute policies that use health terminology to define their conditions?
What additional touchpoints with our patients or members do we need to put them in control of their data? Are our efforts at managing consent and monitoring access visible to the people with whom we’re trying to build trust?