FHIR Chat · Meaty meeting today - see minutes! · patient empowerment

Stream: patient empowerment

Topic: Meaty meeting today - see minutes!


view this post on Zulip Dave deBronkart (Jul 09 2020 at 18:24):

Lots of robust discussion especially on @Jan Oldenburg and @Maria D Moen 's Patient Contributed Data white paper project! Good foundation thinking on issues of trust. Please read!
https://confluence.hl7.org/display/PE/2020-07-09+Patient+Empowerment+Minutes

Did you know some are saying that if a patient is just a courier for data from another system, they're going to consider it patient-generated information? That's nuts, but we need to explain why. We can be world thought leaders here.

I sure want to be involved in the project; anyone else would be welcome. As always I just need someone to lead & organize.

view this post on Zulip Cooper Thompson (Jul 09 2020 at 18:55):

One of the problems with the patient-as-a-courier is that some (small minority) of patients may maliciously alter their own data in transit. The common example is patients seeking opioids or other drugs who may remove those drugs from their chart in transit so that they can request more. There are mitigations to this, such as PDMP integrations for controlled drugs, and potentially including a digital signature from the source organization so that the receiving organization can be sure the data hasn't been tampered with in transit. I'm not saying it's good or bad - just providing an example of why things are the way they are.

view this post on Zulip Cooper Thompson (Jul 09 2020 at 18:56):

It's really that a few bad apples have spoiled the patient-mediated exchange environment for everyone :(, and that means it's going to take a lot of work (and probably complexity) to un-spoil it.

view this post on Zulip Dave deBronkart (Jul 09 2020 at 19:00):

Yes and unless I am misremembering because I can’t go look right now, that is explicitly mentioned by Lloyd in the minutes. Trust is essential.

An equally compelling issue among patient advocates is the high incidence of major errors entered into the chart by providers. So I would reject a suggestion that work on patient-contributed data should wait until reliability can be solved. The issue is valid but already exists, so it is no reason not to move into this important territory.

view this post on Zulip Josh Mandel (Jul 09 2020 at 19:58):

Verifiable data is critical. This is one of the areas I've been digging into with "verifiable credentials" (e.g. for COVID 19 test results). There's a great opportunity to combine FHIR with a set of emerging specifications from W3C to support this kind of trust.

view this post on Zulip Dave deBronkart (Jul 10 2020 at 18:31):

@Jan Oldenburg @Maria D Moen I'm tagging you to be certain you see Josh's note - anything that gets his attention (plus "critical") is of drop-everything significance IMO :-)

It will benefit our WG greatly if we position ourselves (via your approved White Paper project) out front in thought leadership on this. So I'm very interested. As you both know, this goes to the CORE of the problem of people thinking patient-sourced information should not be trusted.

view this post on Zulip Dave deBronkart (Jul 10 2020 at 18:33):

On a related note, 20 years ago, when Salesforce.com was attacking big-iron SAP and Oracle, the big guys fought back saying "You can't trust the cloud." SF's answer was not to just say "Sure you can!" but eventually to totally own the conversation, demonstrating mastery of the issues (and also transparently publishing trust.salesforce.com, which at the time was less fancy than today - it was ruthlessly complete realtime data on uptime and downtime incident status).

view this post on Zulip Dave deBronkart (Jul 10 2020 at 18:35):

I don't recall the specifics but I think somewhere in the 2010-2012 region something catastrophic hit one of the big boys, demonstrating that they could no longer claim purity or even superiority. Anyway, this is a great chance for the patient WG to get aligned with thought leaders from W3C.

view this post on Zulip Michael van Bokhoven (Jul 12 2020 at 22:56):

Hi - I have to ask, has anyone in this discussion investigated cryptographically signing this data, such that it's readable by the patient (and anyone else who obtains it), but that it's provable that its content has not been tampered with since it was generated by the originating system?
Edit - @Dave deBronkart mentioned 'provenance & trust' in the minutes, which strongly suggests signing. I might have been a bit late on that one! I'm definitely interested in the details of any proposals around that though.

view this post on Zulip Lloyd McKenzie (Jul 13 2020 at 02:28):

Provenance supports signing. The challenge is that digital signatures require a public key infrastructure that doesn't exist all the places it would need to.

view this post on Zulip Lloyd McKenzie (Jul 13 2020 at 02:29):

Also, signatures would require strict data fidelity - any system that doesn't retain every element, code translation and extension would break the signature - and given that most systems use legacy data stores, few can guarantee absolute data fidelity.
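Lloyd's fidelity point is easy to demonstrate. Below is a minimal Python sketch (the canonicalization scheme, the resource content, and the extension URL are all invented for illustration; real FHIR signing would need an agreed canonical serialization): a hash-based "signature" computed by the source system stops verifying the moment a downstream system drops a single extension.

```python
import hashlib
import json

def canonical_hash(resource: dict) -> str:
    # Sketch canonicalization: sorted-key compact JSON. Real FHIR signing
    # would need an agreed canonical form across json/xml/turtle encodings.
    canonical = json.dumps(resource, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Resource as exported (and "signed") by the source system.
original = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "718-7"}]},
    # Hypothetical extension attached by the source system.
    "extension": [{"url": "http://example.org/fhir/ext-demo", "valueString": "x"}],
}
signed_hash = canonical_hash(original)

# The same resource after a round trip through a system that silently
# dropped the extension -- the "data fidelity" loss described above.
lossy = {k: v for k, v in original.items() if k != "extension"}

assert canonical_hash(original) == signed_hash
assert canonical_hash(lossy) != signed_hash  # the signature no longer verifies
```

Any element, translation, or extension lost in storage produces the same failure, which is why legacy data stores make whole-resource signatures fragile.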

view this post on Zulip Dave deBronkart (Jul 13 2020 at 12:20):

Thanks, @Michael van Bokhoven and @Lloyd McKenzie ... is there any way out of this, or must we just do the best we can with imperfect infrastructure, and stay mindful of how things SHOULD be ideally?

I can't help but think of how many limitations arise from our archaic infrastructure, and how those limitations don't necessarily exist in a (non-legacy) patient-generated data store like the Sovereignty Network that @olivierkarasira is using. It's so interesting: traditional providers say "We're not sure we trust data from patients," but will we find a day where the patient data store says "Wait, you're from one of those legacy systems - we're not sure we trust YOU"?

view this post on Zulip John Moehrke (Jul 13 2020 at 13:31):

This is a case where we know the standards, we know how to make it work... but Digital Signatures require an identity system that is national. That is the thing holding this back.

view this post on Zulip Josh Mandel (Jul 13 2020 at 13:45):

I don't think they require a national identity system. I think they require systems that work in their own various contexts of use. Often times this is small and local, but in general it is global -- which is to say there is a strong need for anyone to be able to publish facts that others can review and verify. This implies an open and decentralized identity system.

view this post on Zulip Debi Willis (Jul 13 2020 at 13:53):

The CIO of the medical system where I get my care told me THEY enter all data into their EHR because they do not trust patient-contributed data (because patients don't understand enough to answer questions properly). Two minutes later, he said he did not want patients to retrieve their data via FHIR applications because "There are so many errors in them that insurance companies are angry with us and we don't want patients to be angry at us also when they see the errors." I thought that was a bit ironic. They only trust what they enter, but also know it is so wrong that they get in trouble because of it. And this is a VERY large health system. As patients, we need to be able to contribute data and forward information we received from other providers. If they want to verify a CCD that I forwarded from another doctor, they can call and have it verified. The problem I am seeing is that they do not get data from other health systems at all, or not in a timely manner if the health system is a competitor. Most of the providers that I forward data to are really very happy to get it.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:00):

Josh Mandel said:

I don't think they require a national identity system. I think they require systems that work in their own various contexts of use. Often times this is small and local, but in general it is global -- which is to say there is a strong need for anyone to be able to publish facts that others can review and verify. This implies an open and decentralized identity system.

I don't think you said anything different. The point is that if a source doctor signs a document, that signature can be technically proven to have not been broken by anyone. What is missing is a method for the recipient to have trust in the identity of the signer. Anyone can create a certificate and sign data, and that anyone could include in their certificate that they are chief surgeon at Mayo Clinic. This certificate will sign data just like a legitimate certificate, at the signing step it is indistinguishable. So, it is trust in identity that is key.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:01):

I specifically said an identity system that is national, not an identifier that is national. A system. And such a system is often a federation.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:01):

Further, we already have a system almost exactly like this in the DirectTrust identity 'system'.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:04):

Using the DirectTrust managed federation (a trust bundle, made up of certificate authorities), one can look at the signature and see that it is not broken, then look at the identity and determine that it is trustworthy. Meaning that there are claims through the DirectTrust system that can be trusted. The certificate content can be relied upon. The DirectTrust 'system' does not prevent a malicious user from creating a 'chief surgeon at Mayo' certificate; but that certificate will not be within the trust network of the DirectTrust managed federation.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:07):

The problem is: first, these certificates are only for S/MIME signature and encryption, so standalone signatures are not part of the acceptable use of these certificates. That is not that huge of a problem, as the above-mentioned Provenance.signature element does support signatures by mime-type; so the signature can be an S/MIME signature block. This presents a replication-of-data problem, but it does eliminate the encoding problems we have with FHIR json vs. FHIR xml vs. FHIR turtle vs. lossy servers.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:08):

The second problem is that most of the certificates issued by DirectTrust are not individual, but are organizational. This is a benefit and a curse. It is a benefit as it works really well for an organization signing data as it leaves the organization (via API), signature indicating that organization XYZ exported this data.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:11):

Lloyd McKenzie said:

Also, signatures would require strict data fidelity - any system that doesn't retain every element, code translation and extension would break the signature - and given that most systems use legacy data stores, few can guarantee absolute data fidelity.

The other solution to this problem is that any data that is imported by someone (patient, or organization) is maintained in original form with the original signature; it can be imported into component parts with Provenance back to the original form, and that Provenance would also claim that it had validated the integrity and authenticity of the data prior to import. Thus the data could be processed internally in the best form for internal processing, provided in original form upon need, and the provenance chain remains available.

view this post on Zulip Lloyd McKenzie (Jul 13 2020 at 14:12):

We have technology that can give strong computable trust, but it's hard to implement. And it's rarely used in the real world because in most cases there's a sufficient degree of 'real' trust. Electronically shared prescriptions don't generally have digital signatures. Data faxed or mailed could be faked. However, the paper trail, audit logs and professional consequences for "bad actors" generally discourage people from acting improperly.

One of the challenges we have in the patient space is that patients have no licenses they can lose or professional societies that can sanction improper behavior by patients and care-givers. So one of the levers that helps create trust in provider-to-provider communication isn't there. However, that's really about outright fraud/misrepresentation.

If the issue is just "data that's not right", then there's certainly a possibility of patients measuring data improperly or capturing data improperly. And depending on the patient, the risk can certainly be higher than for a trained professional. On the other hand, for properly trained and motivated patients, their data quality is likely higher because they have the motivation to get it right and don't have other demands on their time that would distract or lower quality.

A greater issue is "lack of consistency". A given practitioner knows their own convention for answering certain questions and thus knows what those answers 'mean'. As soon as they start getting data from other sources, the degree of consistency is less - and there's a desire to stick that data in a corner. However, even there, there's at least a roughly consistent set of training that clinicians have had that creates a degree of consistency. With patients, that training isn't there, so the risk of inconsistency is higher and the desire to compartmentalize that data is higher.

I think what we need to do is support "integrated compartmentalization" - where patient data can be seen alongside other data, but easily distinguished and - as needed - filtered (to highlight or suppress) based on what's needed for the analysis.
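One way "integrated compartmentalization" could work mechanically is tagging at the source and partitioning at read time. A hypothetical Python sketch (the tag system and code are invented for illustration, not taken from any published IG):

```python
# Assumed convention: patient-supplied resources carry a meta.tag marking
# their origin, so a viewer can show them alongside clinical data but
# highlight or suppress them on demand.
PATIENT_SOURCE_TAG = {"system": "http://example.org/tags", "code": "patient-supplied"}

def is_patient_sourced(resource: dict) -> bool:
    tags = resource.get("meta", {}).get("tag", [])
    return any(t.get("system") == PATIENT_SOURCE_TAG["system"]
               and t.get("code") == PATIENT_SOURCE_TAG["code"]
               for t in tags)

def partition(resources):
    """Split a mixed result set into patient-sourced and clinician-sourced."""
    patient, clinical = [], []
    for r in resources:
        (patient if is_patient_sourced(r) else clinical).append(r)
    return patient, clinical

observations = [
    {"resourceType": "Observation", "id": "bp-office"},
    {"resourceType": "Observation", "id": "bp-home",
     "meta": {"tag": [dict(PATIENT_SOURCE_TAG)]}},
]
patient_obs, clinical_obs = partition(observations)
```

The same filter could drive highlighting rather than separation, so both streams stay visible in one integrated view.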

view this post on Zulip John Moehrke (Jul 13 2020 at 14:13):

Second solution is that when data that have been imported are later exported, they claim they are exporting their data with Provenance statements back to their import function and the original source. Thus we don't worry about being able to produce exactly the same as what you imported, but rather a chain of Provenance that could be investigated if there is question or concern.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:21):

Not all patients are upstanding members of society. There is much drug-seeking behavior, and other abuse, because that is where the drugs and money are. These use-cases deliberately modify data to their advantage. These modifications are hard to distinguish from an upstanding member of society trying desperately to get their medical records corrected. We must not just think about the upstanding member of society who needs the right thing to happen; we must also be concerned about preventing inappropriate things from happening. Security is the science of doing BOTH. This is why an identity system is needed, and why an identity system is hard.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:26):

Note that some systems (the VA) are heading toward a case where patient-sourced data are managed in an integrated, compartmentalized data set, where the data is managed by the patient and is accessible through a FHIR API by the clinicians. Early days, so it is not without bumps in the road, but it looks much closer to the ideal of data equally usable in the clinician workflow. It is not clear yet how the questions of data integrity and authenticity will be managed; we are just in the early days of empowering the patient (veteran). The VA (and DoD) do have a defined cohort, so identity is less of a concern.

view this post on Zulip Debi Willis (Jul 13 2020 at 14:30):

I would imagine that providers have mechanisms to check whether a patient is drug-seeking? I would not think they would simply order controlled substances based on information a patient sent to them.

view this post on Zulip John Moehrke (Jul 13 2020 at 14:34):

You would think so... hence they tend to approach the emergency room, where the doctor does not have much time to do the research and has less capable tools to do the search. (Milwaukee, Racine, and Kenosha have had an HIE network for decades for the sole reason of enabling the ED workflow to detect these; it worked really well.) Other workflows include clinicians who are known not to follow the rules... Fraud detection is a main feature of HIEs.

view this post on Zulip Lloyd McKenzie (Jul 13 2020 at 14:36):

There are a variety of motivators for data misrepresentation - patients who "know" they have/need X, patients with psychiatric issues (diagnosed or not). Some of those motivators are easier to detect/protect against than others. However, there are bad actors on the clinician side too. My key message is that we don't necessarily need to get to guaranteed computable trust. But we do need to think about how to establish an 'acceptable' level of trust. The system has long worked without guarantees.

view this post on Zulip Hamish MacDonald (Jul 13 2020 at 22:52):

Lloyd McKenzie said:

...There's certainly a possibility of patients measuring data improperly or capturing data improperly. And depending on the patient, the risk can certainly be higher than for a trained professional. On the other hand, for properly trained and motivated patients, their data quality is likely higher because they have the motivation to get it right and don't have other demands on their time that would distract or lower quality.... A greater issue is "lack of consistency".... With patients, that training isn't there... Support "integrated compartmentalization" - where patient data can be seen alongside other data, but easily distinguished and - as needed - filtered....

@Lloyd McKenzie You make some excellent points. Apologies for abbreviating a few of them above. The trick is to be able to reward patients who are motivated to get it right, and turn that earned reputation into trusted data. The data is still marked as "patient-derived", but also comes with an earned reputation/trust score on their overall compiled data set. And this should all be wrapped up within patient training, so they can get better. This can be provided, with financial consideration, by Patient Advocates like @Morgan Gleason and Stacy Hurt and Regina Holliday, as well as clinicians who have always been good patient communicators (especially retired ones) with so much knowledge that can be tapped. That will take an ecosystem that aligns incentives - because there ain't no billing code for this stuff! @Dave deBronkart

view this post on Zulip Lloyd McKenzie (Jul 13 2020 at 23:02):

That would be an interesting ecosystem indeed - moving from one that impedes patient data contribution at all to one that rewards it based on quality would be a radical shift. And having practitioners evaluate their 'clients' could also prove interesting (both from a 'what's the incentive to bother' as well as 'what are the business ramifications of down-rating a lucrative patient'?). As well, quality isn't necessarily something that will be consistent across a range of data. Someone might be good with blood glucose levels, but bad with blood pressures. Or good with blood pressures using their old device but bad with the new one. Finally, there's the question of "how to know?" Some sorts of measures you can evaluate based on consistency with office measurements (though there are valid reasons for home and office values to vary - e.g. blood pressure). You could also evaluate consistency of reporting. But for other types of self-reporting, evaluating truth/accuracy could be challenging.

view this post on Zulip Abbie Watson (Jul 14 2020 at 04:44):

A reputation/trust score sounds like an accounting system. Which makes me think of Accountable Care Organizations. We tend to think of ACOs in terms of health networks, insurers, nursing home networks, etc. But at their core, ACOs are simply about... accountability, right? Maybe that accountability could be as simple as submitting receipts for over-the-counter medications to the local patient ACO coop / credit union.

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 06:52):

Regarding the digital signatures discussed above, I believe it's really worthwhile to distinguish between the two main aspects of patient-contributed data:

  1. patient generated health data, where there are so many questions about trust overall that signatures don't even make it to top 5
  2. patient facilitated data transfer, where a key issue is whether the data has been tampered with - by the patient.

For case 2, we actually do have a PKI infrastructure in place. SMART launch relies on certificates used to sign the JWT objects. I had thought using those certificates would be enough to prove that the patient has not tampered with the data bundle since it left the source organization. And this could be verified by the destination organization.
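As a sanity check of the mechanics Mikael describes, here is a self-contained sketch of JWT signing and verification. It uses HS256 (a shared-secret HMAC) purely because it needs no third-party crypto library; real SMART deployments use asymmetric keys (e.g. RS256) whose public halves the server publishes, but the tamper-detection property illustrated here is the same:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt_hs256(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt_hs256(token: str, key: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

key = b"shared-demo-key"  # demo secret; asymmetric keys in real SMART
token = sign_jwt_hs256({"iss": "https://ehr.example.org"}, key)
assert verify_jwt_hs256(token, key)

# Any alteration of the payload in transit breaks verification.
header_b64, body_b64, sig_b64 = token.split(".")
tampered = f"{header_b64}.{body_b64}X.{sig_b64}"
assert not verify_jwt_hs256(tampered, key)
```

With asymmetric keys the destination organization would fetch the source's public key and run the same check without ever holding a shared secret.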

view this post on Zulip Dave deBronkart (Jul 14 2020 at 09:54):

I'm newly intrigued, now, with the idea of tapping retired or underused people with clinical experience, as a resource to help with the process of getting data into computers and/or helping people understand the information, detect errors and get them fixed, etc.

Through the years I've encountered several who spoke support for the cause and said wistfully that they wished their employer would support such work. I wouldn't be surprised if they'd like to take it on as a part time activity now. I wish I'd kept a list!

view this post on Zulip Dave deBronkart (Jul 14 2020 at 10:01):

("Retired or underused" includes people I've met who got fed up with the system and moved into other fields, or elsewhere in healthcare away from patient data ... their knowledge of the data and its context is still valid. It also includes, though, people such as Case Managers, who are experienced navigators, sometimes the only one in the whole ecosystem who proactively and reactively defends the needs of a patient.

@Hamish MacDonald you should probably note them as a possible source for that role in your hoped-for ecosystem. :slight_smile:

I'm sure you're also thinking about experienced family caregivers / "care partners", who play a similar role but are usually unpaid.

view this post on Zulip Hamish MacDonald (Jul 14 2020 at 10:23):

@Dave deBronkart Absolutely. I wish you had kept that list too! 1) Retired or "underused" healthcare professionals are such a huge waste of experience and knowledge. Many only retire because they are burned out from full-time work (especially hospital nurses, it seems to me) but would like to keep their hand in by helping people navigate the system, and/or organize people's data, on their own timetable and at home. And yes, there should be an economy built around that collective brains trust, because it is certainly of value. 2) The same goes for experienced family caregivers. How many volunteer "specialists" are out there who learned so much about a particular condition by caring for a loved one over a long period of time, including keeping up with all the data and information, as well as following new treatment outcomes and research in general?

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 14:12):

@Mikael Rinnetmäki I didn't realize that SMART relied on PKI. What registry is used to expose the public keys?

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 15:37):

@Lloyd McKenzie the public keys are published and advertised in the .well-known directory of the (auth) server, according to OpenID. https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata
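For concreteness, a small sketch of that discovery step (no network calls are made; the issuer URL and metadata document are fabricated examples in the shape the OpenID Connect Discovery spec defines):

```python
import json

def discovery_url(issuer: str) -> str:
    # Per OpenID Connect Discovery, provider metadata is served under
    # this well-known path relative to the issuer URL.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# A trimmed example of the JSON document such an endpoint returns
# (all URLs here are made up for illustration).
metadata_json = json.dumps({
    "issuer": "https://ehr.example.org",
    "jwks_uri": "https://ehr.example.org/.well-known/jwks.json",
    "authorization_endpoint": "https://ehr.example.org/oauth/authorize",
})

metadata = json.loads(metadata_json)
jwks_uri = metadata["jwks_uri"]  # where the public signing keys are published
```

A verifier would then fetch the JWK set at `jwks_uri` and match the key against the `kid` in the token header.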

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 15:40):

Ok. That works well for SMART where the location of the auth server is identified by redirects when making the SMART call. How would we envision those certificates to be identified (and known to be trustworthy) when receiving data that has passed through multiple hands before it gets to someone who wants to verify the signature? It's not clear to me that the SMART infrastructure will work in that distinct environment...

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 15:45):

In my view the party wishing to verify the signature should retrieve the public key from the key store advertised in the bundle, and trust that if the key from that server verifies the signature, the contents of the bundle must have originated from that server.

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 15:48):

SMART also has its own version of well-known directory, in case we don't want to rely on keys of the auth server
http://www.hl7.org/fhir/smart-app-launch/conformance/index.html#using-well-known. There's no entry for jwks_uri, but there could be.

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 15:48):

It's not clear to me that there will be a Bundle at all. I would expect exchange to be at the level of individual resources, not necessarily a collection. If you receive a data dump from source A, you're not necessarily going to want to relay that whole dump to provider B (and provider B might well not want to receive all of it). Also, where in the Bundle would the key store be advertised?

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 15:51):

Right, it doesn't need to be a Bundle. I suggested that as the simple case - there the patient wouldn't even have tampered with the contents of the data dump. But I agree being able to sign individual resources is much more flexible. Perhaps something to be discussed in the white paper?

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 15:53):

And I wasn't trying to say we have everything ready to sign and transmit the data dump, just that we have a working infrastructure for PKI.

view this post on Zulip Michele Mottini (Jul 14 2020 at 15:53):

SMART backed auth uses public / private keys, but standard SMART provider or patient authorization (that is what is usually available) does not

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 15:59):

Still, the infrastructure is there, and the keys are advertised in and retrievable from publicly available urls. Right?
So we could envision a solution where organizations would make information available for patients, signed with their keys. And other organizations then wishing to inspect the signature being able to do so. Yes?

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 16:02):

In SMART, there's trust established between the app and the server - otherwise the app doesn't even get a chance to authenticate. That trust is established through some sort of legal agreement. And that trust includes knowledge of the web address at which requests can be made (which then ties to the location of the keys). However, in the patient courier scenario, there's no trust agreement between provider A and provider B, so no trusted URL to use to find the keys - which is why I was asking about directory.

view this post on Zulip John Moehrke (Jul 14 2020 at 16:05):

keys used in SMART app signing are issued for the crypto purpose of code signing, not data signing. And they are issued to app providers, there is no identity assurance. In fact there is push back to some of the vendors that do try to do identity verification of the app prior to issuing a certificate for app use.

view this post on Zulip John Moehrke (Jul 14 2020 at 16:06):

PKI is not hard... the technology has been around for decades. What is hard is trust, that is building a community of trust. PKI can be the technology that technically binds the trust community. But the PKI is less important than the trust community.

view this post on Zulip John Moehrke (Jul 14 2020 at 16:09):

I think the DirectTrust.org trust network is far more broad and mature. It has focused on providers, not app writers. (I might argue that a missed opportunity for DirectTrust.org was to add FHIR-App trust)

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 16:09):

SMART app signing (code signing) is yet another thing. The keys I was referring to are the keys hosted by the healthcare organization, which are used to sign the JWT tokens used in auth.

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 16:11):

And we're seeing some registries (lists) of provider endpoints. Other providers could, for instance, use those endpoint lists when evaluating how much they'd trust the provided data bundle.

view this post on Zulip John Moehrke (Jul 14 2020 at 16:11):

okay, different but still not data signing purpose

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 16:13):

I fully agree this is far from trusting a piece of data to be absolutely true. Or enabling the provider to assess the quality or trustworthiness of the data or the original provider of the data. This would be just to prove that the patient as the courier has not tampered with the data.

view this post on Zulip John Moehrke (Jul 14 2020 at 16:17):

given that SMART implementations tend to put the authorization server tightly coupled with the Resource Server, it is a rather natural thing to then say that the Resource Server will also auto-sign all data it exposes with a Provenance.signature. This could be done at the Resource-by-Resource level, and thus a Provenance in this case would be just a statement of "this service exported this object with this signature". That Provenance could be ignored, like most Provenances are... but if carried along to downstream use-cases, that Provenance could be used to do digital signature validation, and the certificate within the signature could be used for the identity of the signer, and the certificate issuer could be probed for validity (non-revoked).
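A sketch of what such an auto-generated export Provenance might look like, built in Python. Field values, the organization reference, and the use of a bare hash in place of a real signature are all illustrative assumptions; a real implementation would follow the Provenance resource and Signature datatype definitions exactly:

```python
import base64
import hashlib
import json
from datetime import datetime, timezone

def export_provenance(resource: dict, org_ref: str, sig_bytes: bytes) -> dict:
    """Sketch of a Provenance recording 'this server exported this resource',
    with the signature carried in Provenance.signature."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "resourceType": "Provenance",
        "target": [{"reference": f"{resource['resourceType']}/{resource['id']}"}],
        "recorded": now,
        "agent": [{"who": {"reference": org_ref}}],
        "signature": [{
            # Signature type coding shown for shape only; consult the
            # Signature datatype for the correct code to use.
            "type": [{"system": "urn:iso-astm:E1762-95:2013",
                      "code": "1.2.840.10065.1.12.1.5"}],
            "when": now,
            "who": {"reference": org_ref},
            "data": base64.b64encode(sig_bytes).decode(),
        }],
    }

obs = {"resourceType": "Observation", "id": "bp-1", "status": "final"}
# Hash stands in for a real signature produced with the server's key.
fake_sig = hashlib.sha256(json.dumps(obs, sort_keys=True).encode()).digest()
prov = export_provenance(obs, "Organization/example", fake_sig)
```

Downstream systems that don't care can ignore the Provenance; ones that do can pull the cert out of the signature block and validate it against their trust network.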

view this post on Zulip John Moehrke (Jul 14 2020 at 16:20):

note that with digital signatures intended to be persisted and re-verified, one does carry the signing cert. The trick, which is what I said long ago, is that the signing cert must be signed by a certificate authority that is trusted. Trust is the hard part.

view this post on Zulip John Moehrke (Jul 14 2020 at 16:23):

note that this Provenance of export is very much what the Basic Provenance IG asks for; except that Basic Provenance IG does not specify the .signature element use. It does indicate that all exports need to have Provenance of that act of exporting. So this is just augmenting that IG with "oh and also include digital signature signed by your auth server key".

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 16:56):

Ok - say I receive a resource (via a Patient acting as data mule) that is accompanied by a signed Provenance instance. What's my process - with current infrastructure - for verifying that the data did indeed come from a health provider system and was not tampered with by the patient or other intermediary?

view this post on Zulip John Moehrke (Jul 14 2020 at 17:17):

In my case, I mandated that the Provenance you see is a Provenance of the export act of THAT one resource; thus the Provenance.signature would be the signature of just THAT one resource (setting aside xml<-->json conversions). You can easily do the technical signature validation: check that THAT resource, serialized and hashed, matches the hash found in the signature. You can trust the signature because the signing certificate can be proven to chain to the organization that exported THAT resource. @Mikael Rinnetmäki is adding that the organization might re-use their OAuth token signing certificate for this export Provenance.signature signing purpose. I am still not convinced that all the certs in the SMART trust should be considered trustworthy data exporters (this is the hard trust problem).
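The verification step John describes - serialize THAT resource, hash it, and compare against what the signature carries - can be sketched as follows. A bare hash stands in for a real digital signature plus certificate-chain check, and the sorted-JSON canonical form is an assumption:

```python
import base64
import hashlib
import json

def canonical_bytes(resource: dict) -> bytes:
    # Assumed canonicalization: sorted-key compact JSON. This is exactly
    # where the json/xml/turtle encoding problem raised earlier bites.
    return json.dumps(resource, sort_keys=True, separators=(",", ":")).encode()

def verify_export(resource: dict, provenance: dict) -> bool:
    """Check that the value carried in Provenance.signature.data matches the
    resource as received. (A real check would verify an actual signature and
    the signer's certificate chain, not a bare hash.)"""
    carried = base64.b64decode(provenance["signature"][0]["data"])
    return hashlib.sha256(canonical_bytes(resource)).digest() == carried

resource = {"resourceType": "Observation", "id": "a1c",
            "valueQuantity": {"value": 6.1}}
prov = {"resourceType": "Provenance",
        "signature": [{"data": base64.b64encode(
            hashlib.sha256(canonical_bytes(resource)).digest()).decode()}]}

assert verify_export(resource, prov)           # untouched: verifies
resource["valueQuantity"]["value"] = 5.0       # courier edits the value...
assert not verify_export(resource, prov)       # ...and verification fails
```

The mechanics are easy; deciding whether to trust the certificate that produced the signature is, as the thread keeps concluding, the hard part.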

view this post on Zulip John Moehrke (Jul 14 2020 at 17:20):

possibly a Validated Provider Directory (VhDir)-like directory of trustable organizations. This might publish their signing certs, and/or manage the PKI chain.

view this post on Zulip John Moehrke (Jul 14 2020 at 17:21):

and this is how we get back to ... trust is hard work... often too hard to justify the benefit... far cheaper to just have doctors ask patient and re-enter the data.

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 17:28):

What I think I'm hearing is that my initial assertion is correct - there isn't an existing infrastructure of public/private keys that is broadly used and trusted by industry and that SMART doesn't require such infrastructure to work - i.e. self-signed certificates plopped on a server are totally sufficient for SMART needs. Is that accurate?

view this post on Zulip John Moehrke (Jul 14 2020 at 18:06):

I agree, as I agreed with you before. The best hope is a National Provider Directory 'network', or DirectTrust.org.

view this post on Zulip Dave deBronkart (Jul 14 2020 at 19:27):

This is really valuable stuff, y'all - is it feasible to distill it into some take-aways? I'm guessing this could make this topic more efficient on Thursday's agenda. Or maybe not - I'll leave that to you. (I haven't been able to keep up!)

tagging @Jan Oldenburg and @Maria D Moen who "own" this project in our WG

view this post on Zulip Lloyd McKenzie (Jul 14 2020 at 20:16):

My take-away is that we shouldn't rely on solutions that require signatures unless we're confident there's existing infrastructure we can piggy-back on

view this post on Zulip Michele Mottini (Jul 14 2020 at 20:37):

Still, the infrastructure is there, and the keys are advertised in and retrievable from publicly available urls. Right?

@Mikael Rinnetmäki no, typically there are no keys - have a look at the end points listed at https://open.epic.com/MyApps/Endpoints

view this post on Zulip John Moehrke (Jul 14 2020 at 20:49):

Lloyd McKenzie said:

My take-away is that we shouldn't rely on solutions that require signatures unless we're confident there's existing infrastructure we can piggy-back on

yes, but ... this is true about anything. Digital Signatures are powerful, but to make them really work you need a trust network for identities, and trustable time (we did not talk about time, but it is just as much a problem).
I have been involved with Digital Signature standards since the 1990s, and it always comes down to the fact that whatever benefit Digital Signatures might bring is overwhelmed by the costs of doing it right. If you are not going to do it right, then you might as well not include any signature.

view this post on Zulip Jan Oldenburg (Jul 14 2020 at 20:49):

Josh Mandel said:

Verifiable data is critical. This is one of the areas I've been digging into with "verifiable credentials" (e.g. for COVID 19 test results). There's a great opportunity to combine FHIR with a set of emerging specifications from W3C to support this kind of trust.

I think this issue is related to our Patient Engagement workgroup on correcting data, however, as we want patients to be able to either alter or footnote or otherwise flag data that they know to be incorrect before sending it on. And yes, provenance should flag that the data has been changed--ideally it should show the specific data element that has been changed rather than throwing the entire data set out the window.

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 20:50):

@Lloyd McKenzie the way I'd implement your process would be this:

You’d use an app or a feature of your EHR, that would

  1. Inspect the origin of the resource from the Provenance. This includes the URL of the resource server.
  2. Contact the resource server, get the auth server info from the SMART .well-known data.
  3. Get the information of the auth server.
  4. Get the signing keys from the signing server.
  5. Verify that the signature of the Provenance matches the retrieved public key.
  6. Check whether the resource server URL is already whitelisted by your organization.
  7. If the resource server URL is not whitelisted, present you with the information present on the SSL certificate (!) of the resource server, asking you whether you want to trust it.
  8. Tell you that the resource is in fact authentic and originates from the resource server and has not been modified by the patient.

Note that communication with both the resource server and the auth server is through SSL, and the app will verify that the SSL certificates match what’s expected.

Your organisation may have chosen to trust, for instance, all organisations listed in https://open.epic.com/MyApps/EndpointsJson, or to inspect that list from time to time and reflect it in its own hosted whitelist.
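Mikael's steps 1-8 could be wired together roughly as follows. Every function and field name here is hypothetical, and the network and crypto machinery (steps 2-5) are stubbed out; the sketch shows only the control flow, including the allowlist fallback.

```python
# Hypothetical sketch of the verification flow above; not a real API.

def fetch_signing_key(resource_server: str) -> str:
    # Steps 2-4: a real client would fetch the SMART .well-known data
    # from the resource server, locate the auth server, and download its
    # public signing keys. Stubbed here.
    return "demo-key"

def signature_valid(resource: dict, provenance: dict, key: str) -> bool:
    # Step 5, stubbed: stands in for real public-key signature checking.
    return provenance.get("sig") == (key, hash(frozenset(resource.items())))

def verify_courier_resource(resource, provenance, allowlist, ask_user):
    # Step 1: origin of the resource, taken from the Provenance.
    resource_server = provenance["agent_url"]            # hypothetical field
    key = fetch_signing_key(resource_server)
    if not signature_valid(resource, provenance, key):
        return "reject: signature invalid or resource modified"
    # Steps 6-7: allowlist check, falling back to asking the user whether
    # to trust the server's certificate details.
    if resource_server not in allowlist and not ask_user(resource_server):
        return "reject: origin not trusted"
    # Step 8: authentic, untampered, and from a trusted origin.
    return "accept"
```

If any step fails, the app warns the user rather than silently accepting, matching Mikael's note below about failed expectations.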

However, when I present you with my information, I don’t expect your organization to automatically trust that information. Rather, the app would tell you that it originates from a server with information:

  • Country or Region: FI
  • Locality: Helsinki
  • Inc. Country/Region: FI
  • Organization: Kansaneläkelaitos
  • Business Category: Private Organization
  • Serial Number: 0246246-0
  • Common Name: phr.kanta.fi

Then it would be up to you whether you want to dig deeper into who operates the service at kanta.fi, but the app or feature of your EHR would give you assurance that that’s where the resource originates from, and that the patient has not modified the resource.

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 20:52):

And of course if any of the steps 1-8 fails the expectations of the app (certificates not matching the given urls or signatures), the app would warn you that the resource cannot be confirmed as being unmodified.

view this post on Zulip John Moehrke (Jul 14 2020 at 21:04):

An alternative is to have a Provenance Service -- that is a service that manages Provenance for everyone. It takes in Provenance statements, it responds to queries for Provenance Statements. It is responsible for integrity, and authenticity of the Provenance statements. BUT who would be "the" holder of this Provenance Service????

view this post on Zulip John Moehrke (Jul 14 2020 at 21:04):

BlockChain as a Provenance Service is a realistic use of BlockChain. So, given the solution we have been talking about, the difference is that upon export the resources are returned AND a Provenance record is submitted to the Provenance BlockChain. Thus the Provenance overhead does not overwhelm the REST search results, as the app gets only what it asked for. No need for contained Provenance. Thus any downstream use of the resource (even many hops away) can query the blockchain for a Provenance record for THAT resource. The blockchain covers the timestamp. The blockchain covers the signature. The blockchain covers an identity. And if the blockchain is a permissioned blockchain, where all participants are required to validate that the identities of Provenance submitters are in-the-trust-network (the blockchain cohort), then that also automatically handles the trust network. This also simplifies the content in Provenance.signature.blob, as that can just be a hash (SHA-256) of the resource, since the Provenance gets covered by the blockchain signature. One solution also obfuscates the URI to the resource by using the hash of that URI as the identifier (thus Provenance.target.identifier.value is the hash of the target id). Provenance.agent would include that organization's blockchain id. So some pattern data leakage, but limited.

view this post on Zulip Jan Oldenburg (Jul 14 2020 at 21:06):

Mikael Rinnetmäki said:

Regarding the digital signatures discussed above, I believe it's really worthwhile to separate between the two main aspects of patient contributed data

  1. patient generated health data, where there are so many questions about trust overall that signatures don't even make it to top 5
  2. patient facilitated data transfer, where a key issue is whether the data has been tampered with - by the patient.

For case 2, we actually do have a PKI infrastructure in place. SMART launch relies on certificates used to sign the JWT objects. I had thought using those certificates would be enough to prove that the patient has not tampered with the data bundle since it left the source organization. And this could be verified by the destination organization.

@Mikael Rinnetmäki I think your distinction between patient generated data and patient facilitated data transfer is excellent. But I think there's a problem with a PKI infrastructure that certifies the bundle rather than the individual data elements. I can imagine a patient who has tried and failed to get a correction to his/her records wanting to change a single piece of data before transmission. Shouldn't provenance be able to distinguish at that granular level?

view this post on Zulip John Moehrke (Jul 14 2020 at 21:06):

note this is the data equivalent of what the food industry is claiming is the solution to farm-to-fork provenance

view this post on Zulip Mikael Rinnetmäki (Jul 14 2020 at 21:24):

@Jan Oldenburg yes, indeed.

Mikael Rinnetmäki said:

Right, it doesn't need to be a Bundle. I thought that as the simple case - then the patient wouldn't have even tampered with the contents of the data dump. But I agree being able to sign individual resources is much more flexible. Perhaps something to be discussed in the white paper?

view this post on Zulip Lloyd McKenzie (Jul 15 2020 at 02:54):

If you have a signed version of the original and then send your corrected version, it'll be pretty obvious what you changed (and what you didn't) and you can author your own Provenance to explain the change. Signatures below the level of the resource are possible, but the overhead would be too expensive and I don't think it would be necessary.
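Lloyd's point (given a signed original, the patient's edits are self-evident) can be illustrated with an ordinary line diff. This is a sketch; a production system would more likely diff at the FHIR element level rather than over serialized text.

```python
import difflib
import json

def resource_diff(original: dict, corrected: dict) -> list:
    # Serialize both versions the same way, then diff line by line,
    # making explicit what the patient changed (and what they didn't).
    a = json.dumps(original, sort_keys=True, indent=2).splitlines()
    b = json.dumps(corrected, sort_keys=True, indent=2).splitlines()
    return list(difflib.unified_diff(a, b, fromfile="signed original",
                                     tofile="patient-corrected", lineterm=""))
```

The patient can then author their own Provenance describing exactly the lines that differ.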

view this post on Zulip Nancy Lush (Jul 15 2020 at 12:00):

Provenance is important

As we expand our exchange use cases, whether it is more clinician to clinician collaboration or collaborations that include the patient, we will undoubtedly need to address possibilities where trust could be compromised. Our core solutions should certainly include trust, or interoperability will be for naught.

However, portions of this discussion smack of paternalistic protectionism that too often results in data blocking. Naturally, when we are considering protections for controlled drugs, we need to add additional checks and balances to protect against such transgressions. But currently, when a patient sees a new provider and their medical record is not available, the provider asks the patient for their list of pre-existing conditions, their current med list, and a long list of other questions. This happens routinely now, and that data becomes the basis for their medical record at that location.

This workgroup is considering new functionality that will both benefit the patient and improve overall care. For instance, if a patient views their record and determines they have meds listed that they have never taken or a diagnosis that has never occurred, they might like a way to provide that feedback so that their record could be corrected. It is not an unreasonable request. We certainly can define systems and processes to make this work which minimizes burden for both patients and providers.

view this post on Zulip David Pyke (Jul 15 2020 at 12:46):

It is very important that a patient can amend their record. Our only concern is that any change is shown and tracked, so that a clinician is able to see that a change has been made. A patient's feedback is a necessary channel for proper care. It's changes that aren't shown, or are made illicitly, that are a potential issue, so we are only interested in preventing changes that can't be tracked.

view this post on Zulip Nancy Lush (Jul 15 2020 at 13:17):

I agree David. Since I was reading the entire thread in one sitting, I felt the need to reset the perspective. Do notice the difference between a patient changing their record and the ability to provide feedback. I am confident that the many nuances will be addressed in the research along with suggestions for workflows that work for all. The discussion is valuable.

view this post on Zulip Lloyd McKenzie (Jul 15 2020 at 14:07):

I don't think the intent of the discussions is to be paternalistic, but rather to identify the concerns that will need to be addressed for any technical solution we create to be viable. Some of those concerns are legitimate, some perhaps less so, but failing to address them is likely to severely limit uptake. It's essential that practitioners have confidence in the mechanisms and processes behind whatever we come up with.

I don't think there are concerns about automating and increasing the efficiency and transparency associated with requests for correction. That (in principle) benefits both patients and providers. The concerns we're focused on addressing are more around data that passes through a patient's hands and how to ensure that providers have confidence in the fidelity of that data. When data passes through a patient's hands, they have the ability to correct the data without making a request. That can be a pro or a con, depending on the situation. If we don't address the "con" side, then all data that's passed through a patient's hands risks being treated as suspect - which isn't what we want. So we need a viable way of establishing trust in whether data has been subject to manipulation or not.

view this post on Zulip Debi Willis (Jul 15 2020 at 15:01):

There really are 3 types of data we are discussing here:
1) patient generated health data: where a patient is reporting their medical history, answering questionnaires, sharing device data, etc. (similar to the questions a provider asks and types into the EHR during an office visit)
2) patient facilitated data transfer: where a patient is "forwarding" data received from another provider
3) patient request for corrections: where a patient is identifying an error and requesting a change be made in the EHR

I think #2 is where the concern is... how does the receiving provider know the patient did not make changes before they forwarded the data? I think this is probably similar to the paper world right now. How does a provider know that information given to them by a patient on another provider's letterhead is really from that provider? A patient can create a document with a provider's letterhead.

I think it is worth looking at what pieces of the record are the most concern. It sounds like it is medications. So, how are clinics handling that now? How are they identifying a drug seeking patient?

view this post on Zulip John Moehrke (Jul 15 2020 at 15:23):

Yes @Debi Willis, the signature discussion is dominantly about #2 in your list. First-person Provenance is easy; two or more levels away, Provenance becomes more important.

view this post on Zulip John Moehrke (Jul 15 2020 at 15:25):

Note there are use cases where a device that is reporting should be able to sign the data it is reporting, for the same reason: the data will be used two or more hops away from the device. So your #1, when the patient is using a device capable of automated reporting, might fall under this too.

view this post on Zulip John Moehrke (Jul 15 2020 at 15:26):

We have had this discussion in the Medical Device space for years; adding consumer-grade devices will make this more critical.

view this post on Zulip Maria D Moen (Jul 15 2020 at 19:56):

In response to Nancy's comment above - without looking at patient corrections, but focusing instead on patient contributed data: if a person has an advance directive or other advance care plan that they provide to an EMR, or that is queried by an EMR, the document is patient contributed health data but is sourced from the patient, not a clinician. I'm interested in that use case as part of what Jan and I are working on.

view this post on Zulip Virginia Lorenzi (Jul 16 2020 at 04:03):

@Jan Oldenburg said: "however, as we want patients to be able to either alter or footnote or otherwise flag data that they know to be incorrect before sending it on." @Debi Willis @Abigail Watson So this seems to be implying that a patient may want to correct or respond to their data and send notes and annotations referencing parts of their data. Not sure how that would be represented in FHIR. Would need to get our use cases done first. But interesting...

view this post on Zulip Virginia Lorenzi (Jul 16 2020 at 04:06):

Also, when a patient emails or calls to ask "did you get the correction I sent in on May 5th?", you could find it. Patient Correction is an event that needs to be tracked in the record in some way.

view this post on Zulip Virginia Lorenzi (Jul 16 2020 at 04:07):

@Debi Willis don't forget patient sending in consents.

view this post on Zulip Abbie Watson (Jul 16 2020 at 04:34):

Seems like those notes and addendums would just be part of the Bundle that contains the Continuity of Care Composition.

On a side note, what is the file extension of the file that we write to? Are they just plain-old .json files? Or should they be .fhir files? Patients will want to set up their computers so after they download their Continuity of Care Document (is that a .ccd file? password protected or no?) they can then right-click on the file and open it with the editor of their choice, where they can then add their notations. That can be facilitated with the correct file extension.

Pragmatically speaking, all these use cases seem to need an import/export function; and I think defining a patient-friendly standard for file extensions could be low hanging fruit for us.

view this post on Zulip John Moehrke (Jul 16 2020 at 12:34):

The Implementation Guide on the topic of a patient correcting a mistake they find is not just a file. It is a workflow - a workflow that will have a few variants. I recommend we first describe that workflow logically, in abstract terms. This then gives us a pattern that we can apply to FHIR, then to CDA, and then to something else. As a workflow, we can leverage the Privacy Principle that enables the subject of data to impact the quality of the data - that is to say, we have Privacy Principles on our side. We also have Medical Safety on our side. What we do need to do, and why I think this is a workflow, is to express that there will need to be tasks along the pathway: data are determined by the patient to be wrong, the patient identifies the correction, the patient communicates the correction request, ... investigation... confirmation... data are changed... provenance recorded of the change... investigation of why the data were wrong... communication back to the patient that the data was corrected... communication to all that had used the incorrect data that it was corrected. This abstract workflow would simplify in environments where these intermediate steps are determined not to be needed.

view this post on Zulip Lloyd McKenzie (Jul 16 2020 at 13:22):

In general, we shouldn't rely on packaging information as compositions/Bundles. Doing that fits nicely into the behavior of existing systems - which take the Bundle/document and stick it off into the corner where the data doesn't get integrated unless someone in the system chooses to copy and paste info into the 'real' system.

view this post on Zulip Abbie Watson (Jul 16 2020 at 18:23):

Mmmm, I agree, with the caveat that the 'file' is one of the major artifacts that the patient will interact with during the workflow. People know how to use Microsoft Word, and how to save word files. They set up custom ad-hoc workflows on their computers. Launch my .png files with Pixelmator not Photoshop; or open .html files with Dreamweaver not Safari. We need to think in those terms, and not assume that patient workflows will look like hospital workflows.

I would posit that file extensions are used precisely to streamline workflows. Otherwise, we'd simply treat every file as binary or text and call it a day. But we don't. File extensions define file type which enable autoselection of appropriate workflow tools.

In the rest of the computing world, this is so automatic that we forget it's even happening. But since healthcare is only just now entering the 21st century and getting on this new-fangled 'Web', we need/get to revisit things like mime types and file extensions in the context of healthcare.

view this post on Zulip John Moehrke (Jul 16 2020 at 18:27):

there was a group in FHIR looking at defining a file... possibly #storage for FHIR

view this post on Zulip Lloyd McKenzie (Jul 16 2020 at 19:48):

If the data gets stored on their machine, .fhir would be fine. My premise though is that the submissions, by and large, will be through apps, not emailing an attachment.

view this post on Zulip Abbie Watson (Jul 16 2020 at 22:08):

Mmmm, I have a feeling files are going to kick around for a long time to come. People keep these records around for decades, migrating from one system to the next. Makes me think this local storage question is maybe low-hanging fruit for the Patient Empowerment group.

I'm good with .fhir extensions. Can say from personal trial-and-error that using the resourceTypes themselves (i.e. .medication and .observation extensions) sort of works, but not well enough to continue doing so. So, our data import/export module has evolved to support the following naming convention:

JaneDoe.fhir
JaneDoe.ccd
JaneDoe.Bundle.fhir
JaneDoe.Medications.fhir
JaneDoe.Observations.fhir
JaneDoe.*.fhir

Converted into a standards rule, it would maybe be something along the lines of:

  • Files with a .fhir extension SHALL be a .json or .xml data type and validate successfully against a FHIR schema.
  • Files with a .resourceType.fhir extension SHALL have a matching resourceType field.
  • Files with a .ccd extension are equivalent to a .Bundle.fhir extension, and SHALL be signed and zipped (i.e. .ccd is similar to a .tgz file).

And I bring this all up, because I think the difference between a .fhir file and a .ccd file gets to the heart of the discussion around provenance, and how different file extensions could enable different workflows.

The .ccd can be conceptually understood like a sealed envelope, with that timestamp and cryptographic signature. Take a bunch of .fhir files, put them together, sign, zip and it's a .ccd file. But if a patient or clinician wants to break it up into component pieces, they can. They lose the signature and notary stamp, as it were. But then they can slice, dice, and re-assemble the contents as they want. Which then allows them to craft amendments, retractions, etc.

Of course, an app would be taking care of most all of this behind the scenes. So to Debi's model, we would have:

  1. Gathering .fhir files of medical history, questionnaires, etc. and self-signing a .ccd.
  2. Forwarding a signed .ccd file from one clinician to another.
  3. Receiving a signed .ccd file with errors, breaking it apart, taking the .fhir record of interest, and adding it to a new bundle.
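The sealed-envelope model could be sketched like this, with a SHA-256 manifest standing in for the cryptographic signature and notary stamp. The manifest layout and function names are invented for illustration, not any standard:

```python
import hashlib
import io
import json
import zipfile

def pack_ccd(resources: dict) -> bytes:
    # Bundle .fhir files into a 'sealed envelope' zip, plus a manifest
    # of SHA-256 hashes standing in for the signature / notary stamp.
    buf = io.BytesIO()
    manifest = {}
    with zipfile.ZipFile(buf, "w") as z:
        for name, resource in resources.items():
            payload = json.dumps(resource, sort_keys=True).encode("utf-8")
            manifest[name + ".fhir"] = hashlib.sha256(payload).hexdigest()
            z.writestr(name + ".fhir", payload)
        z.writestr("manifest.json", json.dumps(manifest))
    return buf.getvalue()

def unpack_and_check(ccd_bytes: bytes) -> bool:
    # Break the envelope apart and confirm every .fhir file still
    # matches the manifest (i.e. the seal is intact).
    with zipfile.ZipFile(io.BytesIO(ccd_bytes)) as z:
        manifest = json.loads(z.read("manifest.json"))
        return all(hashlib.sha256(z.read(name)).hexdigest() == digest
                   for name, digest in manifest.items())
```

Slicing out a component .fhir file loses the seal, exactly as described above; re-assembled contents would need to be re-signed by whoever did the assembling.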

view this post on Zulip Hamish MacDonald (Jul 22 2020 at 09:55):

https://chat.fhir.org/#narrow/stream/179262-patient-empowerment/topic/Meaty.20meeting.20today.20-.20see.20minutes!/near/203665072

Lloyd McKenzie said:

Provenance supports signing. The challenge is that digital signatures require a public key infrastructure that doesn't exist all the places it would need to.

This has been an excellent conversation; a lot has come out of it. In reference to @Dave deBronkart 's "Is there any way out of this?" question, and following a discussion today with @Mark Braunstein and @Michael van Bokhoven , I would like to propose stripping it right back to
F+S=P, or "FHIR + Signing = Provenance". Any objections / thoughts on this?

view this post on Zulip John Moehrke (Jul 22 2020 at 12:07):

I don't understand what you mean. It is certainly true that the method for signatures in FHIR is to use Provenance. But I am not sure that statement solves the certificate trust problem that any form of digital signature presents.

view this post on Zulip John Moehrke (Jul 22 2020 at 12:19):

It should be noted that there is one more trust model that was not brought up: the model that PGP tends to rely on, where no certificate authority is mandated. You (the recipient) keep a book of certificates that you personally trust because of personal effort to obtain the certificates of those individuals you know are trustworthy. This does not scale, but it does have very low startup cost. The certificates could be self-signed, or could be CA-issued; you could choose to trust an individual certificate, or you can choose to trust some CA too. The burden is on the recipient to keep their network managed, removing trust when their friend calls them and confesses to having lost their keys. Hence why it is hard when one gets beyond 50-100. From a process point of view, a recipient only needs to consult this 'book of certificates' when they choose to validate a signature. Thus where they are confident of the pathway the data traveled, they choose not to validate the signature. Where they are suspicious, they pick up the phone and call the person indicated as signing the data, that person confirms their certificate hash, and now the signature validation can happen. Note that a Digital Signature carries within it the certificate, and the certificate contains within it phone numbers and email addresses. This 'book of certificates' can be handled once for an organisation, or individually. This is workable with Digital Signatures because the evaluation of data integrity questions doesn't happen often.
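The 'book of certificates' John describes could be as simple as a set of fingerprints the recipient has personally verified. A minimal sketch, assuming SHA-256 fingerprints over raw certificate bytes (the class and method names are invented):

```python
import hashlib

class CertificateBook:
    """A PGP-style personal trust book: the recipient records fingerprints
    of certs they have verified out-of-band (e.g. by phoning the signer
    and comparing hashes). No certificate authority required."""

    def __init__(self):
        self._trusted = set()

    @staticmethod
    def fingerprint(cert_bytes: bytes) -> str:
        return hashlib.sha256(cert_bytes).hexdigest()

    def trust(self, cert_bytes: bytes) -> None:
        self._trusted.add(self.fingerprint(cert_bytes))

    def revoke(self, cert_bytes: bytes) -> None:
        # "their friend calls them and confesses to have lost their keys"
        self._trusted.discard(self.fingerprint(cert_bytes))

    def is_trusted(self, cert_bytes: bytes) -> bool:
        return self.fingerprint(cert_bytes) in self._trusted
```

The manual trust and revoke steps are exactly the per-partner burden that makes this model hard past 50-100 contacts.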

view this post on Zulip Dave deBronkart (Jul 22 2020 at 13:37):

I love how conversations around here blend rigor and pragmatism. As someone new to HL7 (and health IT standards in general), it reminds me of the eternal reality in the patient data access world: observers often say "That's already handled in regulations - providers have to do it," while patients in crisis suffer from the reality that some don't. It's always seemed to me that we need both - keep pushing for a "clean," rigorous reality, but don't let that stop us from providing a practical reality for people who are in need right now.

(If I'm misunderstanding what y'all are talking about, let me know.)

view this post on Zulip Abbie Watson (Jul 22 2020 at 14:07):

+1 for the PGP model, which can be used to sign zip files. Let's make the .ccd extension a PGP Zip file with .fhir contents. I know it's not perfect; but it's a tangible something to start with and has fairly strong precedent and tooling in place.

view this post on Zulip Lloyd McKenzie (Jul 22 2020 at 14:20):

F+S=P is going to be confusing in the FHIR context where Provenance is the name of a resource and signing it is neither required nor common.

Digital signatures and blockchain are both 'technical' solutions that could offer what we're looking for, but they come with both set-up and ongoing costs. They also require a degree of coordination/management across the space they're being used. If we're not piggy-backing on an existing infrastructure, it's going to be hard to roll out a solution that uses either of them for the problem we're trying to solve given that convincing provider organizations to solve it at all is an up-hill battle. (Getting healthcare organizations to expend significant initial and ongoing resources to enable it is pretty much a non-starter.)

@John Moehrke PGP as a solution would mean we'd need to convince all of the relevant provider organizations to use it. And their communication partners will tend to scale well past 50-100. We don't need to sign the data coming from the patient/caregiver. We need the providers to sign the data before the patient/caregiver transport it to someone else. I can't imagine PGP scaling to work for providers.

view this post on Zulip John Moehrke (Jul 22 2020 at 16:51):

Note I did NOT say to use PGP. I said we could use a certificate management model popularized by PGP, that is a model where everyone manages their own trusted set of individual or CA certificates. I was applying that only to the signature function, not the encryption function. When applied only to the signature function it does not require that a publisher knows who will eventually need to validate their signature, they just sign everything. The signature would still be XML-Signature or JSON signature -- both of which have their own implementation problems.

view this post on Zulip Lloyd McKenzie (Jul 22 2020 at 17:35):

Right, but using PGP or not, the approach relies on a relatively small number of communication partners to manage certificate revocation. It also requires that each participant know how to do that and actually do it properly. I don't think that's going to fit well in most provider environments.

view this post on Zulip John Moehrke (Jul 22 2020 at 18:33):

I am offering various models. None are perfect except full-on PKI, so if someone really cares they will use full PKI.

view this post on Zulip John Moehrke (Jul 22 2020 at 18:35):

But to keep saying that signatures are too hard, as the reason not to use signatures, has been stopping ANY use of signatures for 30 years. There are models that can fit and not demand full PKI.

view this post on Zulip Mikael Rinnetmäki (Jul 22 2020 at 21:56):

So what were the weaknesses of my proposal for using PKI infra from SMART, combined with SSL certs of servers?

view this post on Zulip Lloyd McKenzie (Jul 22 2020 at 22:47):

There's no trust infrastructure. With dual-auth SSL (which doesn't happen super widely because even that's a pain), each health system manually chooses to trust the SSL cert of another provider based on out-of-band discussion. Similarly, with SMART, each EHR system chooses to trust a given SMART service (and explicitly enables it). With "patient as courier", there's no relationship between the two provider organizations and thus no 'trust' in the certificate of the source organization. There's no communication from the target organization to the source organization's system (and no way of knowing what the source organization's system even was). Nothing would stop someone from creating a self-signed certificate claiming to be "XYZ hospital" and then using that certificate to sign a bunch of patient resources. The recipient provider would have no way of knowing whether the self-signed certificate was legitimate or not.

view this post on Zulip Brendan Keeler (Jul 23 2020 at 05:53):

Wow, a lot to process in this thread.

I see DirectTrust mentioned but apparently not sufficient. Worth noting mutual TLS is widely deployed nationwide via the Carequality framework as well.

view this post on Zulip Mikael Rinnetmäki (Jul 23 2020 at 11:01):

Lloyd McKenzie said:

There's no communication from the target organization to the source organization's system

The recipient provider would have no way of knowing whether the self-signed certificate was legitimate or not.

In my view there is a way. When inspecting the signature, the recipient provider would obtain the public key from the auth server of the source provider. The SSL of that auth server proves that that is the provider.
The recipient would only need to be careful not to accept resources that claim to originate from https://xyzhospital.malware.com. But I do see a way for them to ensure a piece of content originates from https://xyzhospital.com.
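One subtlety in "be careful not to accept resources that claim to originate from https://xyzhospital.malware.com": the origin check has to compare whole DNS labels, since a naive suffix test is spoofable. A small sketch, illustrative only:

```python
from urllib.parse import urlparse

def origin_trusted(resource_url: str, trusted_domains: set) -> bool:
    # Compare whole DNS labels: a naive endswith("xyzhospital.com")
    # would wrongly accept evilxyzhospital.com, and neither it nor a
    # human skimming the URL reliably catches xyzhospital.malware.com.
    host = urlparse(resource_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in trusted_domains)
```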

view this post on Zulip John Moehrke (Jul 23 2020 at 13:02):

There are solutions, that a few of us have put on the table. Yes, each one has some friction in the way of processing. But in the end, because there is no intergalactic trust fabric, there will always be friction. So one must pick the friction that is the least to the broadest of use-cases that are likely to use the solution.

view this post on Zulip John Moehrke (Jul 23 2020 at 13:04):

The biggest problem with SMART, or TLS, or Direct... is that those certificates are issued with very specific purpose indications, using a certificate outside of the purpose is grounds for considering the Digital Signature invalid, no matter how good the math is. To support national or galactic digital signature trust, one must have a trust fabric that is purpose specific.

view this post on Zulip John Moehrke (Jul 23 2020 at 13:07):

Hence why I proposed a lazy, decentralized one... where someone wanting to validate a digital signature is required to do some discovery activity when they have not previously done that activity. The main difference is that one does not try to inform ALL possible recipients of ALL possible trusts; one builds their own trust based only on their own needs. This need only be done by the recipient that wants to validate signatures. It does not need to be done by intermediaries or recipients that don't care about validating signatures. This will work with self-signed (zero cost) or any other kind of certificate, as long as that certificate is issued for the purpose of digital-signature.
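John's lazy, decentralized model can be sketched as a trust-on-first-use cache: discovery runs once per signer, the decision is remembered, and a certificate issued for the wrong purpose is rejected regardless. Everything here is illustrative, assuming hypothetical fingerprints and a stand-in discovery callback; it is not any FHIR or PKI standard.

```python
# Illustrative sketch of "lazy, decentralized" trust: the recipient keeps
# its OWN trust cache and runs a discovery step (directory lookup, phone
# call, ...) only the first time it sees a signer. Purpose is checked
# first, since a cert issued outside its purpose is invalid regardless.
from typing import Callable

class LazyTrustStore:
    def __init__(self, discover: Callable[[str], bool]):
        self._discover = discover               # out-of-band check, run once per signer
        self._decisions: dict[str, bool] = {}   # cached trust decisions

    def trusts(self, fingerprint: str, purposes: set[str]) -> bool:
        # Reject if the cert wasn't issued for digital signatures,
        # no matter how good the math is.
        if "digital-signature" not in purposes:
            return False
        if fingerprint not in self._decisions:   # discovery only on first sight
            self._decisions[fingerprint] = self._discover(fingerprint)
        return self._decisions[fingerprint]

calls = []
def discover(fp: str) -> bool:
    calls.append(fp)             # stand-in for the real discovery activity
    return fp == "ab12cd34"      # placeholder trust rule

store = LazyTrustStore(discover)
print(store.trusts("ab12cd34", {"digital-signature"}))  # True (discovery runs)
print(store.trusts("ab12cd34", {"digital-signature"}))  # True (cached; no new call)
print(store.trusts("ab12cd34", {"tls-client"}))         # False: wrong purpose
print(len(calls))                                       # 1
```

The cache is what makes the scheme "lazy": no intermediary has to be informed of anything, and recipients who never validate signatures never pay the discovery cost.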

view this post on Zulip Lloyd McKenzie (Jul 23 2020 at 14:33):

@John Moehrke Who would do the 'discovery activity' in a clinical environment? It certainly wouldn't be the clinician. And if it couldn't be automated, it's hard to imagine it flying.

@Mikael Rinnetmäki
"When inspecting the signature, the recipient would obtain the public key from the auth server of the source provider" - So the digital signature would include a URL to the location of the public key on the authorization provider's server. I'm ok with this bit.
"The SSL of that auth server proves that that is the provider" - I don't see how that works. First, the auth provider for a provider organization isn't necessarily specific to that organization. Second, it's not clear how SSL would demonstrate who the provider is. Are you relying on the notion that the auth provider would have a URL base that seems like a provider organization? That would presume that all providers would have a domain name that indicates they're a provider organization (or even have a domain name at all). And it presumes that the domain name of the authorization service is the same as the name of the provider (and that the authorization service doesn't provide authorization for anyone who isn't that provider).

view this post on Zulip Abbie Watson (Jul 23 2020 at 16:00):

Isn't that what .name and other top-level domain names are for? Or could be for? Each provider having a unique URL base doesn't seem all that far fetched nowadays, duplicate names aside.

view this post on Zulip Lloyd McKenzie (Jul 23 2020 at 16:07):

Sure, every provider could have one. But it seems unlikely they'd go through the effort of registering for, paying for, and remembering to regularly renew one for this purpose if they don't need one for some other reason.

view this post on Zulip John Moehrke (Jul 23 2020 at 21:46):

You can use the domain name system (DNS) to distribute certificates, but that does NOT address trust at all. It is easy for an attacker to poison the DNS system.

view this post on Zulip John Moehrke (Jul 23 2020 at 21:46):

Besides, it is not a question of how you get the certificate in your hands... there is a copy of the certificate in the digital signature.

view this post on Zulip John Moehrke (Jul 23 2020 at 21:46):

one must learn of trust

view this post on Zulip John Moehrke (Jul 23 2020 at 21:47):

so... YES, a certificate does contain links to where the certificate can be checked

view this post on Zulip John Moehrke (Jul 23 2020 at 21:47):

but, why should you trust that location?

view this post on Zulip John Moehrke (Jul 23 2020 at 21:50):

As to "who" would do the 'discovery activity'? Likely software designed for the task, hosted by YOUR organization, given YOUR organization's rules about what kind of signer YOUR organization trusts. This could be automated... but worst case it can be done by a phone call by the clinician. The degenerate form is you call up the claimed doctor and talk to them.


Last updated: Apr 12 2022 at 19:14 UTC