FHIR Chat · Google Project Nightingale · Security and Privacy

Stream: Security and Privacy

Topic: Google Project Nightingale


view this post on Zulip Dave deBronkart (Nov 16 2019 at 02:04):

Apologies for the length of this. I'm endeavoring to hand off a fairly fat subject from Social, which people have decided belongs here.


Over in Social, a hefty thread has sprung up since Wednesday's media coverage of the whistleblower on Google's Project Nightingale, a partnership with Ascension Health (the second-biggest US health system) to analyze the medical records of millions of people.

In posting this note I'm not taking sides, but the robustness of the thread indicates the sensitivity of the subject sociologically, distinct from the legal issues. Many point out that in one sense Google is merely a BA (business associate), legally allowed under HIPAA to look at the data with proper authorizations, and that this sort of thing happens all the time. Others point out that many US consumers/patients are not at all reassured by that, and that it's also not clear whether Google has performed proper HIPAA training for all the people who might touch the data, as is required for (e.g.) hospital employees.

I'll note that the air is inflamed (out in the world) regarding Big Tech's perceived cavalier attitude toward extremely personal health data, HIPAA or no HIPAA, partly because of the September stories of period-tracking apps that sent data to Facebook (even for women without FB accounts) and because of Facebook's egregious conduct that led to the recent $5bn FTC fine. It wasn't helped in the least when Zuck then created a new chief privacy officer role and filled it with Michel Protti, previously in charge of partnership marketing (e.g. Analytica). (See the fox/henhouse analogy at the end of my post last week.)

My own view (not that you asked!) is that there is no sure answer to any of this, and we'd be prudent to simply acknowledge the importance of privacy, as well as the difficulty of guaranteeing it. IMO the sensible approach is a blend of constant vigilance and radical transparency.

view this post on Zulip Jose Costa Teixeira (Nov 16 2019 at 10:42):

IMO the sensible approach is a blend of constant vigilance and radical transparency.

Agreed, and my practice has shown that vigilance and transparency cannot be only statements of intent and agreement (or one-way policies); they must be materialized in processes and must have metrics.

view this post on Zulip Jose Costa Teixeira (Nov 16 2019 at 10:45):

Take the GDPR approach (as I read and used it): companies must declare what they are doing with all personal data, and that declaration must include the permission/legal basis (consent or otherwise), the purpose, the other parties involved (and their locations, to check whether they are also GDPR-covered), safeguards against misuse...
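As a rough illustration of what such a declaration could look like in structured form (the field names below are invented for the sketch, not taken from GDPR or any FHIR artifact):

```python
# Sketch only: field names are illustrative, not from GDPR or any standard schema.
# It captures the elements listed above: permission/legal basis, purpose,
# other parties and their locations, and safeguards against misuse.
processing_declaration = {
    "controller": "Example Hospital",
    "data_categories": ["diagnoses", "lab results", "imaging"],
    "legal_basis": "consent",            # or "contract", "legitimate-interest", ...
    "purpose": "care-coordination analytics",
    "recipients": [
        {"name": "Example Analytics Ltd", "location": "EU", "gdpr_covered": True},
    ],
    "safeguards": ["pseudonymisation", "access logging", "5-year retention limit"],
}
```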

view this post on Zulip Dave deBronkart (Nov 16 2019 at 14:13):

Next up is this story, which seems to illustrate the complexity of the situations we're inevitably going to find ourselves in.

Google scrapped the publication of 100,000 chest X-rays due to last-minute privacy problems. The data was going to be shared as part of an AI showcase in 2017.

Again, I post this for awareness of two things:
- the complexity of moving forward on this frontier
- how easy it is for big-name companies to find themselves in headlines, which can amplify consumer security worries.

(Side note: it appears in this case that Google did the right thing, at some cost to itself, which is in stark contrast to Facebook, which truly appears not to give a @#$% about anything or anyone.)

view this post on Zulip John Moehrke (Nov 18 2019 at 16:54):

I would invite any output that is reasonable for HL7 to post. We have included a good deal of guidance and many warnings on our security/privacy pages. We can't keep idiots from being idiots, so we can't force any actions, and we can't force any specific policy. But we can certainly do what we can do; so please, everyone, feel free to suggest realistic things that HL7 can do within the scope of a standards organization.

view this post on Zulip Jose Costa Teixeira (Nov 18 2019 at 17:33):

No, but we can give idiots better tools to do idiot things, and that may be a risk.

view this post on Zulip Jose Costa Teixeira (Nov 18 2019 at 17:34):

I'm proposing we standardise the metadata that explains "Why do I think I'm allowed to use this data?" I think this is a should-have for data exchange. Perhaps not on every resource, but on things like CapabilityStatement.
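As a rough sketch of how that metadata might be carried, assuming a hypothetical extension (the URL and structure below are invented for illustration; FHIR defines no such extension today):

```python
# Hypothetical sketch only: the extension URL and its sub-fields are invented
# and are not part of the FHIR specification.
capability_statement_fragment = {
    "resourceType": "CapabilityStatement",   # only the relevant fragment is shown
    "status": "active",
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/permission-basis",
        "extension": [
            {"url": "basis", "valueCode": "business-associate-agreement"},
            {"url": "purpose", "valueCode": "TREAT"},   # e.g. an HL7 purpose-of-use code
            {"url": "evidence", "valueString": "BAA #1234 between Provider and Vendor"},
        ],
    }],
}
```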

view this post on Zulip Dave deBronkart (Nov 19 2019 at 11:28):

I'm proposing we standardise the metadata that explains "Why do I think I'm allowed to use this data?" I think this is a should-have for data exchange. Perhaps not on every resource, but on things like CapabilityStatement.

This seems like a brilliant idea. It's a cousin of provenance, yes?

It sounds like it would be a strong enabler of the transparency goal. In that way, it would enable complaints if unintended uses were discovered. This would be annoying for people who want to use any data they find, but it could be a real confidence builder that could fuel the kind of trust we want consumers to feel.

Yes?

view this post on Zulip John Moehrke (Nov 20 2019 at 14:48):

Here is the detailed form of what I think @Jose Costa Teixeira is requesting, and what I am helping get created.

Where a Consent is a patient-by-patient thing, there are other cases, such as this one, where bulk data is communicated for a specific reason. That specific reason has allowances and restrictions. The best word the authorization-rules world has is "Policy", which is also the word we use for everything, so it is unsatisfying. These policies are today encapsulated in business agreements (e.g., a HIPAA BAA, or GDPR). These agreements are often human-readable paper documents signed by organizations. That's a fine solution, but we want to do better.

So, we want to create a FHIR Resource (or something) where these policy statements can be made in a computer-processable form: why was the data released, for what period of time, for what can the data be used, how will the data be handled, how must the data be manipulated upon use, what reporting is required upon use, how will the data be returned or destroyed... etc. This Policy would accompany the data so that the communicating parties are clear on the authorizations and obligations.

Thus the Policy could be persisted with the data. Where a data center holds data from many interactions, each data set would be strongly linked to the Policy that controls it. Then each time the data are used, the appropriate Policies would be interrogated as to whether that use is allowed and what further obligations are imposed.

This Policy resource would be very similar to a Consent, but would be usable in cases where Consent is not necessary but where there are still reasons to convey Policy. Indeed Consent might leverage Policy rather than duplicate it.

The problem I have is that the above Policy resource is a very complex 'rules' encoding. And there are already standards for encoding Policy, such as XACML. It worries me that we (FHIR) would attempt to re-invent XACML complexity rather than just use a standard that exists. One solution might be an Implementation Guide that creates a translation layer between FHIR and the XACML language: a vocabulary mechanism for how to reference things in FHIR space inside the XACML rules structure. This is what IHE did for XDS environments in the APPC profile.
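A minimal Python sketch of the interrogation step described above, not a FHIR resource or an XACML engine, just the shape of "check the accompanying Policy before each use and surface its obligations":

```python
from dataclasses import dataclass, field

# Sketch only: illustrates "interrogate the Policy at each use", nothing more.
@dataclass
class Policy:
    permitted_purposes: set                          # e.g. {"TREAT", "HOPERAT"}
    expires: str                                     # ISO date; no use allowed after this
    obligations: list = field(default_factory=list)  # e.g. ["log-access", "report-breach"]

def check_use(policy: Policy, purpose: str, on_date: str):
    """Return (allowed, obligations) for a proposed use of the data."""
    if on_date > policy.expires or purpose not in policy.permitted_purposes:
        return False, []
    return True, list(policy.obligations)

# Example: data released under a BAA for treatment purposes until the end of 2020.
baa_policy = Policy({"TREAT"}, "2020-12-31", ["log-access", "report-breach"])
allowed, obligations = check_use(baa_policy, "TREAT", "2019-11-20")
```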

view this post on Zulip Grahame Grieve (Nov 20 2019 at 15:50):

A vocabulary mechanism for how to reference things in FHIR space inside the XACML rules structure

This is something different?

view this post on Zulip Grahame Grieve (Nov 20 2019 at 15:51):

There's the Provenance approach: a resource based on XACML but made concrete for FHIR and a subset of requirements - this can make implementation orders of magnitude easier?

view this post on Zulip John Moehrke (Nov 20 2019 at 16:26):

There's the Provenance approach: a resource based on XACML but made concrete for FHIR and a subset of requirements - this can make implementation orders of magnitude easier?

I don't understand the question

view this post on Zulip Jose Costa Teixeira (Nov 20 2019 at 17:45):

I don't really want to do much on Policy. I would look at a Permission resource. Take GDPR: it may be what you seem to call a "policy", but GDPR requires that we keep track of our "permission" to use data. So I don't want (personally) to look at Policy but at Permission as an affirmation of a reason. That reason can be a policy or a consent. In both cases that permission is the linking element, and it seems to be missing.
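A rough sketch of Permission as that linking element, pointing from the data to whatever justifies holding it (a policy or a consent); the shape below is invented for illustration, not an actual FHIR resource definition:

```python
# Sketch only: the structure and field names are illustrative, not a defined
# FHIR resource. Permission is the link between data and the reason we hold it.
permission_record = {
    "resourceType": "Permission",                 # hypothetical
    "status": "active",
    "asserter": "Organization/example-hospital",
    "justification": {
        "basis": "business-associate-agreement",      # could equally be "consent"
        "evidence": ["DocumentReference/baa-1234"],    # or a Consent reference
    },
    "data": ["Observation/example-lab-result"],   # what this permission covers
}
```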

view this post on Zulip Peter van Liesdonk (Nov 22 2019 at 12:05):

Of course, the issues in the Google case have everything to do with bad data governance at Google.

FHIR has all kinds of ways to hook into existing policies or permissions, but all of that only works when the company has actually written those policies, assigned responsibilities, designed processes, and implemented/enforced them. A typical case of Courtney's third law here: there are no technical solutions to management problems, but there are management solutions to technical problems.

Though I see this as a huge issue that requires more guidance, I personally don't think FHIR has a role in solving such issues.

