Stream: patient empowerment
Topic: Thursday topic: presentation & discussion w Alissa Knight
Dave deBronkart (Oct 20 2021 at 17:11):
From the agenda:
Security analyst Alissa Knight will present about her report, announced last week, on vulnerabilities she found in FHIR APIs, specifically related to data aggregators.
Approx 15 min presentation, then discussion
Please see the recommended pre-reading (from @Grahame Grieve and @Keith Boone) and report excerpts, in the Confluence agenda.
Ryan Harrison (Oct 21 2021 at 23:17):
Attn: @Andrea Downing @Abbie Watson @Dave deBronkart @Brent Zenobia @Lloyd McKenzie @Keith Boone @Jessica Skopac
During the meeting, @Keith Boone proposed that the Patient Empowerment Working Group write a letter to HL7/ONC, expressing our community's concern over common (OWASP Top 10) vulnerabilities in patient-directed FHIR servers and clients.
I took a stab. 8 recommendations to various actors.
- 2021-10-21-HL7-Patient-Empowerment-WG-meeting-DRAFT-proposed-response-to-Playing-with-FHIR-v1.docx
- 2021-10-21-HL7-Patient-Empowerment-WG-meeting-DRAFT-proposed-response-to-Playing-with-FHIR-v1.odt
- 2021-10-21-HL7-Patient-Empowerment-WG-meeting-DRAFT-proposed-response-to-Playing-with-FHIR-v1.pdf
(ODT and PDF are identical)
I'm intentionally keeping the Attn to the Patient Empowerment WG, because we haven't discussed internally.
Action items
The document needs reviews by...
- Patient / Patient advocate, to make sure the statement is clear and comports with your desire (the recommendations are somewhat technical; the statement isn't).
- Security person, who knows which security scanning and responsible disclosure guidance (from HL7 Security WG?) applies. @Keith Boone ?
- Policy person for the HHS OCR (to) and CMS (cc) recommendations. @Jessica Skopac?
To Co-chairs (@Virginia Lorenzi @Debi Willis @Dave deBronkart @Abbie Watson )
If there isn't something more urgent, would you consider adding this to the agenda for next week? Maybe add as homework?
I do think a response should be timely.
I don't want the WG to be derailed by this. We've already spent two weeks on it, and IMO, the WG's IGs are more important (there are other people working on security; there are many fewer people working on FHIR advance directives).
Dave deBronkart (Oct 22 2021 at 00:42):
cc @John Moehrke
Josh Mandel (Oct 22 2021 at 00:49):
I'd suggest keeping sight of the pattern where HIPAA covered entities are making the decision (without patient input!) to ship patient data over to their business associates, who in turn implement poorly secured access. This is a "worst of all worlds" story where the services are insecure and patients don't have any say about whether their data is exposed as part of the mix, because patients didn't opt in.
There's a "next worst" set of offenses where the apps are just as poorly secured, but at least patients have a say in whether their data gets hoovered up.
Dave deBronkart (Oct 22 2021 at 00:56):
Thank you, @Josh Mandel ! Can you do a diff on that pattern vs what we're discovering now through the Knight report?
Dave deBronkart (Oct 22 2021 at 00:57):
All, the Zoom archive (screen video plus transcript) is now released. (Someone please confirm that this link works.)
https://hl7-org.zoom.us/rec/share/tYItyZVYjVIUsgQAhrLrIDMPLLv0rX2KF7gkUmuS7_1LGyuTUzBXZue8gToXopel.zPXQfzzgKNJT-BgP
Josh Mandel (Oct 22 2021 at 00:58):
I couldn't tell for sure, since the parties aren't named, but from an email exchange with @Alissa Knight at least some of the apps she tested (and found vulnerabilities in) were branded by healthcare providers as "their" apps, which is to say they were offered by or on behalf of HIPAA covered entities, which is to say they're governed by HIPAA.
Josh Mandel (Oct 22 2021 at 00:58):
https://twitter.com/JoshCMandel/status/1449437825548554242 was my quick take
@aneeshchopra @amalec My reading is the same as @amalec. The tragedy of the sound bite IMO would be a community concluding "Patient Application APIs are dangerous" when the truth is more like "covered entities shipping data to incompetent business associates without pt approval is dangerous."
- Josh Mandel (@JoshCMandel)
Dave deBronkart (Oct 22 2021 at 01:06):
Josh Mandel said:
"covered entities shipping data to incompetent business associates without pt approval"
So, now how does "incompetent business associates" compare to "aggregators who have incompetent security (or none)"?
For starters I imagine one big difference, but someone please confirm - is this correct?
- A BA has a business relationship with the covered entity, so the provider (covered entity) has chosen to send them patient data.
- But (by law?) providers (aka covered entities) have to give patient data to anything/anyone that has the patient's credentials.
Is that a correct & meaningful distinction? If not, please fix (or declare my whole concept invalid)
Josh Mandel (Oct 22 2021 at 01:14):
Yeah, that's pretty good. The term "Aggregator" isn't super well defined in this space, but assuming it means roughly "any party that runs a business and collects clinical data from multiple sources"...
-
One pathway for moving data from a provider to an aggregator would be under HIPAA, where provisions like treatment, payment, and operations support this kind of sharing without a patient's authorization as long as there's a Business Associate Agreement in place; this is all governed by HIPAA and Business Associates have obligations just like covered entities (to protect the data, to notify in the case of breaches, etc). The technical means of transfer could be a database export, or an HL7v2 message feed, or FHIR APIs, or anything else.
-
Another pathway for moving data from a provider to an aggregator would be HIPAA's patient right of access, where a patient instructs a healthcare provider to share. This can involve technologies like FHIR APIs for patient access. (Note that not all patient facing apps should be considered "aggregators" -- e.g., using Common Health or Apple Health, you can bring your own data onto your own phone without it being "aggregated" with anyone else's)
When both rules "could" apply, the BAA route essentially "wins" -- if an app is offered by or on behalf of a covered entity, even if the technical route of data exchange is a FHIR patient access API, the app still needs to have a BAA in place and is still governed under HIPAA (see Q5 from this FAQ)
^^ Folks, please correct me if any of this is off. I'm not a lawyer :-)
Dave deBronkart (Oct 22 2021 at 01:21):
Josh Mandel said:
The term "Aggregator" isn't super well defined in this space
FWIW: My intent, and the apparent understanding of others I've discussed this with, is that we're talking about anyone who connects to many providers' FHIR endpoints and pulls out all the data for a given patient, using their credentials. This aggregated data can then be provided to apps, saving the app developers from having to deal with all the different provider endpoints.
Is this what y'all out there think "aggregator" means, or whatever we call it, for this topic?
Josh Mandel (Oct 22 2021 at 01:22):
I think that the apps Alissa evaluated don't necessarily work this way.
Josh Mandel (Oct 22 2021 at 01:22):
Or maybe some of them do, but some don't.
Josh Mandel (Oct 22 2021 at 01:23):
Furthermore...
using their credentials.
Do you mean like "enter your password and we'll use it" (gah! bad! antipattern! but it does happen) or do you mean like "use a SMART on FHIR flow to delegate a subset of access that you choose"?
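(Editor's aside: for concreteness, the delegation flow Josh contrasts with the password antipattern is an OAuth2 authorization-code request carrying only the scopes the patient approves. A minimal sketch; the endpoint URLs, client ID, and scopes below are illustrative, not from any real deployment:)

```python
from urllib.parse import urlencode

def smart_authorize_url(auth_endpoint, client_id, redirect_uri, scopes, state, aud):
    """Build a SMART on FHIR (OAuth2 authorization-code) request URL.

    The patient approves only the listed scopes at the EHR's own login
    page -- the app never sees the patient's password, unlike the
    "enter your password and we'll use it" antipattern.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
        "aud": aud,  # the FHIR server the resulting token will be used against
    }
    return f"{auth_endpoint}?{urlencode(params)}"

# Hypothetical endpoints and client, for illustration only
url = smart_authorize_url(
    "https://ehr.example.org/auth/authorize",
    "my-app-id",
    "https://app.example.org/callback",
    ["launch/patient", "patient/Observation.read"],  # delegate a narrow subset
    "abc123",
    "https://ehr.example.org/fhir",
)
```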
Josh Mandel (Oct 22 2021 at 01:24):
But it'd be really helpful to understand the different routes (in real life) by which patient data came to arrive in the poorly-secured backend systems that Alissa evaluated.
Dave deBronkart (Oct 22 2021 at 01:27):
Josh Mandel said:
Do you mean like "enter your password and we'll use it" ...
I don't know enough to know what I mean. I just have a generic impression that things outside a FHIR endpoint get to suck data out by saying "Yo endpoint: I am here to get Dave's data. Here's his permission."
In any case an aggregator thingie does that to lots of endpoints, e.g. mine at Beth Israel Deaconess and at Dartmouth-Hitchcock and at Nashua Eye Associates and at OCB in Boston.
Josh Mandel (Oct 22 2021 at 01:28):
I expect it's a mix of:
HIPAA-governed aggregator (BAA in place with health system)
- A. Direct sharing from health system to BA
- B. Consumer mediated sharing from health system to BA (via patient credentials)
- C. Consumer mediated sharing from health system to BA (via SMART on FHIR)
FTC-governed aggregator (no BAA in place with health system)
- D. Consumer mediated sharing from health system to non-BA (via patient credentials)
- E. Consumer mediated sharing from health system to non-BA (via SMART on FHIR)
And I expect it's mostly (A), but I don't have any direct visibility into this.
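(Editor's aside: Josh's A-E taxonomy reduces to two yes/no questions plus the transfer mechanism, which a tiny classifier makes explicit. A sketch; the labels follow the list above, and "non-BA direct sharing" raises an error because it is not one of the listed pathways:)

```python
def classify_sharing_pathway(baa_in_place, consumer_mediated, via_smart):
    """Map a provider-to-aggregator data flow onto Josh's A-E taxonomy.

    baa_in_place: True if a Business Associate Agreement exists (HIPAA-governed).
    consumer_mediated: True if the patient initiated/approved the sharing.
    via_smart: True if a SMART on FHIR flow (vs. raw patient credentials) was used.
    """
    if baa_in_place:
        if not consumer_mediated:
            return "A"  # direct health-system-to-BA feed
        return "C" if via_smart else "B"
    # No BAA: FTC-governed aggregator
    if not consumer_mediated:
        raise ValueError("non-BA direct sharing is not a pathway in this taxonomy")
    return "E" if via_smart else "D"
```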
Dave deBronkart (Oct 22 2021 at 01:33):
For sanity and to get anywhere, we may need to stick first to the specific findings in the report. But in parallel, I sure agree it makes sense to figure out IF anyone even TRIED to think this out. For all I know there may be ground rules, or not ... hey @Keith Boone is this stuff well known to the Security WG?
Josh Mandel (Oct 22 2021 at 01:33):
Of all these, (A) is the thing to be most upset about IMHO. And (B), (C) are a close second.
Josh Mandel (Oct 22 2021 at 01:35):
I struggled to understand specifics from the report, but the broad conclusion sure feels true to me: security requires careful work and systems often do a poor job, even on the things that are well understood and should be basic hygiene.
Dave deBronkart (Oct 22 2021 at 01:36):
Josh Mandel said:
Of all these, (A) is the thing to be most upset about IMHO. And (B), (C) are a close second.
Josh, help me out. I need a little more specificity. :-)
"Upset" as in offended, annoyed? Or as in worried, anxious, fearful about risks? Or, the thing to focus most on, because it's a real and current live problem today?
Also, is a covered entity responsible for what a BA does with the data? That is, is the covered entity required to be careful (like a fiduciary responsibility) who they choose as a BA?
Josh Mandel (Oct 22 2021 at 01:37):
I think "offended" best captures my personal take -- the idea that my health data might not only be shared without my awareness, but shared with an aggregator that has such lax security as to expose my data for unauthorized users to see.
Dave deBronkart (Oct 22 2021 at 01:40):
I'll note in passing that patient advocates have for YEARS been pointing this out, and it has pretty much not gotten anyone's attention. Many people (including me) have complained about Knight's style, but she sure has gotten attention on this.
(And with that, I must expire for the night - too little sleep lately for this old man!)
Josh Mandel (Oct 22 2021 at 01:43):
Also, Is a covered entity responsible for what a BA does with the data? That is, is the covered entity required to be careful (like a fiduciary responsibility) who they choose as a BA?
Generally not, but in some specific details yes, and it depends on the specifics of the BAA contract; https://www.hhs.gov/hipaa/for-professionals/faq/236/covered-entity-liable-for-action/index.html lays out the background.
Perhaps the key point is that HIPAA enforcement can be directly against the Business Associate -- e.g., https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/chspsc/index.html ("HIPAA Business Associate Pays $2.3 Million to Settle Breach Affecting Protected Health Information of Over 6 million Individuals"), and if the covered entity has reason to be aware, they may be liable as well.
Lloyd McKenzie (Oct 22 2021 at 03:18):
@Ryan Harrison, in looking at your recommendations, I have some concerns:
- I don't think it's appropriate for a list of the vulnerable organizations to be provided to HL7. HL7 is not an oversight authority and none of the organizations who produced these interfaces necessarily have any obligation to HL7. It's entirely possible that some of them aren't even members. There isn't much we could do with such a list. If anyone is going to receive it, it should be the relevant government agencies - i.e. HHS and FTC. They have the responsibility around enforcement.
- I'd be astounded if Touchstone picked up on any of these issues. Its tests generally focus on "happy-path" and it's not really set up to be able to test things like this. It's theoretically possible that Inferno or the ONC-ATLs might have caught some of this, but I'm not sure they're set up to test this sort of thing right now either. I'd change this recommendation to be that Inferno and the ONC-ATLs should be enhanced to ensure that basic security testing, as well as explicitly asserting adherence to a checklist of best practices (e.g. "None of this software has included hard-coded credentials"), be made part of the certification process.
- I'm kind of hoping that Alissa has already done this. As a white-hat hacker, it's sort of an expectation...
- There was no disclosure here that was relevant to HL7. Yes, the poorly secured interfaces happened to use FHIR APIs, but the FHIR APIs weren't the issue. That said, ensuring that HL7 has a disclosure policy and that it's easy to find and understand is probably a good thing for us to do. That doesn't make sense to include in a letter to government though.
- On board with this, but again, not something for regulators
- I'm not sure that privacy attestations would have helped much here. I'm sure those who would have signed such attestations for the aggregator apps thought they were fully secure and adhered to all policies. Establishing a voluntary testing and certification process and a trusted, verifiable mark for apps that have gone through the process isn't a bad idea
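(Editor's aside: a checklist item like "no hard-coded credentials" is the kind of thing that is mechanically checkable during certification. A minimal sketch of such a scan; the two regexes are illustrative, not a complete secret-detection ruleset:)

```python
import re

# Illustrative patterns only -- real scanners (and real certification
# checklists) use far broader rulesets and entropy checks.
SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
    re.compile(r"(api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
]

def find_hardcoded_credentials(source):
    """Return the 1-based line numbers that look like hard-coded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits
```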
Ryan Harrison (Oct 22 2021 at 04:17):
@Lloyd McKenzie
For (1) 2nd paragraph and (2)-(5), are you making the updates (my preference), or do you want me to?
For (1) 1st paragraph, my intent for an identified list wasn't enforcement (though, a breach notification may be in order for the impacted patients). It was to classify the vulns by "severity." Where severity is how much attestation/testing/vetting the vendor/API received. A high severity example would be a production API that passed ONC-ATL and is self-certified as compliant under a trust framework such as CARIN. A low severity example would be a student project intended for staging, that somehow ended up with prod data. The FHIR community has checks in place; the identification is required so you can pinpoint where those existing checks have fallen short and could be improved.
@Josh Mandel brings in a second aspect of "severity" (which also requires identifying the vendors/APIs). The severity in terms of regulatory oversight.
HIPAA-governed aggregator (BAA in place with health system)
A. Direct sharing from health system to BA
B. Consumer mediated sharing from health system to BA (via patient credentials)
C. Consumer mediated sharing from health system to BA (via SMART on FHIR)
FTC-governed aggregator (no BAA in place with health system)
D. Consumer mediated sharing from health system to non-BA (via patient credentials)
E. Consumer mediated sharing from health system to non-BA (via SMART on FHIR)
I guess Ms. Knight could annotate both the attestation/testing/vetting severity and the regulatory severity, and share the sheet without vendor names. But then, HL7 couldn't replicate.
John Moehrke (Oct 22 2021 at 12:24):
This is nicely constructive, relative to some non-constructive other discussions... However I think there is wild speculation going on here based on trying to read between the lines of the report. The report is what it is; it served the purpose it was written for.
We can certainly discuss what kind of information might be needed to go beyond the report. But this seems well outside the scope of the HL7 organization.
To presume that ONC (and friends) are not doing this work is also highly speculative.
John Moehrke (Oct 22 2021 at 12:24):
Should we sit on our hands? No. But we should not try to divine details that don't exist, we will get only what we want to see.
John Moehrke (Oct 22 2021 at 12:28):
The good news is that the report did indicate that EHR vendors were part of the assessment and no issues were found. We should be very happy that the primary vendors the healthcare IT industry has been focusing its FHIR API attention on have done a good job. We don't know how deeply Alissa tested them. We don't know what the assessment would be "this month". But this is a clear indication that FHIR as an API -- CAN BE IMPLEMENTED SECURELY!
Brent Zenobia (Oct 22 2021 at 15:38):
@Ryan Harrison @Andrea Downing @Abbie Watson @Dave deBronkart @Lloyd McKenzie @Keith Boone @Jessica Skopac
Someone needs to be taking an end-to-end approach to security. At present it seems like we are pursuing an "arms and legs" approach - individual pieces are looking at their own security needs, but no one (that I'm aware of) is looking at the problem end-to-end
Brent Zenobia (Oct 22 2021 at 15:45):
And we should be applying a multiple perspectives approach to the analysis. Consider organizational (e.g. policy) and personal (e.g. patient burden) perspectives as well as just narrow technical considerations. This needs a holistic approach.
Lloyd McKenzie (Oct 22 2021 at 16:49):
@Ryan Harrison Not sure how to edit the file format you're using for authoring.
HL7 doesn't do security testing. We have zero expertise (or capacity) for doing that. If there's a flaw in our specifications or any of the very limited amount of code we distribute (we've had issues with shared XSLTs once), then we absolutely need to fix those. But we have no authority or capability to do anything more than say "we think you should do X". (And at the international/base FHIR level, we have limitations even there, because we can't mandate what needs to be secured, nor how - those decisions are made much closer to implementation and aren't the kind of thing we can mandate across all FHIR implementations worldwide.)
What we can do is start to set expectations that implementation guides all have a section on "Security & Privacy" that documents specific risks and mitigations that are relevant to that IG, including any mandatory security technologies that need to be employed and any constraints on how those are configured and used. Some accelerators are already doing this.
Part of the issue is that standards are, by definition, building blocks used to define overall solutions. They're built that way deliberately to allow re-use and reduce fragility/brittleness. The second problem is that, for most standards, there are few resources available for basic testing, let alone penetration testing. ONC may be able to fund some for US Core, but I don't know we can expect the same for Gravity, mCODE or the other accelerators, much less for the plethora of IGs developed outside the accelerator space.
John Moehrke (Oct 22 2021 at 16:49):
@Brent Zenobia I think I get the spirit of what you say, but I will point out that we do define standards that support, strictly speaking, "end-to-end security". What we don't do, and have no authority to do, is define Access Control Policy. We do use a set of prototypical access control policies as use-case input to the standards we define. This is certainly true of everything you find on the FHIR core security and privacy pages, and the security relevant implementation guides like SMART.
Brent Zenobia (Oct 22 2021 at 17:23):
@John Moehrke FAST FHIR might. If the ONC were to establish a minimum certification for attaching to the national infrastructure, the end-to-end problem could be attacked that way.
John Moehrke (Oct 22 2021 at 17:39):
Possibly FAST could declare policies... However the issues found may not be fully coverable by a policy that even FAST could declare. The actual problem is basic RESTful API security, hence why the Security workgroup and the FHIR core security pages point at general RESTful API security standards bodies and tools, like OWASP Mobile Security https://www.owasp.org/index.php/Mobile_Top_10_2016-Top_10.
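(Editor's aside: the class of basic RESTful API failure most often cited in this context is broken object level authorization, #1 on the OWASP API Security Top 10: a token scoped to one patient successfully reads another patient's resource. A minimal sketch of how such a probe result is judged; real scanners also issue the HTTP requests, which this sketch omits:)

```python
def check_bola(status_code, requested_patient, token_patient):
    """Classify a cross-patient read attempt (OWASP API Security #1, BOLA).

    A request for another patient's resource, made with a token scoped to a
    different patient, should be refused (e.g. 401/403/404). An HTTP 200
    means the server leaked data across patient boundaries.
    """
    if requested_patient == token_patient:
        return "same-patient"  # legitimate access, not a BOLA probe
    return "VULNERABLE" if status_code == 200 else "protected"
```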
Josh Mandel (Oct 22 2021 at 17:42):
Yeah I sure wouldn't tie this to FAST, which isn't an effort with production IGs in real-world use today (unless I'm missing something)
Ryan Harrison (Oct 22 2021 at 17:55):
Lloyd McKenzie said:
Ryan Harrison Not sure how to edit the file format you're using for authoring.
I uploaded a docx version of the file
2021-10-21-HL7-Patient-Empowerment-WG-meeting-DRAFT-proposed-response-to-Playing-with-FHIR-v1.docx
Though, you should be able to edit the ODT (Open Document Format) in Word as well.
Ryan Harrison (Oct 22 2021 at 18:03):
John Moehrke said:
Should we sit on our hands? No. But we should not try to divine details that don't exist, we will get only what we want to see.
What is your recommendation for next steps for..
- HL7 Patient Empowerment
- HL7 Security
Are you saying the Patient Empowerment WG should...
- Send a letter / make a statement, but with modifications to the draft?
- Toss the draft and respond in another way?
- Not respond / other actors "got it"?
We can certainly discuss what kind of information might be needed to go beyond the report. But this seems well outside the scope of the HL7 organization.
This is Recommendation 1 (asks from Ms. Knight). A spreadsheet with the vulns, and either i) the vendor/API (initial proposal), or ii) an annotation of severity (comments). Where severity is i) testing gap severity (e.g. ONC-ATL should have caught), and ii) regulatory severity (vulns in a BA vs vulns in a non-BA aggregator).
Basically, all the vulns are not created equal. But the report doesn't have sufficient information to classify by severity.
Josh Mandel (Oct 22 2021 at 18:07):
testing gap severity (e.g. ONC-ATL should have caught)
I'm confused about this classification; none of the vulnerabilities from the report were in systems subject to ONC's certification and testing requirements, right? So how could authorized testing labs have caught them?
Ryan Harrison (Oct 22 2021 at 18:10):
@Josh Mandel
none of the vulnerabilities from the report were in systems subject to ONC's certification and testing requirements
To @John Moehrke 's point
However I think there is wild speculation going on here based on trying to read-between-the-lines of the report.
That's the known unknown, and why we need a spreadsheet.
The EHRs are subject to certification, and they passed.
Maybe some of the other APIs were subject, but had vulns. I doubt it, but we don't know.
Lloyd McKenzie (Oct 22 2021 at 18:10):
I think that asking ONC to extend their certification process to include some fundamental security testing as well as self-certification of adherence to basic security principles is reasonable - and requiring that certification by business associates as well as EHRs would be useful. Also establishing an optional certification (and labeling process) for patient apps that includes similar requirements would be useful. And of course, reinforcing the message that "The fact some systems have done lousy execution of security should NOT be used as an excuse for slowing or setting further barriers to patient access to data"
Lloyd McKenzie (Oct 22 2021 at 18:12):
I'm pretty sure that ONC's current testing is focused on "can you interoperate" not on "are you secure" - beyond the "can you successfully connect with TLS, OAuth, etc."
Ryan Harrison (Oct 22 2021 at 18:17):
Lloyd McKenzie said:
I think that asking ONC to extend their certification process to include some fundamental security testing as well as self-certification of adherence to basic security principles is reasonable - and requiring that certification by business associates as well as EHRs would be useful.
Sounds like a concrete recommendation to ONC, add it?
Also establishing an optional certification (and labeling process) for patient apps that includes similar requirements would also be useful.
Expand recommendation 6 [^1] to include, or split into new recommendation?
And of course, reinforcing the message that "The fact some systems have done lousy execution of security should NOT be used as an excuse for slowing or setting further barriers to patient access to data"
Add to statement, or this a recommendation to ONC?
[^1]
Recommendation 6: Create guidance for Patient Access API 3rd-party app onboarding and consent screens
Consider including a requirement that Patient Access API Servers visually indicate the “safety” of a 3rd-party application. For example, showing positive badges for CARIN Trust Framework and ONC vetted application, and/or showing negative warnings for apps that have not voluntarily attested.
Lloyd McKenzie (Oct 22 2021 at 18:25):
I would say the "reinforcement" should be in the intro. The letter should be stripped to only what the government can do (not what HL7 can do and not what information might have been nice-to-have from this recent report). What I've listed are the two concrete actions we're looking for from government.
John Moehrke (Oct 22 2021 at 19:03):
Feels a lot like "make work". ONC already knows about the report, and likely far more than this report. I am confident that they don't like patient data being exposed.
I think that the hysteria is not helpful. Actions are helpful.
Many have implemented FHIR API securely!
The security workgroup has always been watching the marketplace for issues that should be emphasized on the FHIR core security pages. You will find many things there that come from past articles that the Security WG and others in the FHIR community knew about, but for which there was not as much drama.
We are planning:
- updating the security core pages. We have found a few instances where we could have mentioned general REST API security more (We said it once, seems we need to say it 2 or 3 times)
- identifying some FHIR specific "abuse cases" to add to any product's "use cases". These would not duplicate the above general REST API security, but rather focus on FHIR specifics such as the _revinclude query parameter
- identifying some common REST API security tools and possibly some hints on how to configure them to test FHIR based REST APIs.
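(Editor's aside on the `_revinclude` abuse case: a search like `GET [base]/Patient?_id=123&_revinclude=Provenance:target` asks the server to also return Provenance resources that point at the matched patient, so access control has to cover the reverse-included resources too, not just the primary search type. A sketch of that server-side filtering step; the tuple representation and scope set are illustrative simplifications of a real FHIR Bundle and SMART scopes:)

```python
def filter_revinclude_bundle(entries, allowed_types):
    """Drop _revinclude'd resources the caller's scopes don't cover.

    entries: list of (resource_type, search_mode) tuples, where search_mode
    is "match" for primary results and "include" for _revinclude'd ones.
    allowed_types: set of resource types the caller is authorized to read
    beyond the primary match.
    """
    kept = []
    for rtype, mode in entries:
        # Primary matches were already authorized by the main search;
        # reverse-included resources need their own scope check.
        if mode == "match" or rtype in allowed_types:
            kept.append((rtype, mode))
    return kept
```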
Andrea Downing (Oct 22 2021 at 19:15):
@Ryan Harrison just now catching up in Zulip...Thank you for fantastic feedback in our meeting. I will be the reviewer as patient advocate and can get this back by Monday.
Josh Mandel (Oct 22 2021 at 19:27):
Feels a lot like "make work". ONC already knows about the report, and likely far more than this report. I am confident that they don't like patient data being exposed.
@John Moehrke I agree 100%
Josh Mandel (Oct 22 2021 at 19:28):
(I mean, if folks are excited to work on this, I'd say go for it, but it's not an obvious "win" from my perspective.)
John Moehrke (Oct 22 2021 at 19:50):
Patient Empowerment WG should be mad that their Patient data are in Aggregators. Did you know that? And not just Aggregators, but Aggregators that have deployed that data behind FHIR APIs that were very badly flawed. This is my complaint to ONC as a Patient.
Ryan Harrison (Oct 22 2021 at 20:57):
@John Moehrke @Lloyd McKenzie @Andrea Downing @Josh Mandel
What I'm hearing is...
- Letter from Patient Empowerment WG to ONC
  - Stating...
    - We're ~mad~ deeply concerned by these vulns (John's blurb above)
    - We do not want this used as an excuse to block patient access or otherwise undermine the information blocking rules (as Ms. Knight calls for in some of her recommendations) [^1, ^2]
    - All vulns aren't created equal (Josh's breakdown)
  - Recommendations for, to quote Lloyd, "what the government can do". Definitely ONC. Probably HHS OCR. Maybe CMS and FTC.
- Discuss and approve (or not) at 28 Oct WG meeting. Done and move on.
I'd include the Recommendation to Ms. Knight to publish a spreadsheet (not identified) so we can aggregate the data by severity (gaps in testing, and regulatory severity), but I don't see support for this.
-
Footnotes
[^1]
Report > Recommendations: Clarify that the Security Exception to the Information Blocking Rule allows EHR vendors to require specific controls be implemented by any system that connects to their APIs.
My 2-cents: Reasonable intent, but too easily abused. This would allow EHR vendors carte blanche to disallow connections based on any (potentially arbitrary) control they impose. This would end-run around 5 years of ONC work crafting the information blocking rule.
[^2]
Report > Recommendations: Government mandated exposure of FHIR services creates a "killing field" of FHIR APIs when used by unaffiliated patient-facing app developers whom the EHR vendors have no influence over selecting.
My 2-cents: Let's replace "creates a killing field" with the less loaded term "enables an ecosystem," and you'll find the entire point of the ONC Information Blocking provisions. EHR vendors (ONC rule) and Payers (CMS rule) should not be able to unilaterally block access to a patient's information.
Josh Mandel (Oct 22 2021 at 20:59):
Love it @Ryan Harrison
John Moehrke (Oct 22 2021 at 21:25):
I don't think it is right for us to ask Alissa to divulge information that she was asked to not divulge, and for which responsible-disclosure would have her continue to keep in confidence.
John Moehrke (Oct 22 2021 at 21:26):
I think the communication may be initiated by Patient Empowerment, but I think the HL7 process would have the Policy Advisory Committee (PAC) actually do the communicating.
Josh Mandel (Oct 22 2021 at 21:27):
John Moehrke: I don't think it is right for us to ask Alissa to divulge information that she was asked to not divulge, and for which responsible-disclosure would have her continue to keep in confidence.
I don't see where Ryan is asking for this.
Andrea Downing (Oct 22 2021 at 21:29):
Josh Mandel said:
(I mean, if folks are excited to work on this, I'd say go for it, but it's not an obvious "win" from my perspective.)
It would help to understand what would be the most effective use of time. Right now this is 10ish recommendations to 8 different regulatory bodies, and I'm wondering how to prioritize where the most obvious wins would be.
Josh Mandel (Oct 22 2021 at 21:40):
I think the letter to ONC with Ryan's points (even leaving the recommendations aside) is roughly the efficient frontier.
John Moehrke (Oct 22 2021 at 21:43):
Josh Mandel said:
John Moehrke: I don't think it is right for us to ask Alissa to divulge information that she was asked to not divulge, and for which responsible-disclosure would have her continue to keep in confidence.
I don't see where Ryan is asking for this.
I read this as asking Alissa to do things beyond what she is authorized to do
I'd include the Recommendation to Ms. Knight to publish a spreadsheet (not identified) so we can aggregate the data by severity (gaps in testing, and regulatory severity), but I don't see support for this.
Josh Mandel (Oct 22 2021 at 21:53):
but I don't see support for this.
Josh Mandel (Oct 22 2021 at 21:53):
(Also: I wouldn't expect a list of vulnerability types / metadata / org types to violate any terms of anything.)
Lloyd McKenzie (Oct 22 2021 at 22:21):
A desire for Alissa to do anything has no business in a letter to Government. And, frankly, we're not in a position to do anything with information that Alissa might provide. It's not HL7's role. Someone who has a relationship with her might want to verify that she's informed the organizations with the vulnerabilities of their vulnerabilities and suggest that passing on what she feels she can to the relevant regulatory authorities would be helpful. However, in the end that's her call and will be influenced by what her contract allows her to do. HL7 shouldn't try to be in the middle.
Ryan Harrison (Oct 23 2021 at 00:35):
@John Moehrke @Josh Lamb @Lloyd McKenzie @Andrea Downing
Let's try this again; we're making progress...
Letter from Patient Empowerment WG to
~~ONC~~ PAC
HL7 policy dictates the letter is to PAC, not ONC. (Thanks @Dave deBronkart )
Stating...
- We're ~~mad~~ deeply concerned by these vulns (John's blurb above)
- We do not want this used as an excuse to block patient access or otherwise undermine the information blocking rules (as Ms. Knight calls for in some of her recommendations)
- All vulns aren't created equal (Josh's breakdown)
There seems to be consensus on the statement outline
Decision: Include recommendations to the govt or not
- Recommendations for, to quote Lloyd, "what the government can do". Definitely ONC. Probably HHS OCR. Maybe CMS and FTC.
Did I get this right folks?
- @Ryan Harrison Recommendations
- @Andrea Downing Recommendations, but fewer
- @Josh Mandel Indifferent.
- @Lloyd McKenzie Recommendations, and has proposed at least two specific recommendations to ONC
Dave deBronkart (Oct 23 2021 at 00:41):
@Ryan Harrison the only thing where I know for sure you're off base is that no WG writes to the government. It's "law" in HL7 that policy communications come solely from PAC (policy action committee), to ensure squeaky-clean rule compliance, so Pt Empowerment or any other group sends thoughts to PAC, who does the massaging and sending.
I'm sure more senior HL7 people will correct me if that's wrong.
Ryan Harrison (Oct 23 2021 at 00:42):
@Dave deBronkart Got it. Updated post.
Andrea Downing (Oct 23 2021 at 00:51):
Ryan Harrison said:
- @Andrea Downing Recommendations, but fewer
Hi @Ryan Harrison ! I emailed you a doc with specific redlines synthesizing some of the above, along with a suggested recommendation to the FTC.
Ryan Harrison (Oct 23 2021 at 00:56):
@Andrea Downing Yep, I know.
I'm in the document now.
Update: I worked through all your redlines and incorporated the comments from the group. I think the document is ready for v2 release to the group.
Was paraphrasing stances for the group. Was my paraphrase inaccurate?
Peter Jordan (Oct 23 2021 at 03:20):
Dave deBronkart said:
Ryan Harrison the only thing where I know for sure you're off base is that no WG writes to the government. It's "law" in HL7 that policy communications come solely from PAC (policy action committee), to ensure squeaky-clean rule compliance, so Pt Empowerment or any other group sends thoughts to PAC, who does the massaging and sending.
I'm sure more senior HL7 people will correct me if that's wrong.
@Dave deBronkart I'm afraid that's incorrect. The A in PAC stands for 'Advisory', and that describes its role. The only people authorized to represent HL7 International to other organizations or entities are the CEO, CTO (@Wayne Kubick), and Board Chair, or anyone authorized by them to do so. This is stated in Section 8.04 of the GOM (Representing HL7 International).
Grahame Grieve (Oct 23 2021 at 03:22):
right. but in this case, the CEO and CTO have asked the PAC to draft such a letter
Peter Jordan (Oct 23 2021 at 03:23):
Sure - but who will actually sign that letter?
Grahame Grieve (Oct 23 2021 at 03:24):
CEO I expect. CTO remit covers technical issues, and this isn't one of them. So I would expect Chuck, but I don't think we've discussed it explicitly
Grahame Grieve (Oct 23 2021 at 03:26):
Walter might co-sign this one, I suppose.
Dave deBronkart (Oct 23 2021 at 04:14):
@Peter Jordan thank you for honing my impression! Apparently I was right in that individual WGs do NOT write directly - they write to PAC - but beyond that I was all foggy.
Ryan Harrison (Oct 23 2021 at 05:31):
@Andrea Downing @Lloyd McKenzie @Josh Mandel @Brent Zenobia @Dave deBronkart @John Moehrke
v2 of the letter. There are comments in the docx for the suggestions that Josh, Brent, Lloyd, et al. contributed. Do double-check that your intent is captured.
- 2021-10-22-HL7-Patient-Empowerment-WG-meeting-DRAFT-proposed-response-to-Playing-with-FHIR-v2.docx
- 2021-10-22-HL7-Patient-Empowerment-WG-meeting-DRAFT-proposed-response-to-Playing-with-FHIR-v2.pdf
Virginia Lorenzi (Oct 25 2021 at 02:47):
I agree with @Josh Mandel. From reading the report, it appears that at least some of the aggregators were used by providers who provided their patients with apps to access and manage their data. They are thus covered by HIPAA, so the hacking would represent a HIPAA breach.
Virginia Lorenzi (Oct 25 2021 at 02:53):
@Josh Mandel As a patient wanting to use an app that I download as my PHR, using the individual right of access and the new SMART on FHIR APIs, I would want to know that my data was safe. Isn't that D.? Isn't this one of the goals of having a "nutrition label" telling someone downloading their data about the app (which could also have APIs)? This is where I thought the CARIN Code of Conduct could help, and is why I have been so interested in it. Also, more guidance for app and API developers (per Alissa's recommendations to app and API developers).
Virginia Lorenzi (Oct 25 2021 at 03:31):
@Ryan Howells I thought the CARIN Alliance was working on the concept of app attestation (attestations to the CARIN Code of Conduct, plus a gallery of those that attested to it).
Virginia Lorenzi (Oct 25 2021 at 03:39):
Actually, all policy communication goes through HL7 leadership, and is first recommended by the Policy Advisory Committee (PAC).
Paul Church (Oct 25 2021 at 03:42):
I don't really see the CARIN Code of Conduct as a solution. If users learn to rely on attestation to the code of conduct, then an incompetent app developer is likely to attest to the code of conduct (in good faith, as their security issues are due to incompetence). The question then is who will test their security, who will pay for that testing across the many apps in the market, and what are the consequences when it is shown to be insecure?
Grahame Grieve (Oct 25 2021 at 04:34):
the code of conduct is part of a solution, but it needs to be buttressed by testing
Virginia Lorenzi (Oct 25 2021 at 05:54):
Some info on ONC certification:
The ONC Certification requirements are determined by the regulatory process. The most recent ONC Certification Rule requires EHR vendors to upgrade their products in 2022, and we expect CMS to require adoption by many providers in 2023. This is the 2015 Edition Cures Update certification. It actually includes enhanced requirements for APIs: https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#test_procedure I think that perhaps some of what is covered in the new certification might actually help with some of these issues, but I am not sure.
To upgrade the certification requirements beyond this is possible, but it would require the full rulemaking process, EHR compliance, and provider implementation - this takes a lot of effort and a lot of time (create and release a proposed rule, get feedback, review feedback, issue a final rule, give vendors time to update software and get certified, then provider adoption).
Also, there is NO requirement for an aggregator or any HIT vendor product to be ONC certified. The EHRs tested were all certified, and none of their APIs had a problem. But many HIT products used by providers and by patients are not ONC certified. My guess is the ones that were hacked were not certified. The only motivation for a vendor to certify a product is so that providers are able to comply with a rule that requires they use a certified EHR (Promoting Interoperability, MIPS, etc.).
Josh Mandel (Oct 25 2021 at 14:32):
Paul Church said:
If users learn to rely on attestation to the code of conduct, then an incompetent app developer is likely to attest to the code of conduct (in good faith, as their security issues are due to incompetence). The question then is who will test their security, who will pay for that testing across the many apps in the market, and what are the consequences when it is shown to be insecure?
It's a solution to the "is there enforcement" problem (i.e., clear terms are then enforceable, meaning developers are at financial risk) more than it is to the "are there security bugs" problem. So it's policy that at least starts to line up incentives, because if there's enforcement there's (yet another) direct motivation to get the security right.
Lloyd McKenzie (Oct 25 2021 at 14:35):
Even if there isn't active enforcement, if there's well-advertised, easily available/useable testing tools, most will use them. But I don't think we have those.
Lloyd McKenzie (Oct 25 2021 at 14:35):
Not well-advertised, at any rate...
John Moehrke (Oct 25 2021 at 14:43):
I would also advocate for patient access to a full Access Report, not just the Accounting of Disclosures. This Access Report needs to be available from EVERYONE who holds data with the patient as the subject. The FHIR AuditEvent resource can support this: when security/privacy-relevant events are recorded as AuditEvent resources with a .entity indicating the affected subject (the Patient), a trusted service can appropriately filter out those relevant to a given patient. This is the foundation of the IHE ATNA audit logging capability; all defined IHE interoperability transactions define how the audit log "should" be filled out, including identifying the patient whenever possible. This is also what I elaborate on in my BasicAudit IG.
Also see my blog article on Access Log -- https://healthcaresecprivacy.blogspot.com/2019/06/patient-engagement-access-log.html
My point in this topic is that abuses of an API will still show up in an AuditEvent log. These should be detected internally during regular audits of the audit logs, with remediation started for failures. But these could also benefit from patients asking about accesses to their data that they don't think are justified or authorized. This is in addition to the benefit to a patient of the Transparency of how their data are used - Privacy Principles.
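The filtering step described above - a trusted service narrowing AuditEvent resources down to those concerning one patient - can be sketched roughly as follows. This assumes the events have already been fetched as plain FHIR R4 JSON dicts (the function name and sample data are invented; no real audit service is contacted):

```python
# Minimal sketch: filter FHIR R4 AuditEvent resources down to those whose
# .entity references a given patient, as a first step toward a
# patient-facing Access Report. Events are assumed to be plain dicts
# parsed from FHIR JSON; "events_for_patient" is a hypothetical helper.

def events_for_patient(audit_events, patient_id):
    """Return events where any entity.what references Patient/<patient_id>."""
    target = f"Patient/{patient_id}"
    matches = []
    for event in audit_events:
        for entity in event.get("entity", []):
            # In FHIR R4, AuditEvent.entity.what is a Reference.
            if entity.get("what", {}).get("reference") == target:
                matches.append(event)
                break
    return matches

if __name__ == "__main__":
    sample = [
        {"resourceType": "AuditEvent", "recorded": "2021-10-25T14:00:00Z",
         "entity": [{"what": {"reference": "Patient/123"}}]},
        {"resourceType": "AuditEvent", "recorded": "2021-10-25T15:00:00Z",
         "entity": [{"what": {"reference": "Patient/999"}}]},
    ]
    print(len(events_for_patient(sample, "123")))  # prints 1
```

The hard part, per the discussion that follows, is not this mechanical filter but presenting the surviving events with enough context that a patient can tell "legitimate" from "inappropriate" access.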
John Moehrke (Oct 25 2021 at 14:52):
A really stupid idea came to mind over the weekend... is it possible that Alissa did not find Security failures? That she just uncovered the system working exactly as it was intended to, exactly as their Policies are defined? Then this is not a technical failure. This is a case where we look at this Policy outcome and don't quite understand how anyone could have THAT KIND OF POLICY on medical data... This might simply be a Policy that we collectively are shocked by.
Lloyd McKenzie (Oct 25 2021 at 14:59):
The issue with patient access to access logs is that there will be a whole lot of individuals and organizations with legitimate reasons to access the data, but the patient will have no clue who they are or why they have access - which will lead to a lot of churn. I'm not saying it shouldn't happen, I'm saying that doing it in a way that conveys enough context that an outsider can distinguish "legitimate" from "inappropriate" is super hard.
John Moehrke (Oct 25 2021 at 15:12):
I agree that will be the initial setting... BUT, that is a solvable problem. Ignoring Transparency is not a good solution. The automatic exceptions baked into Accounting of Disclosures are so vast that there is nothing that actually qualifies to go into an Accounting of Disclosures log.
John Moehrke (Oct 25 2021 at 15:13):
quite possibly it would be beneficial to EVERYONE, including the CEs, for the Access Report to drive these kind of discussions. Those discussions could then drive more accurate Privacy Policies.
John Moehrke (Oct 25 2021 at 15:15):
Step one is to record Security and Privacy relevant events... Step two is to figure out how that informs a useful Access Log report. Without step one, step two can't happen. Without step one, DevSecOps can't detect when bad activity is happening.
Lloyd McKenzie (Oct 25 2021 at 18:40):
"that is a solveable problem" - how? If exposure results in noise, then that's not beneficial. How do we enable exposure while ensuring that - from day 1 - noise is minimal. (Noise = issues raised because of a perceived problem that isn't really a problem.)
Andrea Downing (Oct 25 2021 at 19:14):
@Lloyd McKenzie and @John Moehrke I'm getting input from the security community that there needs to be separation between disclosure in this ecosystem and policy advocacy. The two go hand in hand, but it's complicated. Good wisdom here from Katie Moussouris: https://csrc.nist.gov/CSRC/media/Presentations/industry-bug-bounty-implementations-lessons/images-media/Industry%20Bug%20Bounty%20Implementations%20Lessons.pdf
Dave deBronkart (Oct 25 2021 at 21:04):
Grahame Grieve said:
the code of conduct is part of a solution, but it needs to be buttressed by testing
I agree, of course. And @Paul Church asked the "who will pay for that testing?" question. I don't have a clue - I'm just bumping the question.
Lloyd McKenzie (Oct 25 2021 at 21:47):
Part of the answer to that depends on how much testing gets done and how expensive it is. The more expensive it is, the fewer apps can afford to do it on their own - particularly those that are either free or rely on advertising or another not-so-ideal method of self-funding.
Last updated: Apr 12 2022 at 19:14 UTC