FHIR Chat · Feb report on health app vulnerabilities · Security and Privacy

Stream: Security and Privacy

Topic: Feb report on health app vulnerabilities


Dave deBronkart (May 25 2021 at 01:17):

Hi folks - I know nothing about security issues except in the most generic terms, but:

I just ran into this February report, which, if valid, seems pretty appalling. https://www.beckershospitalreview.com/cybersecurity/30-popular-mobile-health-apps-vulnerable-to-cyberattacks-phi-exposure.html

Holy crap, hard-coding security keys?? And things like that. And a vulnerability where a hacker can just replay a FaceID session to reactivate the authentication and play around? Etc etc.

Or am I falling prey to a bogus press release? I know the project was funded by Approov, who makes a product that claims to remove the vulnerabilities; that's a separate question from: are these vulnerabilities truly there?

John Moehrke (May 25 2021 at 10:14):

I am not surprised at all. In fact, most SMART-on-FHIR apps likely have their client_id password hard-coded. Security is hard, idiots are powerful.

Cooper Thompson (May 25 2021 at 13:43):

I agree with John - I feel like this is pretty much expected and normal in software. And is not a healthcare specific problem. As long as there are mortal humans building things, the outcomes reported in that article will remain true. Giving more parties access to patient data was always a decision that involved pros and cons. I think we both agree that the pros far outweigh the cons, but you've discovered one of the cons: the more parties that have patient data, the more opportunities there are for that data to be stolen and misused by bad actors.

John Moehrke (May 25 2021 at 18:12):

which is one of the reasons given by healthcare providers for data-blocking... The patient could not be trusted with protecting the data, therefore they should not be given a copy...

Dave deBronkart (May 25 2021 at 18:38):

Holy crap.

Dave deBronkart (May 25 2021 at 18:40):

@Josh Mandel what do you make of this?? Isn't it a bit nuts for the norm to be this weak??

Dave deBronkart (May 25 2021 at 18:41):

Cooper Thompson said:

As long as there are mortal humans building things, the outcomes reported in that article will remain true.

But holy crap, isn't that why there are laws and regulations (and standards to adhere to)??

Dave deBronkart (May 25 2021 at 18:42):

Honestly I'm having a hard time keeping my vocabulary dignified. Anyone who wants to be in the business of gratuitously offering to move patient data around had damn well better be responsible about what happens while it's in their custody!

Andrea Downing (May 25 2021 at 18:46):

Hi there, joining and thought I'd introduce myself. I'm Andrea, a BRCA Community advocate, security researcher, and cofounder of The Light Collective. (lightcollective.org)

Andrea Downing (May 25 2021 at 18:46):

To be clear, the problem here isn't the patient being "trusted with their data." It's basic security standards for FHIR APIs. I'm currently on the ONC Steering Committee for FHIR Factories and working to get this research into the hands of the right stakeholders at ONC. FHIR vulnerabilities are fixable problems, and pretty basic when developing secure APIs. For example, BOLA is a very common and fixable vulnerability found in 100% of the FHIR APIs tested in this attached report. I wouldn't expect patients to know the first thing about this. Rather, I'd expect Health IT professionals to know how to secure their FHIR APIs. This is all new and we'll get there ( : FHIR-hacking-mhealth-apps-with-apis.pdf

Dave deBronkart (May 25 2021 at 18:47):

Andrea Downing said:

Rather, I'd expect Health IT professionals to know how to secure their FHIR APIs.

+1000

Dave deBronkart (May 25 2021 at 18:52):

For those who don't know BOLA, it's basically this: In the example in my post a year ago for FHIR total-newbies, once you're in, just change the yellow highlighted stuff in the URL, and presto, you get stuff the patient may not have meant to release.
image.png
Is this okay with us?? I'd think not.
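For readers who want the pattern spelled out, here's a minimal sketch, in Python with made-up record and function names, of the object-level check whose absence is BOLA: the vulnerable handler returns whatever ID the caller puts in the URL, while the fixed one verifies the requester is authorized for that specific resource.

```python
# Sketch of a BOLA (Broken Object Level Authorization) flaw and its fix.
# PATIENT_RECORDS and the function names are illustrative, not a real API.

PATIENT_RECORDS = {
    "Patient/123": {"owner": "alice", "note": "hba1c results"},
    "Patient/456": {"owner": "bob", "note": "lipid panel"},
}

def get_record_vulnerable(resource_id: str) -> dict:
    """BOLA: returns whatever ID the caller asks for -- editing the ID
    in the request URL is enough to read another patient's data."""
    return PATIENT_RECORDS[resource_id]

def get_record_checked(resource_id: str, requesting_user: str) -> dict:
    """Fix: an object-level check -- the server verifies the authenticated
    user is authorized for this specific resource before returning it."""
    record = PATIENT_RECORDS[resource_id]
    if record["owner"] != requesting_user:
        raise PermissionError("403: not authorized for this resource")
    return record
```

The point is that authenticating the caller is not enough; the authorization decision has to be repeated per object, on the server, for every request.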

Lloyd McKenzie (May 25 2021 at 18:58):

That's perfectly fine - if you're authorized to look at multiple patients and multiple types of records - which may well be the case for providers, and with constraints, even for patients (e.g. a patient might be authorized to access records of a minor child, etc.). However, the system should absolutely not honor a query you don't have the right to execute. It should assume everyone knows how to formulate a valid query.

Paul Church (May 25 2021 at 20:28):

For BOLA, that sounds like it's talking about servers, not clients? I think the ecosystem of servers is a different problem than the ecosystem of client apps. Servers are not something chosen by the patient, so there should be more accountability.

Regardless, the statement "Out of the API endpoints tested, 100% were vulnerable" is difficult to believe. It would indicate that no one enforces access control successfully.

Josh Mandel (May 25 2021 at 20:35):

I'm not seeing what the issue is in the inset image above; a client is authorized with certain scopes of access up-front; the client can then issue many potentially different queries (e.g., with different query parameters), and it's the server's job to ensure the client only sees data it was authorized to see. Asking about different LOINC codes in different queries is generally fine, when this is consistent with the authorized scope of access.
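The server-side responsibility Josh describes can be sketched concretely. This is a simplified model, assuming SMART scopes of the shape `context/ResourceType.permission` (the real SMART App Launch scope grammar has more nuance): the server checks every query against the granted scopes and against the patient id bound into the access token, so changing a query parameter cannot widen access.

```python
# Minimal sketch of server-side SMART-on-FHIR scope enforcement.
# Scope parsing is deliberately simplified for illustration.

def parse_scope(scope: str):
    """Split a scope like 'patient/Observation.read' into its parts."""
    context, rest = scope.split("/")
    resource_type, permission = rest.split(".")
    return context, resource_type, permission

def query_allowed(granted_scopes, resource_type, patient_in_token, patient_in_query):
    """The server, not the client, decides: the query's resource type must be
    covered by a granted read scope, and patient-context scopes are pinned to
    the patient id baked into the access token."""
    for scope in granted_scopes:
        context, scoped_type, permission = parse_scope(scope)
        if permission not in ("read", "*"):
            continue
        if scoped_type not in (resource_type, "*"):
            continue
        if context == "patient" and patient_in_query != patient_in_token:
            continue  # token is bound to one patient; other ids are refused
        return True
    return False
```

Under this model, varying LOINC codes across queries is harmless, while swapping in another patient's id fails the check regardless of what the client sends.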

Andrea Downing (May 25 2021 at 20:41):

Josh Mandel said:

I'm not seeing what the issue is in the inset image above; a client is authorized with certain scopes of access up-front; the client can then issue many potentially different queries (e.g., with different query parameters), and it's the server's job to ensure the client only sees data it was authorized to see. Asking about different LOINC codes in different queries is generally fine, when this is consistent with the authorized scope of access.

Hey Josh et al.: Please read the full report I linked to summarizing Alissa Knight's research: FHIR-hacking-mhealth-apps-with-apis.pdf

Andrea Downing (May 25 2021 at 20:42):

Here is the link again:
FHIR-hacking-mhealth-apps-with-apis.pdf

Andrea Downing (May 25 2021 at 20:46):

To be clear, I'm not talking about queries where access is authorized. Alissa was able to look at entire record sets of patients admitted to a hospital. Here is one example of many. I'd be happy to connect anyone w/ her or set up a call if there are questions. My takeaway: there are some pretty basic things that can be done to harden the security of FHIR APIs, and standards are lacking. Screen-Shot-2021-05-25-at-1.41.13-PM.png

Andrea Downing (May 25 2021 at 20:54):

Right now I'm working to get Alissa connected w/ the right stakeholders - for example, a call later with some of the good folks at H-ISAC and Bio-ISAC, and a meeting w/ ONC stakeholders next week. So if it helps, I'd be happy to set up a call with her @Josh Mandel so you can speak with Alissa directly and ask questions about the security research she is coordinating. Also, I know I'm new here, so very nice to meet everyone ( :

Josh Mandel (May 25 2021 at 21:00):

Thanks @Andrea Downing! I'd be very happy to meet with Alissa -- from the description above, this sounds like a critical vulnerability. I know how challenging these can be to report and track (e.g., this blog post documents my experience reporting across multiple vendors in 2014)

Paul Church (May 25 2021 at 21:09):

Ok, I read through the report more carefully and the vulnerable APIs aren't the EHRs (which is what gave me pause about "100% vulnerable", I'm pretty sure some of the EHRs have successfully implemented access control) - it's the server backends of the mHealth apps.

Josh Mandel (May 25 2021 at 21:27):

I just looked at the screenshot stating "BOLA vulnerability found allowing me to see all patients admitted into the hospital". I know I should be reading the article but haven't gotten around to it yet; if it wasn't an EHR vulnerability, what system was storing/leaking these data?

Josh Mandel (May 25 2021 at 21:29):

I can see from the screenshots though that this isn't a FHIR API. I don't recognize what kind of API this is.

Paul Church (May 25 2021 at 21:36):

It's the mobile app's server backend.

Paul Church (May 25 2021 at 21:36):

So the scope of what you can compromise is presumably people who have their data in that app, not "the hospital".

Josh Mandel (May 25 2021 at 21:38):

OK, that's a misleading description unless the mobile app is "the mobile app for all clinicians at this hospital"?

Josh Mandel (May 25 2021 at 21:40):

In any case, I agree with John's first point (security is hard); these notes aren't FHIR-specific though and from what I can tell none of the findings involve FHIR.

Grahame Grieve (May 25 2021 at 23:17):

I'm not seeing, from the document, where there were documented access problems in real clinical systems using Smart on FHIR here. BOLA would imply some predictability around the random numbers in the access tokens. I haven't worked with any implementation that wasn't using UUIDs for those (I haven't worked that many real ones...)

Andrea Downing (May 26 2021 at 00:33):

Hi @Josh Mandel, to be clear, some of the vulnerabilities found do directly impact the hospital. Not all of it is published in this report; more will be disclosed in June. How about we take this conversation to DM and coordinate a call w/ Alissa? My goal is to figure out who the coordinating body for this would be (ONC?) so there are some basic security standards / patches before the July deadline. Thx!

Josh Mandel (May 26 2021 at 03:53):

Happy to chat!

Venu Gopal (May 28 2021 at 07:32):

Josh Mandel said:

Thanks Andrea Downing! I'd be very happy to meet with Alissa -- from the description above, this sounds like a critical vulnerability. I know how challenging these can be to report and track (e.g., this blog post documents my experience reporting across multiple vendors in 2014)

@Josh Mandel While there may be a larger discussion going on about the vulnerabilities, I want to take this opportunity to seek guidance on what FHIR or SMART says about securing documents/image URLs like you stated in your post <img src="http://hack.me/leaked-from-image.png"></img>. The custom scopes are at the resource level, and the resource endpoints are protected by them. But there can be a case where even a server leaks these URIs/URLs, leaving them vulnerable and making the origin of the leak hard to trace. Since these are free-form URLs, they don't follow a resource model. How can they be protected in a shared model like FHIR?

Josh Mandel (May 28 2021 at 14:44):

It's important to consider specific threats here. Are you worried about clinical data leaking inadvertently from a document to a third party? Intentional tracking by the document's author ("phone home")? EHR session compromise when a maliciously authored document is displayed?

Venu Gopal (May 31 2021 at 13:08):

@Josh Mandel I am asking in general. Take a look at the Attachment data type http://hl7.org/fhir/datatypes.html#Attachment or a Reference (Any) http://hl7.org/fhir/references.html#Reference that could be used. The attachment could be anything: the photo of a patient or practitioner, or a PDF from a previous encounter/episode of care/medication that is attached to an Appointment. These have URLs/URIs that can be free-form. Assuming these are passed along with 'a Resource' that is exchanged, it could be between two standalone apps that are out of network. The receiving server could then leak a URL/URI. What is a recommended mechanism to safeguard these URLs?

Josh Mandel (May 31 2021 at 14:48):

I'd recommend starting by listing out some specific threats: who is doing what to whom, and who should be in the position to prevent it?

Venu Gopal (Jun 01 2021 at 05:43):

Yes, perhaps ask the requester to present a token and do an introspection, along with rules based on role and/or consent? Or make the requester create a login account with the sender and force login before providing access. How would a SMART app in general access these types of links?
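Venu's introspection idea can be sketched roughly. This is a minimal illustration, assuming OAuth2 token introspection in the style of RFC 7662; the `introspect` stub stands in for a POST to an authorization server's introspection endpoint, and the claim names and scope string are illustrative, not from any real deployment.

```python
# Sketch: gating an attachment URL behind OAuth2 token introspection.
# introspect() is a stand-in for calling a real introspection endpoint.

def introspect(token: str) -> dict:
    """Stand-in for POSTing the token to the authorization server.
    A real response includes at least {"active": bool} plus claims."""
    known = {
        "good-token": {"active": True, "scope": "patient/Binary.read", "patient": "123"},
    }
    return known.get(token, {"active": False})

def serve_attachment(token: str, attachment_patient_id: str) -> str:
    """Release the attachment only if the token is active, carries a scope
    covering the read, and is bound to the patient the file belongs to."""
    claims = introspect(token)
    if not claims.get("active"):
        raise PermissionError("401: inactive or unknown token")
    if "Binary.read" not in claims.get("scope", ""):
        raise PermissionError("403: missing scope")
    if claims.get("patient") != attachment_patient_id:
        raise PermissionError("403: token not bound to this patient")
    return "<attachment bytes>"
```

The design choice here is that a leaked URL alone is useless: every dereference still has to present a live token that the attachment server verifies against the authorization server.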

Josh Mandel (Jun 01 2021 at 15:20):

In the general case, clinical documents are part of a record and can be shared. Documents can include links. That's not inherently a problem. If dereferencing a link would unintentionally leak information, then document authors should avoid creating links like that. But in general, document receivers should also be cautious about when/where to dereference a link. For an analogy: in GMail, I have "do not show images" enabled by default, which prevents a lot of tracking; email authors know this is a common setting, so they directly embed most visually important images to ensure they can work locally.

Venu Gopal (Jun 01 2021 at 15:31):

Yeah, I am okay with handling links at clients, because most likely the UI controls the rendering, prevents downloads, stops screen grabs, avoids webviews, and so on. Perhaps it should be dealt with at the compliance level. Thank you for your views.

John Moehrke (Jun 01 2021 at 15:38):

First rule of Access Control --- when there is not full confidence in the receiver being able to do the right thing, then don't release the data. This is true of ALL things, not just REST or Documents.

John Moehrke (Jun 01 2021 at 15:41):

Releasing data that has embedded URLs certainly does not remove the client's responsibility to be careful getting the content at those URLs, or the need for proper security of those URLs. It is not uncommon for access control to be needed to access those URLs. A style-sheet, or images supporting a style-sheet, is likely a case where the GET will not encounter any access control, although the links should still be https and be confirmed as coming from a trusted server.
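The client-side caution John describes, requiring https and a trusted origin before dereferencing an embedded URL, can be sketched in a few lines. The allowlist contents here are hypothetical.

```python
# Sketch: before fetching an embedded URL from received content, require
# https and a host on an allowlist of trusted servers.
from urllib.parse import urlparse

# Hypothetical hosts a deployment has decided to trust.
TRUSTED_HOSTS = {"fhir.example-hospital.org", "images.example-hospital.org"}

def safe_to_dereference(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # never fetch embedded content over plain http
    return parsed.hostname in TRUSTED_HOSTS
```

A real client would layer this under whatever access control the target URL itself enforces; the allowlist only decides whether to attempt the GET at all.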

John Moehrke (Jun 01 2021 at 15:44):

There will be other cases where embedded URLs point to protected resources. Examples might be a DocumentReference.content.attachment.url pointing at a clinical document, or an ImagingStudy pointing at a DICOM server. In these cases there will (hopefully) be access control that likely builds upon the SMART-on-FHIR access control model. In the case of DocumentReference, there are Implementation Guides from IHE that address a Health Information Exchange environment. In the case of ImagingStudy, there are implementation guides from DICOM and some being worked on in IHE.

Venu Gopal (Jun 01 2021 at 15:54):

John Moehrke said:

There will be other cases where embedded URLs point to protected resources. Examples might be a DocumentReference.content.attachment.url pointing at a clinical document, or an ImagingStudy pointing at a DICOM server. In these cases there will (hopefully) be access control that likely builds upon the SMART-on-FHIR access control model. In the case of DocumentReference, there are Implementation Guides from IHE that address a Health Information Exchange environment. In the case of ImagingStudy, there are implementation guides from DICOM and some being worked on in IHE.

@John Moehrke yes, this is the scenario I am getting at, a few months from now. If I am not wrong, DICOM viewing needs a login, depending on the vendor?

John Moehrke (Jun 01 2021 at 15:59):

and that is where it is helpful to keep in mind other standards organizations. IHE is a good organization to look to for implementation guides that span multiple standards. HL7 tends to be very focused on HL7. -- This said, there is not much yet on the Imaging front as there has not been much demand. I would very much like to encourage you to speak to your PACS vendors.

Andrea Downing (Aug 03 2021 at 20:19):

Hi All - reviving this thread to give a shoutout to @John Moehrke for being amazing!! I'm sure you guys have seen this but just really glad to see this has come full circle. This is so awesome. cc: @Dave deBronkart @Josh Mandel :tada: https://podcasts.apple.com/us/podcast/conversation-alissa-knight-john-moehrke-ins-outs-fhir/id1533468758?i=1000530851279

John Moehrke (Aug 03 2021 at 20:59):

https://healthcaresecprivacy.blogspot.com/2021/08/inscope-podcast-fhir-security.html

Andrea Downing (Aug 04 2021 at 01:12):

Will share / tweet. Also if anyone here is interested, Defcon is this weekend! Here's the lineup of speakers for the Biohacking Village, which focuses on healthcare, device security, DNA, etc.: https://www.villageb.io/speakers2021


Last updated: Apr 12 2022 at 19:14 UTC