FHIR Chat · Testing SMART Backend scopes · bulk data

Stream: bulk data

Topic: Testing SMART Backend scopes


Richard Braman (Jul 13 2020 at 01:37):

The Ad Hoc team has been working with @Vladimir Ignatov on testing the CMS BCDA and DPC FHIR APIs using the BDT Bulk FHIR testing tool https://bulk-data-tester.smarthealthit.org/, and we have made great progress integrating BDT with our build processes and testing our sandboxes with the online runner.

An issue came up while testing DPC, which we are trying to make fully conformant to the SMART Backend Services spec. The issue is around how to test whether a SMART Backend server properly handles the scope form parameter on a call to create an access token. DPC only supports the scope system/*.*, which means all resources, all permissions.
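For context, the scope under discussion is carried as a form parameter in the SMART Backend Services token request. A minimal sketch of building such a request body (the client assertion is a placeholder, not a real signed JWT, and no particular server's endpoint is assumed):

```python
from urllib.parse import urlencode

# Sketch of a SMART Backend Services token request body.
# "<signed-jwt>" is a placeholder; a real client signs a JWT
# with its registered key and puts it in client_assertion.
form = {
    "grant_type": "client_credentials",
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": "<signed-jwt>",
    "scope": "system/*.*",  # the scope form parameter being tested
}
body = urlencode(form)
print(body)
```

This body would then be POSTed to the server's token endpoint with a Content-Type of application/x-www-form-urlencoded.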

A little on our DPC Bulk FHIR server for context: DPC supports Group export, job status, and resource downloads per the Bulk Data spec. Only Patient, Coverage, and ExplanationOfBenefit resources are exportable.

I read the Wildcard Scopes Spec at http://hl7.org/fhir/smart-app-launch/0.8.0/scopes-and-launch-context/ , which states: clients can request clinical scopes that contain a wildcard (*) for both the FHIR resource as well as the requested permission for the given resource. When a wildcard is requested for the FHIR resource, the client is asking for all data for all available FHIR resources, both now and in the future.

This seems to support system/*.* as a valid, minimum scope that all SMART on FHIR Bulk servers must accept, when requested by the client.

Questions:

  1. Is that a correct interpretation?

  2. Does a server have to support all of the potential individual scope combinations, depending on the available resources and available permissions on the server?

  3. If a server supports system/*.*, does it also have to support system/*.read?

To me, system/*.* makes sense as a default or minimum required scope for testing conformance, but I can also see the argument for a Bulk server needing to accept system/*.read, because read access is what makes sense for an export system.

The BDT tool currently uses system/*.read as its default scope. One option is to make the test client's default scope configurable in BDT, but some input and consensus from the Bulk and #smart community on the above questions would be helpful and appreciated, so we know which direction to head with BDT and/or our DPC server.

Vladimir Ignatov (Jul 13 2020 at 03:17):

Here is my take on that (but it might be incorrect):

  1. system/*.* is not the "minimum" but the ultimate scope that allows everything. If there is any common-denominator scope that should be supported by every Bulk Data export server, it would probably be system/*.read. However, even that may be too optimistic because, for example, a server may choose to only make a subset of its resources available for export.
  2. Yes, I believe a server that supports system/*.* should support system/{Resource}.read scopes, or at least infer them from system/*.*. Otherwise it is like saying "I support everything, but I don't support this one thing".
  3. Yes, that should also be inferred from system/*.*. In fact, a server should not advertise support for system/*.* if it does not support writing to the data (like Bulk Data import).
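The inference described above can be made concrete. Here is a hypothetical checker (illustrating the system/{resource}.{read|write|*} scope syntax, not any particular server's implementation) that decides whether a granted scope covers a requested one:

```python
def scope_covers(granted: str, requested: str) -> bool:
    """Return True if a granted scope implies a requested scope,
    using the system/{resource}.{permission} wildcard syntax."""
    g_ctx, _, g_rest = granted.partition("/")
    r_ctx, _, r_rest = requested.partition("/")
    if g_ctx != r_ctx:  # contexts (e.g. "system") must match
        return False
    g_res, _, g_perm = g_rest.rpartition(".")
    r_res, _, r_perm = r_rest.rpartition(".")
    res_ok = g_res == "*" or g_res == r_res
    perm_ok = g_perm == "*" or g_perm == r_perm
    return res_ok and perm_ok

# system/*.* implies the narrower read scopes
assert scope_covers("system/*.*", "system/Patient.read")
assert scope_covers("system/*.*", "system/*.read")
# but system/*.read does not imply write access
assert not scope_covers("system/*.read", "system/ExplanationOfBenefit.write")
```

Under this reading, a server that grants system/*.* has already granted everything system/*.read would allow, which is why rejecting the narrower scope seems inconsistent.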

Josh Mandel (Jul 13 2020 at 13:48):

In general, keep in mind that a server can maintain its own policies beyond this scope system. So a specific client with a system/*.* scope is not necessarily going to see all data in the system.

Josh Mandel (Jul 13 2020 at 13:48):

The advantage of allowing a client to request more specific scopes at runtime is that individual tokens can be kept more limited in their power. But a client's abilities are always limited by the policy configuration of the underlying system, irrespective of scopes.
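The point that scopes cap access but do not by themselves grant it could be sketched as a two-gate check (entirely hypothetical names; real servers implement policy however they choose, and the exportable set below just mirrors the DPC example from this thread):

```python
# Hypothetical illustration: a token's scopes are one gate and
# the server's own policy is another; access requires both.
EXPORTABLE = {"Patient", "Coverage", "ExplanationOfBenefit"}  # server policy

def can_export(token_scopes: set, resource: str) -> bool:
    scope_ok = (
        "system/*.*" in token_scopes
        or "system/*.read" in token_scopes
        or f"system/{resource}.read" in token_scopes
    )
    policy_ok = resource in EXPORTABLE  # policy applies regardless of scopes
    return scope_ok and policy_ok

# Even with system/*.*, resources outside the server's policy stay hidden:
assert can_export({"system/*.*"}, "Patient")
assert not can_export({"system/*.*"}, "Observation")
```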


Last updated: Apr 12 2022 at 19:14 UTC