FHIR Chat · Authz provider & consents · Security and Privacy

Stream: Security and Privacy

Topic: Authz provider & consents


René Spronk (Jan 14 2021 at 08:30):

Being more familiar with SAML and XACML (conceptually, that is), the entire process of an (OAuth/UMA etc.) Authz Provider referring the 'consent' question back to a human end user seems useful in some very specific contexts. Allow me to verify my understanding:
If we already have a Policy Registry with consents/policies expressed in some computable way [these are my assumptions for any project at scale], (1) I assume that the Authz Provider, rather than asking a human being, could simply use the data in the Policy Registry to respond to the Authz request. (2) This would mean that both the Authz Provider and the resource server (the PDP thereof) would have to have access to [the same] Policy Registry, as there may be other policies/consents in place than the one for which some application/user asked permission from the Authz Server. Less so perhaps with OAuth.xyz (with a more refined way of specifying what rights one is seeking), but certainly true if one is only using basic OAuth2 scope information.

John Moehrke (Jan 14 2021 at 13:37):

Yes to (1). Mostly... not absolutely.

Not necessarily to (2)... the resource server could delegate full decision making to the authz service. What would be shared are "residual obligations and refrains". These obligations and refrains are part of the authz decision (permit, but not after 24 hours). They tend to be simple rules; they are not the full (1) policy. They are just the residual rules associated with the PERMIT. That said, they might not be simple enough to be fully expressed using a vocabulary (like we have in HL7 HCS). This is one reason why we are building the Permission resource, so that we can encode these residual obligations and refrains. For example, including an expiration on the PERMIT can't be done with a code. (Expiration is not a specific problem with OAuth, as it is built into the tokens, but it is useful as an example.)
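As a rough sketch of the idea (Python, with made-up field names; the Permission resource itself was still being designed at the time, so this is illustrative only):

```python
from datetime import datetime, timezone

# Hypothetical shape of a PERMIT decision that carries residual obligations/refrains.
# Field names are illustrative only, not the FHIR Permission resource.
decision = {
    "decision": "permit",
    "residual_rules": [
        {"type": "obligation", "code": "DELAU"},   # "delete after use": simple enough to be a code
        {"type": "obligation",                     # an expiration cannot be expressed as a single code
         "expires": datetime(2021, 1, 15, 13, 37, tzinfo=timezone.utc)},
    ],
}

def permit_still_valid(decision, now=None):
    """Recipient-side check: the PERMIT only holds while its residual rules allow it."""
    now = now or datetime.now(timezone.utc)
    if decision["decision"] != "permit":
        return False
    return all(now <= rule["expires"]
               for rule in decision["residual_rules"] if "expires" in rule)
```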

John Moehrke (Jan 14 2021 at 13:38):

On (1): first, there are very different flows depending on whether the user is the Patient, vs. a user accessing the patient's data for some purpose.

Where the user is the patient and the access by the app is interactive, the rules can be captured inline at the time of the access. Meaning when the app needs more rights, the patient can be asked. This is commonly seen with general-IT apps on Android and Apple, where you can deny an app full access, but later, when that app needs a permission, you are asked if it is okay now to give it.

When the user is not the patient, or when the user is the patient but it is not an interactive session (e.g. authorized research mining), the expectation is that there is some ceremony involving the subject (patient) to capture their rules. This is your precondition. This precondition could be fulfilled once, capturing perfect rules. Or the system could recognize that there might be some decisions that need to be temporarily rejected (or deferred) while some workflow with the subject (patient) is used to get updated rules. This second option is far more complex, so it is not often done.

It is certainly possible to recognize that a decision can't be made until the subject (patient) is asked further questions. This just requires an 'authorization denied, try again later' response (vs. 'authorization denied, go away'). In these cases there would be some workflow similar to the above one with an interactive user that is the subject. The trigger and messaging are not as obvious, but it is possible.

An example of this is the point-of-care-consent workflow in CareQuality, where a treating organization asking for data doesn't currently have authorization, but the custodian organization has a trust-framework arrangement with the treating organization such that it initially rejects access with a message that a point-of-care consent is needed. Once that is obtained, requests may progress under the newly captured consent rules.

John Moehrke (Jan 14 2021 at 13:40):

The basic SMART scopes are not too helpful at this level of permission. They were intended to get us going with OAuth, focusing on starting simple, and therefore chose a very RESTful pattern. This is not a failure of the SMART project; it is just that these consent vectors were not in scope.

René Spronk (Jan 14 2021 at 14:06):

I understand it's kind of neat to have a built-in ability (if there are no known consents/dissents for something) to ask the patient. Cool feature as an added bonus. For me it's reasonable to assume that countries and/or regions have a centrally managed consent registry [this may not be true for all countries/regions, but it seems that all projects should aim for something like this], to avoid having to ask a patient for their consent again and again (and again), and to avoid having conflicting consent statements by one single patient (they may not remember what they decided last time around). As such, going back to the patient to ask for some additional consent should be the exception to the rule.
Realistically I don't think the consent itself would have a computable expression; it would probably be a policy id of sorts, with parameters (e.g. 'consent to share HIV results', parameters "GP x" and "next 30 days").
Using OAuth (as it is used today), one probably wouldn't request authorization to access HIV results. That's not something you'd be able to do using scopes. At most it'll be 'read Observations', which means the "residual obligations and refrains" can be quite a lot. The better the 'coverage' of the consents/policies by the Authz server, the less the resource server (the PDP thereof) will have to do in terms of access control decisions. Given that OAuth currently doesn't really allow one to express that much in terms of consents, this will still put a heavy burden on the resource server. Even with OAuth.xyz I'm not sure if it's safe for a resource server to fully depend on an Authz server.

John Moehrke (Jan 14 2021 at 14:22):

The new granular SMART scopes do support that.
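For example, with the SMART App Launch v2 syntax (illustrative scope strings only; which search-parameter qualifiers a given server supports varies):

```python
# SMART v2 "granular" scopes: CRUDS letters plus search-parameter qualifiers,
# which get closer to consent-style vectors than a plain "read Observations".
requested_scopes = [
    "patient/Observation.rs?category=http://terminology.hl7.org/CodeSystem/observation-category|laboratory",
    "patient/DocumentReference.r",  # plain read access, no qualifier
]
```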

Josh Mandel (Jan 14 2021 at 15:25):

This is a good discussion; and of course the value that you assign to different capabilities/features all depends on your existing infrastructure and assumptions. If you already have a centralized policy server that everyone agrees is authoritative, and the policy server has specific details explaining which actors are allowed to perform which actions (and in which contexts), then a lot of the standardization challenges go away :-)

In most environments these central infrastructure components do not exist, of course -- and even the very thought of centralizing detailed user-specific policies can be troublesome, because it forces people to say out in the open what they are and are not willing to share.

René Spronk (Jan 14 2021 at 15:27):

In the spirit of GDPR: that's exactly what we want, right?

Josh Mandel (Jan 14 2021 at 15:28):

No, I think in the spirit of GDPR people should be allowed to configure these policies for their own data, but they shouldn't be forced to broadcast them.

Josh Mandel (Jan 14 2021 at 15:29):

In some sense the UMA interaction provides nice capabilities here: an individual can point to a service that makes decisions for them, but they don't have to codify those decisions in a policy language that others can see and store.

Josh Mandel (Jan 14 2021 at 15:29):

(of course the complexity of making this actually work is another story; I'm just talking about principles here.)

René Spronk (Jan 14 2021 at 15:30):

Sure.

René Spronk (Jan 14 2021 at 15:34):

.. so, you do believe in the capability of Authz providers to fully determine if a requester is allowed to do a certain thing, so resource servers don't have to do any additional consent/policy checking?

René Spronk (Jan 14 2021 at 15:38):

So if I were to say that a region/country had one single PDP (instead of stating that there's one single policy registry), that would fit nicely with UMA, right?

John Moehrke (Jan 14 2021 at 16:21):

Note that UMA just moves the problem... each custodian of data on a subject needs to get authoritative instructions that point at THAT subject's appointed UMA server... so they still need to do some work... and since that work is the biggest effort (the consent terms themselves being the smaller part), they tend to hang on to all consent for their data. (Note they also lose the ability to know who is using the data, which they really are not authorized to know, but they still like to know it.)

Josh Mandel (Jan 14 2021 at 17:48):

There's an important distinction, not just "moving the problem" -- in one interface, a party needs to expose policies over the wire, for other parties to see; in the other, you only need to expose a decision endpoint, not the actual policies.

John Moehrke (Jan 14 2021 at 17:59):

There would need to be some level of policy, just not variance to that policy. Such as "completely trust decisions made by this UMA server, without question" vs. "trust decisions except where a legal hold is in place" vs. etc... there will be policy. It will just be more PERMIT based.

René Spronk (Jan 17 2021 at 10:02):

Context: a training course module with a focus on Consent (it's difficult to discuss such a subject without touching upon all sorts of other security aspects, but the focus is on Consent). It helps our trainees when we sketch a small number of different scenarios (and afterwards tell them that in reality one may use a blend of the various scenarios shown).

Use case (for both scenarios): in a policy domain there is legislation that states "all lab results data (with the exception of HIV results) may be viewed by the GP of a Patient". The Patient has created the consents "My GP is allowed to view any HIV related lab results" and "My GP is allowed to see document 1234" (a psychiatric report).

Scenario 1: identity server, PDP grouped with resource server
The identity of a GP (established using some sort of authentication server) is sent to a resource server, together with the request "give me all lab results". The identity information will be fairly minimal: some id number, and probably a role-type. The resource server receives this request, fetches the matching resources (small r), and determines (using metadata characteristics of the resources, and the policy/consent information) what can be included in the response to this GP. The requesting party isn't aware which policies/consents exist.

Scenario 2: refined authorization server
The identity of a GP (using some sort of authentication server) has been established. This GP wishes to access all available data for a patient P. There is a separate authorization server which also has knowledge of the policies/consents, so the GP application requests authorization to 'access all data'. The authorization server matches this with its 3 consents/policies and generates a token which identifies the 3 applicable policies. This token is used by the GP application in its request to the resource server. The resource server inspects the permission tokens, fetches the matching resources (small r), and determines (using metadata characteristics of the resources, and the permission tokens) what can be included in the response to this GP.

Questions:

  1. Are there any errors/mistakes in the descriptions of the above scenarios?
  2. In scenario 2: isn't it a security issue if the token can be parsed? Knowing that someone has an HIV-results consent is in and of itself already a security issue.
  3. 'Give me document 1234' as a query is easy in scenario 1, but in scenario 2, how would the authorization server know that the policy "My GP is allowed to see document 1234" is applicable, given that it doesn't have access to document 1234 itself? Or will we simply be including a full list of authorization tokens, independent of what the GP was trying to access?
  4. Scenario 1 is probably what we mostly see today: OAuth2 and a standalone FHIR (CDR) server. Scenario 2 is what SMART/UMA/OAuth.xyz/GNAP (all variations on the same theme) are looking to support.
  5. Use of the Permission and Consent resources really only makes sense in scenario 1, and even there I'd expect these policies to be expressed in XACML or some proprietary form rather than as FHIR resources. Permission and Consent FHIR resources make sense if one includes them in an HTTP header when communicating them, e.g. x-provenance, x-permission (more or less a requirement under the GDPR).

John Moehrke (Jan 18 2021 at 14:12):

  2. An OAuth token does not need to be transparent; most are opaque. Even where some attributes are exposable to the client, that does not mean everything is. And I would expect this case would just be a list of sensitivity tags to be excluded. To exclude them does not mean such data exists; this is why one would exhaustively include the exclusion keys, except when someone has authorized access to sensitive data.
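A sketch of that idea (hypothetical introspection output and enforcement; the sensitivity labels would come from whatever code system the policy domain has agreed on, e.g. HL7 v3-ActCode):

```python
# Hypothetical result of introspecting an otherwise opaque token: the decision is
# carried as an exhaustive list of sensitivity labels the requester may NOT see.
introspected_token = {
    "active": True,
    "sub": "gp-x",
    "excluded_sensitivity_labels": ["HIV", "PSY"],  # listed whether or not such data exists
}

def may_return(resource, token):
    """Resource-server side: drop anything labelled with an excluded sensitivity code."""
    labels = {c.get("code") for c in resource.get("meta", {}).get("security", [])}
    return not (labels & set(token["excluded_sensitivity_labels"]))
```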

John Moehrke (Jan 18 2021 at 14:15):

I am not sure how to respond to #3, as I am not clear what the difference between scenarios 1 and 2 is... what is "refined"? I am not aware of any legitimate solution that looks like scenario 2.

John Moehrke (Jan 18 2021 at 14:20):

  4. UMA is not what you have outlined in scenario 2, I think... I think that in the UMA case, one must start with the presumption that each interaction between a client and a server requires getting a new UMA ticket. There is no such thing as a ticket that is obviously usable by a client for multiple things. I don't think that the client ever knows this, as it just keeps trying to access, and the server either considers the UMA token sufficient or requires a new token. Thus the refinement of the token meaning happens. Where the policies are simple, the refinement is likely coarse; where the patient has intricate policy needs, the refinement is much more fine-grained. I think this is how it is intended to be used.
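Very roughly, from the client's side the UMA 2.0 grant looks like the sketch below (hypothetical endpoints; discovery of the token endpoint via the authorization server's metadata is elided, and the challenge parsing is simplified):

```python
import re
import requests

def parse_uma_challenge(header):
    """Pull as_uri and ticket out of a 'WWW-Authenticate: UMA ...' header (sketch)."""
    return dict(re.findall(r'(\w+)="([^"]*)"', header))

def uma_get(resource_url, client, rpt=None):
    """UMA 2.0 client loop (sketch): try the resource; on 401, exchange the
    returned permission ticket at the authorization server and retry."""
    headers = {"Authorization": f"Bearer {rpt}"} if rpt else {}
    resp = requests.get(resource_url, headers=headers)
    if resp.status_code != 401:
        return resp  # either the data, or a definitive denial

    # WWW-Authenticate: UMA realm="...", as_uri="...", ticket="..."
    challenge = parse_uma_challenge(resp.headers["WWW-Authenticate"])
    token = requests.post(
        challenge["as_uri"] + "/token",  # real clients discover the token endpoint from AS metadata
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
            "ticket": challenge["ticket"],
            "client_id": client["id"],
            "client_secret": client["secret"],
        },
    ).json()
    return requests.get(resource_url,
                        headers={"Authorization": f"Bearer {token['access_token']}"})
```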

John Moehrke (Jan 18 2021 at 14:28):

  5. The biggest use of Consent is for documenting the details of the consent ceremony. Yes, the details affect real-time access controls. But it is unlikely that the Consent resource is inspected in real time. I would expect that upon saving of a new or updated Consent, there will be a process that will code the results into XACML or a proprietary (OAuth) rules engine. This might be part of the saving operation, or might happen at some other time. Where an organization has chosen to primarily use something like XACML or UMA, they can choose not to encode the .provisions at all, and just point at their instance with .policy.uri.

This ceremony might be an internal one, which is the norm today, for which exposing the Consent is not all that helpful. But the ceremony might be executed by some other organization and provided to the custodian as evidence of the ceremony. It is this second case that I think is the more compelling use of the Consent resource. Examples might be a clinical trial, another treating organization, an insurance company, etc.
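As a minimal sketch of the .policy.uri pattern mentioned above (an R4 Consent written here as a Python dict; the references and policy URI are made up), the Consent records the ceremony while the detailed rules live behind the referenced policy:

```python
# Minimal R4 Consent that records the ceremony but defers the computable rules
# to an external (e.g. XACML or UMA) policy, referenced via policy.uri.
consent = {
    "resourceType": "Consent",
    "status": "active",
    "scope": {"coding": [{"system": "http://terminology.hl7.org/CodeSystem/consentscope",
                          "code": "patient-privacy"}]},
    "category": [{"coding": [{"system": "http://loinc.org", "code": "59284-0"}]}],  # consent document
    "patient": {"reference": "Patient/example"},                         # made-up reference
    "dateTime": "2021-01-18T14:28:00Z",
    "policy": [{"uri": "https://example.org/policies/patient-privacy-v1"}],  # made-up policy URI
    # .provision deliberately omitted: the engine behind policy.uri holds the detailed rules
}
```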

John Moehrke (Jan 18 2021 at 14:36):

The Permission resource might hold similar uses, but I see Permission being more real-time. I understand two:
A. Similar functional uses as Consent, but where the rules express something that is not a "subject" authorizing access by a third party to their data. That is to say, "not a Consent". These might be b2b authorizations, government/police access, etc.
B. Real-time communication of residual rules that a data recipient is trusted to enforce.

The (B) use-cases are the most interesting to me. The case here is where a data custodian determines that a data recipient is authorized to get some data, and the recipient is trusted to enforce some residual rules. This is especially needed where the residual rules are more complex. For example, to tell a data recipient that they must not persist the data, one could just put the simple code #DELAU into Bundle.meta.security. However, there is no way to say 'delete after 30 days' without us creating a special code for that.
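For example, the simple case rides on the security labels of what is returned (sketch below; DELAU is the v3-ActCode "delete after use" obligation), while "delete after 30 days" has no such code and needs a structured rule of the kind Permission is meant to carry:

```python
# Simple residual rule: a single obligation code on the returned Bundle's meta.security.
bundle_meta = {
    "security": [{
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActCode",
        "code": "DELAU",  # "delete after use" - fits in one code
    }]
}

# "Delete after 30 days" does not fit in a code; it needs a structured residual rule
# (illustrative shape only - this is the gap the draft Permission resource aims to fill).
residual_rule = {"type": "obligation", "action": "delete", "after_days": 30}
```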

René Spronk (Jan 18 2021 at 14:59):

Just goes to show that (even after viewing a bunch of videos) I still don't really understand UMA et al. I'll study your response in more detail tomorrow morning.

