FHIR Chat · Data Segmentation for Privacy · implementers

Stream: implementers

Topic: Data Segmentation for Privacy


view this post on Zulip Josh Mandel (Feb 11 2019 at 19:23):

In ONC's proposed rule, I see:

Since the 2015 Edition final rule, the health care industry
has engaged in additional field testing and implementation of the DS4P standard.

I'm hoping to learn, especially for major EHR vendors, what the experience has been; what field testing has shown; and whether there are any lessons learned. (Cc @Isaac Vetter @Jenni Syed would love to hear your experience).

view this post on Zulip Josh Mandel (Feb 11 2019 at 19:25):

(This is especially interesting given that ONC is proposing a FHIR-based API with DS4P support. @John Moehrke can you share pointers to what this likely entails?)

view this post on Zulip John Moehrke (Feb 13 2019 at 20:37):

As you suspect, there is not much use of DS4P... certainly NOT any use of the fine grain (inside CDA). It has helped organizations understand the processing they need to do internally, and it has informed them on how to use the Confidentiality vocabulary. Specifically, it gave everyone a good use-case for N vs R, and thus a reason to look at confidentiality code N vs R when they receive something... BUT for the most part, once something is 'imported' into a new system, it is in that new system and under the rules of that new system. So the distinction of N vs R is the advancement that DS4P made.
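John's N vs R point can be sketched in code. A minimal illustration, assuming plain-dict FHIR resources (the helper name and sample resource are invented; the code system URL is the standard HL7 v3 Confidentiality system):

```python
# Sketch: a receiving system inspecting the Confidentiality code (N vs R)
# on a FHIR resource's meta.security, the distinction DS4P popularized.
# `confidentiality_of` and the sample Observation are illustrative only.

CONFIDENTIALITY_SYSTEM = "http://terminology.hl7.org/CodeSystem/v3-Confidentiality"

def confidentiality_of(resource: dict) -> str:
    """Return the Confidentiality code (e.g. 'N' or 'R'), defaulting to 'N'."""
    for coding in resource.get("meta", {}).get("security", []):
        if coding.get("system") == CONFIDENTIALITY_SYSTEM:
            return coding.get("code", "N")
    return "N"

observation = {
    "resourceType": "Observation",
    "meta": {"security": [{"system": CONFIDENTIALITY_SYSTEM, "code": "R"}]},
}

if confidentiality_of(observation) == "R":
    print("restricted: apply heightened handling rules")
```

Once the resource is imported, of course, the receiving system's own rules take over, as John notes.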

view this post on Zulip Grahame Grieve (Feb 15 2019 at 10:41):

I ran into at least one company at HIMSS that told me they had implemented DS4P. Was that a question at the ONC town hall?

view this post on Zulip John Moehrke (Feb 15 2019 at 18:57):

It was asked at the ONC town hall... but the question was more directed at how hard it is when CDAs are so poorly created. They were referring to using a Security Labeling Service (SLS) to examine the (poorly formed) CDAs for sensitive topics. An SLS would identify these using the fine-grain method found in DS4P. This level of labeling is only to be used between the SLS and an Access Control enforcement engine. That is, the pipeline between the raw CDA and the version delivered to a remote location would be: Access Control decision, SLS tagging, then enforcement. The enforcement might block the whole transaction, might redact portions, or might just convert the sensitivity vocabulary into a confidentialityCode vocabulary. The enforcement might simply take the original raw CDA and provide it unmodified, with a transport confidentialityCode.

view this post on Zulip John Moehrke (Feb 15 2019 at 19:00):

this processing model is not just DS4P, but also SLS, and lots of policy. DS4P just defines how tags would be carried, with a range of levels of effort: easy is just setting confidentialityCode in metadata and the CDA header; fully hard is this processing model at the fine-grain level inside the CDA; the ludicrous level is including Obligation codes that one expects the recipient to act upon.

view this post on Zulip John Moehrke (Feb 15 2019 at 19:01):

There is an open-source (ish) project that the VA funded that @Mohammad Jafari might be able to fill in more details.

view this post on Zulip Mohammad Jafari (Feb 18 2019 at 07:43):

This is a very accurate summary of the approach. I can add that in one of the pilots, there was also a natural-language processing engine that would extract clinical codes from the unstructured text in the CDA to feed into the rest of the SLS pipeline. This was used to manage the poor quality of structured coding in CDAs and to improve the accuracy of the SLS decisions.
Also AFAIK, the VA's DS4P and SLS for CDA demo/pilot projects were not open-sourced.

view this post on Zulip Luca Toldo (Feb 18 2019 at 14:16):

Hi Mohammad, I am interested in the standardization of the NLP extraction, in terms of FHIR.
Is there anything you could share about that, perhaps in the #data extraction services stream?

view this post on Zulip John Moehrke (Feb 18 2019 at 16:13):

That is why HL7 has defined the "Security Labeling Service": a service to which you give data, and it labels the data for sensitivity. The service instance you use might be simple, or might include NLP. But it is all behind the SLS API.

view this post on Zulip John Moehrke (Feb 18 2019 at 16:13):

see the HL7 specification http://www.hl7.org/implement/standards/product_brief.cfm?product_id=360

view this post on Zulip Mohammad Jafari (Feb 18 2019 at 20:30):

@Luca Toldo there are existing services (open-source and proprietary) to extract structured codes (e.g. SNOMED, RxNorm, etc.) from unstructured text. This can be as simple as a look-up for key words or very sophisticated natural language processing for entity extraction with consideration of nuances like attribution and negation. Once the structured codes are extracted, the document can be processed by the rest of the SLS pipeline, similar to structured/well-formed documents. As John pointed out, this is all under the hood behind the SLS API.
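The "as simple as a look-up for key words" end of that spectrum can be sketched briefly. Everything here is invented for illustration (the keyword map, the code-to-label map, and the function names); a production system would use real NLP with negation and attribution handling, as Mohammad says:

```python
# Illustrative sketch: keyword lookup mapping narrative phrases to clinical
# codes, which downstream SLS rules then map to sensitivity labels such as
# HIV or ETH. The maps below are tiny, hypothetical examples.

KEYWORD_TO_CODE = {            # hypothetical keyword -> SNOMED CT code map
    "hiv": "86406008",
    "alcohol abuse": "15167005",
}
CODE_TO_SENSITIVITY = {        # hypothetical code -> sensitivity label map
    "86406008": "HIV",
    "15167005": "ETH",
}

def extract_codes(text: str) -> list[str]:
    """Naive keyword spotting; stands in for a real NLP entity extractor."""
    lowered = text.lower()
    return [code for kw, code in KEYWORD_TO_CODE.items() if kw in lowered]

def sensitivity_labels(text: str) -> set[str]:
    """Map extracted codes to sensitivity labels for the SLS pipeline."""
    return {CODE_TO_SENSITIVITY[c]
            for c in extract_codes(text) if c in CODE_TO_SENSITIVITY}

print(sensitivity_labels("Patient counselled on HIV medication adherence."))
```

The point, per John, is that whether this step is a lookup table or a full NLP engine is hidden behind the SLS API.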

view this post on Zulip Grahame Grieve (Feb 19 2019 at 01:07):

I don't believe we've described a FHIR API for the SLS?

view this post on Zulip Luca Toldo (Feb 19 2019 at 07:14):

It looks to me as if this SLS is a use case for an NLP pipeline, for which currently no standard interface has been defined. The #data extraction services stream has shared some thoughts on the required extensions, and #cimi could also be interested in providing input, since their charter mentions making use of NLP resources. @Mark Kramer might be interested in this topic too. Personally I think the SLS could simply consume a Bundle with the raw data delivered as a DocumentReference resource (as suggested in https://www.hl7.org/fhir/comparison-cda.html) and return the extractions as Composition resources, possibly extended by the extensions proposed by the #data extraction services stream... It would be good to have this approved and set out as a standard.

view this post on Zulip Grahame Grieve (Feb 19 2019 at 07:14):

I'd like a finer grained interface too - for single resources

view this post on Zulip John Moehrke (Feb 19 2019 at 14:57):

glad to support it in the Security WG... just need someone to bring the need forward. All are welcome. The FHIR-Security call is today http://www.hl7.org/concalls/CallDetails.aspx?concall=42431

view this post on Zulip John Moehrke (Feb 19 2019 at 15:00):

More specifically, we need experts in the technology stack. We have the abstract concept in the existing SLS specification. We thus need people who can propose solutions and discuss them to consensus, pilot those solutions at a FHIR Connectathon, and finish the task of writing the service interface.

view this post on Zulip John Moehrke (Feb 19 2019 at 15:25):

seems the service could support bulk data just as easily as single resources. Is it a service that modifies the resources (adding .meta.security values)? Or does it just return a manifest of the assessment relative to the resource, something like a variant of OperationOutcome? The CDA one did modify the CDA, which really bothered me as it was modifying what might eventually be returned, which is a modification of the attested content (signed)... Seems the goal of the SLS tagging is to inform the decision/enforcement; so the OperationOutcome might be more powerful...

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 17:22):

Sounds great; I'll be happy to attend.
I want to add that, in our experience, some functions of the labeling service cannot be fully isolated from the transaction context, e.g. for some types of labels like handling instructions, in some cases confidentiality, or high-watermark labels on a bundle.
So our approach in the FHIR demos (presented at a number of previous connectathons) was a service with the following inputs:
- an individual resource or a bundle of resources,
- transaction context, e.g. client id and attributes, purpose of use, etc.
which would output a labeled resource or bundle tailored to that specific transaction context.
The process includes checking overarching labeling rules/policies as well as patient consents. This enabled our demo to support rules like: "for recipient A, HIV resources are marked as Restricted with Delete After Use handling instruction, but for recipient B those resources are Normal."
In such cases, the sensitivity labels (e.g. HIV) can be assigned by a bulk SLS system while the confidentiality and handling instructions are assigned on-the-fly and depending on the transaction context.
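The "recipient A vs recipient B" rule above can be sketched as a tiny context-aware labeler. The policy table, recipient IDs, and function name are all invented for illustration; the code systems are the standard v3 Confidentiality and ActCode systems, and DELAU is assumed here to be the "delete after use" handling code:

```python
# Hedged sketch of a transaction-context-aware labeling step: the same
# sensitive resource gets different confidentiality/handling labels per
# recipient. POLICY and the recipient ids are hypothetical.

CONF_SYSTEM = "http://terminology.hl7.org/CodeSystem/v3-Confidentiality"
ACT_SYSTEM = "http://terminology.hl7.org/CodeSystem/v3-ActCode"

POLICY = {  # hypothetical per-recipient rules for sensitive content
    "recipient-A": {"confidentiality": "R", "handling": "DELAU"},
    "recipient-B": {"confidentiality": "N", "handling": None},
}

def label_for_context(resource: dict, sensitivities: set, client_id: str) -> dict:
    """Return a labeled copy of `resource`, tailored to this transaction."""
    if sensitivities:
        rule = POLICY.get(client_id, {"confidentiality": "R", "handling": None})
    else:
        rule = {"confidentiality": "N", "handling": None}
    security = [{"system": CONF_SYSTEM, "code": rule["confidentiality"]}]
    if rule["handling"]:
        security.append({"system": ACT_SYSTEM, "code": rule["handling"]})
    labeled = dict(resource)
    labeled["meta"] = {**resource.get("meta", {}), "security": security}
    return labeled
```

So an HIV-tagged Condition sent to recipient-A would come back Restricted with a delete-after-use instruction, while the same resource for recipient-B would be Normal.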

view this post on Zulip Grahame Grieve (Feb 19 2019 at 17:44):

so that suggests that purpose matters - is this for resources that are about to be persisted, or is this being done as they are sent somewhere for a purpose?

view this post on Zulip Grahame Grieve (Feb 19 2019 at 17:45):

does sound like we should aim for a connectathon stream in Montreal...

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 17:59):

I think some labels, like sensitivity labels (e.g. HIV and ETH), only depend on the content of the resource and therefore can be assigned in bulk, or based on an event triggered by creation/update of a resource, and can be persisted on resources.
Other labels, like handling instructions and arguably confidentiality labels, depend on the transaction context and applicable policies (e.g. SCA may be restricted in one jurisdiction and normal in another). These labels can only be assigned on the fly. Also, the high-watermark always depends on the content of the collection, so it will always depend on the response bundle.

view this post on Zulip Grahame Grieve (Feb 19 2019 at 18:00):

does an SLS presented with a bundle label just the bundle, or all the resources inside the bundle individually?

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 18:01):

Yes, when the input is a bundle, individual resources will be labeled, and then the high-watermark is computed based on the individual labels and assigned to the bundle.
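The high-watermark step is small enough to show directly. A sketch, assuming the HL7 v3 Confidentiality ordering (U < L < M < N < R < V); the helper name is illustrative:

```python
# Sketch: compute the bundle's high-watermark confidentiality code as the
# most restrictive code among its entries, per the v3 Confidentiality ladder.

ORDER = {"U": 0, "L": 1, "M": 2, "N": 3, "R": 4, "V": 5}

def high_watermark(entry_codes) -> str:
    """Most restrictive confidentiality code among a bundle's entries."""
    return max(entry_codes, key=lambda c: ORDER[c], default="N")

print(high_watermark(["N", "R", "N"]))  # a single R entry drives the bundle label
```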

view this post on Zulip Grahame Grieve (Feb 19 2019 at 18:03):

which is why the CDA api modified the CDA document... Doesn't mean we have to do the same, but the API is a lot more complicated if the SLS doesn't modify the resources itself

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 18:09):

There was an earlier variation of the demo in which the SLS was part of the Authorization Service and would return the labels as Obligations. This approach was abandoned for practical reasons, since XACML support for obligations was not adequate for this use-case, and also for performance/complexity reasons.

view this post on Zulip John Moehrke (Feb 19 2019 at 18:50):

labeling prior to persistence is a fragile thing, because the assessment of how sensitive something is will be different in the future: today there is some defined stigma on various concepts that might change over time, and today's understanding of a medical condition might be more evolved in the future. Thus I would focus efforts on SLS use for access authorization decision support; that is, focus on using an SLS to determine the current medical assessment, against the current stigma, up against the current Privacy Consent and the current business rules.

view this post on Zulip John Moehrke (Feb 19 2019 at 18:52):

as to the SLS modifying or not... I am open to whatever is best. Modifying has the negative of having the data modified, but there is a trend toward ignoring changes to the meta element for the purposes of integrity/authenticity checks.

view this post on Zulip Grahame Grieve (Feb 19 2019 at 19:02):

fragile is not the same as not useful

view this post on Zulip Grahame Grieve (Feb 19 2019 at 19:03):

I don't understand the issue around modification and integrity... either we're changing, or we're not.

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 19:04):

@John Moehrke while I definitely agree in principle that SLS rules/policies evolve over time and persisting labels may not seem to be a great idea, I want to add that determining content-based sensitivity labels (such as HIV, ETH) could be a time-consuming process (e.g. when NLP is involved) and in some cases it is not feasible to do that on the fly. So the compromise is to persist labels that are a) time-consuming to compute and b) less likely to change frequently and have other mechanisms such as an SLS crawler + event triggers re-examine those when necessary, e.g. when rules/policies or the content of the resource change.

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 19:07):

@Grahame Grieve say a hash of the resource (including the meta element) is stored somewhere with a third party (e.g. on the block chain :sunglasses: ) for integrity check.

view this post on Zulip Grahame Grieve (Feb 19 2019 at 19:12):

oh, but you're asking the SLS to label the resource - so it's not unreasonable to expect that the resource will change when you do that. Making the change on the SLS side or the client side... how does it matter?

view this post on Zulip Grahame Grieve (Feb 19 2019 at 19:12):

I'm pushing on John's dislike for the SLS making changes, and trying to understand why

view this post on Zulip John Moehrke (Feb 19 2019 at 20:07):

my dislike is simply recognizing that it might be better to not change... I am not objecting, just pushing slightly... If it is best to modify, then we should modify the .meta element.

view this post on Zulip John Moehrke (Feb 19 2019 at 20:10):

The SLS will also just be doing sensitivity assessment. It will not be assessing for ConfidentialityCode, Obligations, or Compartments. These sensitivity tags are often just used by the access control enforcement to determine if the resource should be allowed to transfer, should be blocked, or should be redacted. That is to say, the sensitivity vocabulary is not usually communicated beyond the tight relationship between the SLS and Access Control Enforcement.

view this post on Zulip John Moehrke (Feb 19 2019 at 20:22):

DS4P SLS authorization Flow
1. Some access request is made -- Client ID, User ID, Roles, PurposeOfUse
2. Gross access control decision is made --> Permit with scopes
3. Data is gathered from FHIR Server using normal FHIR query parameter processing --> Bundle of stuff
4. Bundle of stuff is examined by SLS. SLS looks for sensitivity topics, tagging data with those sensitivity codes (e.g. HIV, ETH, etc)
5. Access Control Enforcement examines output of SLS relative to security token/scope to determine if whole result can be returned, or if some data needs to be removed.
6. Access Control Enforcement sets each bundled Resource .meta.security with ConfidentialityCode (R vs N), removing the sensitivity codes.
7. Access Control Enforcement determines the 'high water mark' ConfidentialityCode to tag the Bundle.meta
8. Access Control Enforcement may set other Bundle.meta.security values such as Obligations (e.g. Do-Not-Print)

view this post on Zulip John Moehrke (Feb 19 2019 at 20:25):

9. Bundle of stuff returned to requester
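John's nine-step flow can be compressed into a runnable sketch. Everything here is a stand-in (the real SLS, access-control engine, and FHIR server are separate services, and steps 1-2, the authorization decision, are reduced to a `scopes` set); only the shape of the pipeline comes from the post. The `restricted-data` scope name and the string `text` field on entries are invented simplifications:

```python
# Illustrative pipeline: SLS tags sensitivity (step 4), enforcement decides
# and relabels (steps 5-6), high-watermark tags the Bundle (step 7), and the
# Bundle is returned (step 9). Obligations (step 8) are omitted for brevity.

CONF = "http://terminology.hl7.org/CodeSystem/v3-Confidentiality"
ORDER = {"U": 0, "L": 1, "M": 2, "N": 3, "R": 4, "V": 5}

def sls_tag(entry: dict) -> dict:
    """Step 4: SLS looks for sensitivity topics (naive keyword stand-in)."""
    text = entry.get("text", "").lower()
    entry["_sensitivity"] = {"HIV"} if "hiv" in text else set()
    return entry

def enforce(entry: dict, scopes: set):
    """Steps 5-6: keep or redact, then replace sensitivity with N vs R."""
    sens = entry.pop("_sensitivity")
    if sens and "restricted-data" not in scopes:
        return None  # redact this entry entirely
    code = "R" if sens else "N"
    entry["meta"] = {"security": [{"system": CONF, "code": code}]}
    return entry

def respond(entries: list, scopes: set) -> dict:
    """Steps 3-9 end to end: label, enforce, high-watermark, return Bundle."""
    kept = [e for e in (enforce(sls_tag(e), scopes) for e in entries) if e]
    mark = max((e["meta"]["security"][0]["code"] for e in kept),
               key=lambda c: ORDER[c], default="N")
    return {"resourceType": "Bundle",
            "meta": {"security": [{"system": CONF, "code": mark}]},  # step 7
            "entry": kept}
```

With the `restricted-data` scope the HIV entry comes back marked R and drives the bundle's high-watermark; without it, that entry is redacted and the bundle is N.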

view this post on Zulip Mohammad Jafari (Feb 19 2019 at 21:13):

Note that this flow relies on a narrower definition of SLS, which only assigns sensitivity labels. A broader definition of SLS would include assigning _all_ security labels including the ones assigned by the ACE above. The SLS in this broader sense conducts multiple _passes_ (first pass tagging with sensitivity labels, second pass determining the confidentiality labels and handling instructions, etc.).

I'm also wondering why the ACE should remove the sensitivity tags. Other than a minuscule addition to the size of the outgoing resource, is there any other reason to get rid of them? I think the sensitivity tags on an outgoing resource essentially _cache_ the result of a potentially computationally expensive process and could be useful if the recipient has any use-case that relies on them. For example, if the recipient is subject to different jurisdictional policies which assign different confidentiality labels to the sensitive information in the resource, it can simply re-use the sensitivity labels instead of re-calculating them.

view this post on Zulip John Moehrke (Feb 20 2019 at 18:25):

I just published three articles on this topic:
Basic DS4P - How to set the confidentialityCode https://healthcaresecprivacy.blogspot.com/2019/02/basic-ds4p-how-to-set.html
Segmenting Sensitive Health Topics https://healthcaresecprivacy.blogspot.com/2019/02/segmenting-sensitive-health-topics.html
What is DS4P? https://healthcaresecprivacy.blogspot.com/2019/02/what-is-ds4p.html

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 07:54):

Related to DS4P (and to some other thread that I can't find now): should we have a resource where we can identify and classify a set of elements within a resource (or a graph of resources)? Something like a graphdefinition but at the level of element

view this post on Zulip John Moehrke (Sep 17 2019 at 11:48):

not sure why that would be useful. The security tags are in a well-known and fixed location in all Resource types, so one can find the security tags without knowing what kind of resource it is. To duplicate these tags elsewhere would just open up failure-modes that one would need to deal with.

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 12:41):

It's about tagging individual elements on the definitional space. E.g. "on any medrequest, subject is pii, medicationreference is not" (sorry for simplistic example)

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 12:42):

And are security tags at the element level or the resource level? Even at the instance level we would need that (at least the way I see it; I may be missing something)

view this post on Zulip David Pyke (Sep 17 2019 at 12:44):

Security tags are at the resource level. Having to label every individual element would be a difficult design

view this post on Zulip David Pyke (Sep 17 2019 at 12:44):

I'm not familiar with a use case that would need the individual elements to have different sensitivity

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 12:47):

Security tags are at the resource level. Having to label every individual element would be a difficult design

How else do you see DS4P? If I have 1000 prescriptions, is all the info sensitive and you cannot do analysis by drug or by prescriber without breaching privacy?

view this post on Zulip David Pyke (Sep 17 2019 at 12:50):

The resources that would be sensitive would be omitted from your query, or you could have a de-identified query set that removes/de-identifies the subject for research analysis

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 12:50):

And I don't see it as a difficult design. I think it is more robust than using _elements

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 12:51):

How do you define that de-identified query set? Using _elements in each query?

view this post on Zulip David Pyke (Sep 17 2019 at 12:57):

The bulk data fetch could be set up to de-id the resource (for research, that is a strong use case).

view this post on Zulip John Moehrke (Sep 17 2019 at 13:30):

There is no such thing as a general de-identification algorithm. You must create a de-identification algorithm that is specific to the data-set you have, your risk tolerance, and the data your intended use needs. It is only after this level of analysis that you can have a project specific algorithm. Once you have that you can pipe the results of a bulk-data (or any kind of data-set) through a de-identification engine charged with that algorithm to produce a dataset that has been statistically inspected for proper risk reduction. So it is both a process to define the de-identification algorithm, and also to prove the resulting dataset meets risk criteria.
Please see the discussion of this that is within the FHIR specification at http://build.fhir.org/secpriv-module.html#deId

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 13:54):

This is hard... I wanted to ask if there is a way to define "in this server, medicationrequests have their elements classified as follows: patient is level X, medication is level Y, prescriber is level Z"

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 13:55):

I am not simply trusting the client to define which data they want.

view this post on Zulip David Pyke (Sep 17 2019 at 13:55):

No, that's not currently a core capability, it would require the server have that custom ability.

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 13:55):

that is my discussion point. Should we work towards that?

view this post on Zulip David Pyke (Sep 17 2019 at 13:56):

That adds a complete additional layer of data for all resources. That seems excessive rather than having de-id operations/pass-through in the data export/request

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 13:56):

I did not mention a general de-identification mechanism. I am launching this, alongside with the Permission thing, to see what can or should be done.

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 13:58):

That adds a complete additional layer of data for all resources. That seems excessive rather than having de-id operations/pass-through in the data export/request

Well, metadata. (a resource of some sort).
I do not know how de-identification would work here, without having a definition of what to de-identify.

view this post on Zulip John Moehrke (Sep 17 2019 at 13:59):

related to this is the use-case we are talking about Tuesday Q4, where the Da Vinci group (yes, it is US based) has a regulated requirement to specify the minimum data they need on their requests for data.

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 13:59):

This cannot be an all-purpose solution (agree with John that there is no universal de-identification algorithm). We simply need enough metadata to provide a good behaviour

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:01):

whatever algorithm we have for de-identification, it should be defined at the server side, and it should consider things like purpose, etc. I think this is one of the benefits of DS4P - it is not to say "this resource is available, this is not" but rather "this data element is classified as XX, see policies for what to do with it for each request"

view this post on Zulip David Pyke (Sep 17 2019 at 14:02):

That goes far beyond the DS4P and really should be implemented as a side service, perhaps through an enhanced SLS, rather than loading the data into the resources themselves

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:05):

so we don't need to care about interoperability of classification criteria?

view this post on Zulip David Pyke (Sep 17 2019 at 14:06):

We do, and we have resource-level classification. It's moving to element level that is likely outside the 80% use case

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:06):

anyway, this is in the definitional space - we don't load resources, we load something on the structuredefinitions, or we have an extra resource that translates the policy into the metadata graph on element level.

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:07):

not sure about the 80%. 80% of what? Of implementers looking at GDPR?

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:09):

after this discussion I'm more convinced that we are almost there, but not quite. I would like to see this applicable in a reproducible way, and I don't see it yet. Perhaps we need some guidance, but if our starting point is "we have all we need", it's hard to move forward.

view this post on Zulip John Moehrke (Sep 17 2019 at 14:10):

I do see a linkage between the use-case you are explaining, and a potential for a way to encode the 'elimination algorithm', which may be what you have been pushing for as a general permission outside of Consent.

view this post on Zulip John Moehrke (Sep 17 2019 at 14:11):

did I catch the point of the thread?

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:13):

elimination algorithm can be one dimension of this, yes. If you say "when someone asks for deletion, please look at the segmentation graph and policy etc. to see which of the data you need to really delete, keep, or de-identify"
I think we need a common white board instead of a zulip chat..

view this post on Zulip Nick Radov (Sep 17 2019 at 14:25):

There is no such thing as a general de-identification algorithm. You must create a de-identification algorithm that is specific to the data-set you have, your risk tolerance, and the data your intended use needs. It is only after this level of analysis that you can have a project specific algorithm. Once you have that you can pipe the results of a bulk-data (or any kind of data-set) through a de-identification engine charged with that algorithm to produce a dataset that has been statistically inspected for proper risk reduction.

That's all true, but I think we can define a "necessary but not sufficient" baseline as a starting point. In particular the text contents should always be deleted and (possibly) regenerated later. I've noticed that a lot of developers working on de-identification algorithms ignore narrative text. There's no reliable algorithm to remove PII/PHI from narrative text so the only safe approach is to delete the whole thing.
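Nick's baseline is simple enough to sketch. In FHIR the narrative lives in the resource's `text` element; the function name and sample resource below are illustrative:

```python
# Minimal sketch of the "necessary but not sufficient" baseline: strip the
# narrative text from every resource before further de-identification, since
# free text cannot reliably be scrubbed of PII/PHI. The narrative can be
# regenerated later from the remaining structured data if needed.

def strip_narrative(resource: dict) -> dict:
    """Return a copy of the resource with its `text` (narrative) removed."""
    return {k: v for k, v in resource.items() if k != "text"}

patient = {
    "resourceType": "Patient",
    "text": {"status": "generated",
             "div": "<div>John Smith, 42, of Springfield...</div>"},
    "gender": "male",
}
print(strip_narrative(patient))
```

This is only the first pass; the project-specific algorithm John describes still has to handle the structured elements.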

view this post on Zulip Jose Costa Teixeira (Sep 17 2019 at 14:33):

Agree.
This IMO builds on the previous point:
"Text is always a good candidate to remove because..." is one of the entries in a list. The question before was "where is that list?"

view this post on Zulip John Moehrke (Sep 17 2019 at 15:14):

we already recommend that in the FHIR core specification on the security page around de-identification. Please review that and offer improvements

view this post on Zulip Kevin Happel (Oct 27 2021 at 15:17):

We are converting C-CDAs that include DS4P security and confidentiality information to FHIR. We have included the necessary extensions on the FHIR resources, which are contained in a FHIR Bundle with a Composition. These extensions use the official URL for these extensions as listed in the IG. The IG is here: http://hl7.org/fhir/uv/security-label-ds4p/2021Sep/index.html

The issue is that when running this new file through the HL7 Java validator, it returns a structure error stating that the URL does not resolve. If a URL is listed as an official URL in an HL7 IG, should it not resolve to somewhere? Instead it currently returns a 404. Could this be an issue with the validator? Or is it on a todo list because FHIR Data Segmentation for Privacy doesn't have a released version yet?

Example - http://hl7.org/fhir/uv/security-label-ds4p/StructureDefinition/extension-has-inline-sec-label

Any of the Structures: Extension Definitions URLs listed here http://hl7.org/fhir/uv/security-label-ds4p/2021Sep/artifacts.html#structures-extension-definitions are also returning a 404.

Thanks in advance.

view this post on Zulip Lloyd McKenzie (Oct 27 2021 at 15:30):

If you're going to use URLs defined in an HL7 IG, then that IG needs to be listed as a dependency of your own IG to make the definitions available.

view this post on Zulip Grahame Grieve (Oct 28 2021 at 02:43):

@Lloyd McKenzie I don't understand this response - where did an IG enter the picture?

@Kevin Happel how are you invoking the validator?

view this post on Zulip Lloyd McKenzie (Oct 28 2021 at 02:46):

Sorry, I was presuming this was in the context of validating content within an IG. If not, then I think you need to reference the IG when invoking the validator so that the validator loads it?

view this post on Zulip Kevin Happel (Oct 28 2021 at 13:15):

@Grahame Grieve I am invoking the jar file through a PowerShell command called from a simple forms app. We are only doing the conversion, so we don't have an IG. I think @Lloyd McKenzie is correct: I'm not including the DS4P IG in the command line parameters! I'll test this out and report back.

I still have the question if an IG lists an official URL should that URL be expected to resolve to something?
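The fix Kevin describes amounts to passing the DS4P IG to the validator with `-ig` so its extension definitions can be loaded. A hedged example; the package ids, version, and file name below are assumptions (check the IG's download page for the exact package coordinates):

```shell
# Assumed invocation: load US Core plus the DS4P ballot IG so the
# extension-has-inline-sec-label (etc.) definitions resolve locally.
java -jar validator_cli.jar converted-bundle.json \
  -version 4.0.1 \
  -ig hl7.fhir.us.core \
  -ig hl7.fhir.uv.security-label-ds4p
```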

view this post on Zulip Kevin Happel (Oct 28 2021 at 14:12):

@Lloyd McKenzie Thank you. I can confirm this was an oversight on my part. We've defaulted to the US Core profile on the validator, and I forgot the step of adding the new DS4P IG.

view this post on Zulip Lloyd McKenzie (Oct 28 2021 at 15:04):

Canonical urls SHOULD resolve (i.e. it's best practice), but they're not required to. And there are all sorts of reasons why they might not. (draft artifact, so not officially hosted yet; organization can't host content; organization that originally hosted has gone out of business/no longer owns domain name; etc.) The typical way to retrieve resources with a canonical URL is first to look for them locally, then to look for them in whatever your preferred registry(ies) are, and then as a last resort to try resolving the canonical url.

view this post on Zulip Grahame Grieve (Oct 28 2021 at 22:59):

in this case, it's a draft

view this post on Zulip John Moehrke (Oct 29 2021 at 13:24):

so was it draft because it was just the ballot copy?


Last updated: Apr 12 2022 at 19:14 UTC