Stream: argonaut
Topic: Jan22 Ballot: Survey Instruments
Josh Mandel (Feb 03 2022 at 19:44):
From last week's Cross-Group Project call, we had some open discussion on whether/how to represent SDOH survey instruments. I wanted to share my take on the relevant questions (and some proposed answers) here, to help frame follow-on discussion: https://hackmd.io/@jmandel/sdoh-assessments
Josh Mandel (Feb 03 2022 at 19:51):
(FYSA @Robert Dieterle and @Floyd Eisenberg since we were making the case for these capabilities on last week's call.)
Eric Haas (Feb 03 2022 at 19:53):
RE Josh's proposal above: as an alternative, see the current state and this proposal, FHIR-35364.
Josh Mandel (Feb 03 2022 at 19:55):
If I'm reading the proposed "Not persuasive w/ mod" disposition on FHIR-35364 correctly, @Eric Haas, your answer to my Question (1) is "No, that's out of scope"?
Josh Mandel (Feb 03 2022 at 19:57):
It'd be good to answer the design questions first to make sure we have agreement on the requirements, before reaching a decision on the ballot items.
Eric Haas (Feb 03 2022 at 19:58):
this needs a new thread - since it may overwhelm the meeting and block announcements
Josh Mandel (Feb 03 2022 at 20:25):
(Moved!)
Floyd Eisenberg (Feb 04 2022 at 03:12):
@Josh Mandel Thank you for your thoughtful analysis of the situation with respect to survey instruments. As I see the issue, USCDI version 2 specified SDOH Assessment with examples of types of assessments. Unfortunately, it is difficult to reference an evaluation tool and create a generic US Core Screening Response Observation Profile without expanding the scope and stakeholder community, since there is a large clinical and quality improvement community that uses screening response observation instruments to manage clinical care, to provide clinical decision support, and to measure outcomes with respect to quality. Some of those screening instruments are risk-assessment and clinical status tools as well. Such screening tools are widely used in clinical care, and clinicians use EHR products to collect the responses; it is the ability to tie the components together and share them through a FHIR API that seems to be the challenge. I've tried to consider different approaches:
(1) Terminology-based - allow each tool and each of its components to remain observations (i.e., not Observation.component) and use the LOINC AccessoryFiles/PanelsAndForms.csv to find the parent for each component code. This approach requires terminology services that may not be readily available in all cases.
(2) Retain the "must support" proposed for Observation.component in the new profile - this requires significant work on the part of vendors for a generic use case that cannot cover all possible variations in evaluation instruments, some of which have nested components, and thus seems unreasonable.
(3) Suggest use of Questionnaire and QuestionnaireResponse to contain any evaluation instrument of interest (the Questionnaire) and the results (the QuestionnaireResponse). This approach seems analogous to a document-based answer that may create additional overhead in trying to evaluate change over time (and which specific components changed), such that the QuestionnaireResponse would need to be unpacked anyway except for human viewing.
(4) Create a Must Support for Observation.derivedFrom such that each component observation maintains metadata about the parent instrument code from which it is derived (note the US Core proposed profile already includes Must Support for derivedFrom with reference to the same profile). An alternative is the suggestion Josh Mandel provides, using hasMember for the instrument parent code to list its members (note that hasMember is not currently listed as Must Support).
From this review, it seems that Josh's suggestion to have 2 profiles, "SDOH Item" and "SDOH Panel", with the Must Support items he listed makes the most sense. I leave it to the community and more technical folks to determine whether hasMember or derivedFrom is the better approach, but either seems more palatable and useful than the current Observation.component Must Support. I also might consider that while SDOH in the names of both the "Item" and the "Panel" shows consistency with the USCDI v2 requirement, a less restrictive name for the profiles is worthy of consideration.
Floyd Eisenberg (Feb 04 2022 at 03:14):
Note - now that this is a separate stream, I want to be sure that there is good community participation. Will keeping it as a separate stream limit review and comment? We really need good community review.
Josh Mandel (Feb 04 2022 at 03:14):
This is a topic within the Argonaut stream; everyone subscribed to the stream should see this topic.
Josh Mandel (Feb 04 2022 at 03:15):
Thanks for the detailed response here! I take it from your analysis that you agree we should treat (1) and (2) (as described in my write-up) as design requirements.
Floyd Eisenberg (Feb 04 2022 at 03:16):
Yes, I do agree with your write-up
Floyd Eisenberg (Feb 04 2022 at 03:18):
Note - the discussion really applies to all four trackers: FHIR#35364, FHIR#35363, FHIR#35282, and FHIR#34752 - and to their proposed resolutions.
Floyd Eisenberg (Feb 07 2022 at 15:13):
edited - RE: "this needs a new thread" comment above (apparently the entry before was confusing). I don't see others participating in this thread, but we really need input on the design issues.
Josh Mandel (Feb 07 2022 at 15:52):
This is a dedicated thread about survey instruments. It's visible to everyone subscribed to #argonaut .
Floyd Eisenberg (Feb 07 2022 at 17:08):
@Bryn Rhodes
Floyd Eisenberg (Feb 07 2022 at 20:56):
@Robert McClure Interested in your take on the suggestions posed by Josh Mandel re: design requirements of (1) sdoh_item and (2) sdoh_panel (https://hackmd.io/@jmandel/sdoh-assessments).
Floyd Eisenberg (Feb 07 2022 at 20:59):
@Robert McClure @Bryn Rhodes @Lloyd McKenzie Can you help me clarify when it is appropriate to use .derivedFrom versus .hasMember? I.e., a panel code .hasMember of a number of component questions that each might be referenced as an observation and connected as members of the "parent" ---- versus each "component" observation .derivedFrom the parent panel code. Which might be better for an observation with 1 answer versus an observation with multiple responses allowed? Your explanation will help with this discussion. Thank you.
Lloyd McKenzie (Feb 07 2022 at 21:15):
My understanding:
derivedFrom = "The value of this Observation or its components was determined - at least in part - from the value of the referenced resource"
hasMember = "This is a 'grouping' observation which typically has no value of its own, that is grouping the referenced observations"
Though probably best to vet that with OO.
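Lloyd's two definitions can be sketched as resource shapes. This is a hypothetical sketch, not profile guidance: the ids and the score value are made up, and the panel code is just an illustrative LOINC code.

```python
# Hypothetical sketch (not normative FHIR): a grouping "panel" Observation
# uses hasMember to point at its constituent item Observations, while a
# derived score uses derivedFrom to point at what its value was computed from.

panel = {
    "resourceType": "Observation",
    "id": "sdoh-panel-1",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "93025-5"}]},
    # Grouping observation: typically has no value of its own
    "hasMember": [
        {"reference": "Observation/item-1"},
        {"reference": "Observation/item-2"},
    ],
}

score = {
    "resourceType": "Observation",
    "id": "risk-score-1",
    "status": "final",
    "code": {"text": "hypothetical summary score"},
    "valueInteger": 7,
    # The score's value was determined (at least in part) from these items
    "derivedFrom": [
        {"reference": "Observation/item-1"},
        {"reference": "Observation/item-2"},
    ],
}

# A grouping panel carries no value element; the derived score does.
assert "valueInteger" not in panel and "valueInteger" in score
```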
Floyd Eisenberg (Feb 07 2022 at 21:37):
@Hans Buitendijk Can you comment on Lloyd's description of .derivedFrom and .hasMember above? Might I use .hasMember to reference a risk survey tool that has a value imputed from all the results of the components listed as members?
Bryn Rhodes (Feb 07 2022 at 22:43):
That's my read as well. I would not expect derivedFrom to represent the relationship of an observation to its parent panel; that would be hasMember (from the panel to the constituent observations). We use derivedFrom to indicate the Observation was extracted from a QuestionnaireResponse.
Josh Mandel (Feb 07 2022 at 22:49):
These are orthogonal concepts. From a US Core perspective, I think the most important thing is to have a consistent structure for representing granular and nested observations (i.e., hasMember semantics). We don't have any expectations that such Observations will have originated from Questionnaires (i.e., derivedFrom semantics), though of course that's one path.
Eric Haas (Feb 07 2022 at 22:51):
If you are looking for a direct link from an observation to the assessment tool the questions came from, you need to use either Provenance or the relatedArtifact extension. To fetch based on a particular questionnaire or form, you would use a chained search on the derived-from search parameter, a la:
GET [base]/Observation?patient=Patient/123&derived-from:QuestionnaireResponse.questionnaire.url=http://somequestionnaire12345
or
GET [base]/Observation?patient=Patient/123&derived-from:QuestionnaireResponse.questionnaire.title=My%20Questionnaire
Eric Haas (Feb 07 2022 at 22:53):
The current profiles in both US Core and SDOH have a 0..1 MS on derivedFrom.
Eric Haas (Feb 08 2022 at 04:35):
mock up of Josh's proposal with a few edits: https://hackmd.io/ObMVJ_ohQu2QsFZMiKYSBg?view
This demonstrates the need for multiple profiles and terminologies and some 'Splainin' on how it all fits together...
Robert McClure (Feb 08 2022 at 14:05):
FWIW, there is this further clarification in the Observation resource. The application of these elements to survey instruments described above seems reasonable. In general I think use of .hasMember should be somewhat constrained: it should be used when the primary observation is, at least in part, defined by these references, each of which is distinct and used as-is.
.derivedFrom really is anything else - but more specifically, the primary observation is doing some transformation of the referenced observation to yield a "derived" result.
Daniel Vreeman (Feb 08 2022 at 15:11):
In the LOINC community we had a lot of discussion about the various "collections" of things across domains (lab, clinical, forms, structured reports, etc.) that have child members, and found far more similarities than differences - and thus elected to use a common structure ("panels") to represent them. Many SDOH surveys are remixes of other instruments whose items can stand on their own as interesting/important.
Seems that .hasMember to indicate child observations is a good way to go, though many SDOH (and other) surveys are nested (e.g., subpanels for each domain). The description of .hasMember mentions DiagnosticReport for nested grouping.
If/when there is a QuestionnaireResponse, you'd use .derivedFrom to link.
Occasionally, but not always, the instrument will have one or more summary/total score variables, so conceivably those could point to the individual observations that contributed to the summary AND the QuestionnaireResponse if there was one?
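The nested shape Daniel describes (a survey panel whose members include per-domain sub-panels) can be sketched, with hypothetical ids, along with a small walker that recovers the leaf item observations from the hasMember tree:

```python
# Sketch of a nested panel: "survey" has a sub-panel ("section-a") and a
# direct item among its members. All ids here are hypothetical.

observations = {
    "survey": {"hasMember": ["section-a", "item-3"]},
    "section-a": {"hasMember": ["item-1", "item-2"]},
    "item-1": {}, "item-2": {}, "item-3": {},
}

def flatten(obs_id):
    """Walk hasMember links recursively, yielding leaf item ids."""
    members = observations[obs_id].get("hasMember", [])
    if not members:
        yield obs_id
        return
    for m in members:
        yield from flatten(m)

assert list(flatten("survey")) == ["item-1", "item-2", "item-3"]
```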
Floyd Eisenberg (Feb 08 2022 at 15:51):
@Eric Haas @Daniel Vreeman I like the multiple profile examples and agree with the .hasMember (which needs to be Must Support) - would the "panel" profile allow responses to individual component observations to be singular or multiple (depending on the panel definition)? I.e., would multiple responses to one question be handled as multiple observations based on the same code? I think that was part of the original ask.
Isaac Vetter (Feb 08 2022 at 17:05):
What are existing implementations doing?
Michele Mottini (Feb 08 2022 at 17:44):
We support hasMember outbound and inbound only referencing contained observations
Eric Haas (Feb 08 2022 at 17:49):
to be clear I am not advocating this approach just trying to flesh out how it would look.
Eric Haas (Feb 08 2022 at 17:54):
@Floyd Eisenberg the components are used for multipart answers as demonstrated in this example. I think I will capture a screenshot of the alternative option adjacent to this one so we can compare side by side.
Eric Haas (Feb 08 2022 at 19:11):
I now have the two options mocked up with enough detail to cause implementers heartburn.
Isaac Vetter (Feb 08 2022 at 21:48):
Overall, we think that the emphasis on screening instrument/survey is a mistake.
Isaac Vetter (Feb 08 2022 at 21:48):
What matters is the ability to calculate risk assessments based upon discrete and calculable answers. Surveys share questions; there's little value in transmitting them grouped by survey.
Isaac Vetter (Feb 08 2022 at 21:48):
An interop exchange would always transmit all SDOH data, not just the observations limited to a specific survey.
Floyd Eisenberg (Feb 08 2022 at 21:54):
@Isaac Vetter I think we need to explore your comment more deeply. Yes, the ability to calculate risk assessments is critical when the evaluation tool has such a calculation as part of the tool, and the calculation is generally performed in a SMART app or in an EHR to provide an answer. However, the individual responses do have value for trending in addition to the overall score (if indeed a score is involved). There is value in transmitting grouping by survey, and all data related to any survey could not necessarily be transmitted as a whole, unless I am missing something. There are surveys that have unique entries that have meaning only as part of specific tools, and the USCDI request is not specific to any given survey.
Isaac Vetter (Feb 08 2022 at 21:55):
Hey Floyd,
"the individual responses do have value for trending in addition to the overall score"
trending responses doesn't have anything to do with grouping as a survey, though, right?
Isaac Vetter (Feb 08 2022 at 21:56):
"There is value in transmitting grouping by survey"
Can you please point to an authoritative example of this value?
Isaac Vetter (Feb 08 2022 at 21:57):
"all data related to any survey could not necessarily be transmitted as a whole"
I think a query on observation with a bunch of LOINC codes would transmit all data related to a survey, wouldn't it?
Isaac Vetter (Feb 08 2022 at 21:58):
"There are surveys that have unique entries that have meaning only as part of specific tools"
Please reference these surveys.
Floyd Eisenberg (Feb 08 2022 at 22:01):
Isaac, no - there are surveys that have value as a whole, but individual results can be trended for specific improvement in individual care over time. These may not be SDOH-specific, but they are evaluation instruments for clinical care, and measure developers have sought to look for specific improvements in elements. That includes the asthma scales. I will request some examples from the quality measure community. However, as for a query on Observation with a bunch of LOINC codes - I don't know that it would transmit all data related to the survey unless the observations were linked somehow with .hasMember, or one would have to use the LOINC tables to reconnect them (at least that's what I have been given to understand). Note that CDS may also need to refer to components of a survey to suggest specific interventions.
Floyd Eisenberg (Feb 09 2022 at 18:52):
@Eric Haas for your example - your words: ":thinking_face: Do we require support for chained search to fetch all observations by survey instrument using the derived-from searchparameter?" Answer/Question: I thought we were proposing using .hasMember (as must support) for the survey tool to indicate which component codes are members (rather than the component code listing .derivedFrom). .derivedFrom seems to indicate the result of the component is derived from something; .hasMember seems more appropriate to the use case.
Josh Mandel (Feb 09 2022 at 19:02):
I think we should separate out our goals for required query parameters, until we've got a handle on the required data models.
Josh Mandel (Feb 09 2022 at 19:03):
There are various ways to improve the ergonomics of queries, but even a simple two-step query process is feasible if we don't want to get into chaining. I would suggest we defer the discussion about query ergonomics.
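The two-step process Josh alludes to can be sketched without chained search. This is a hypothetical sketch: the base URL, patient id, panel code, and member ids are made up, and nothing here is a US Core requirement.

```python
# Two-step fetch sketch: (1) query panel Observations by instrument code,
# (2) fetch the member Observations referenced in the panel's hasMember.

BASE = "https://example.org/fhir"  # hypothetical server

def panel_query(patient_id, panel_code):
    # Step 1: find the panel observation(s) for this instrument
    return f"{BASE}/Observation?patient={patient_id}&code=http://loinc.org|{panel_code}"

def member_query(panel_resource):
    # Step 2: fetch all members referenced by the panel in one search
    ids = [ref["reference"].split("/")[1]
           for ref in panel_resource.get("hasMember", [])]
    return f"{BASE}/Observation?_id={','.join(ids)}"

panel = {"hasMember": [{"reference": "Observation/a1"},
                       {"reference": "Observation/a2"}]}
assert member_query(panel).endswith("_id=a1,a2")
```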
Eric Haas (Feb 09 2022 at 20:06):
hasMember points to other Observations in a group or panel.
derivedFrom points to the instrument it came from, like a QuestionnaireResponse or a PDF.
One of Josh's concerns in his ballot comment was how a client would be able to fetch all the answers from the same survey, which is a reasonable question. I have provided one answer to address that concern. Search is tied into profiling, so it is an important consideration.
Josh Mandel (Feb 09 2022 at 20:27):
"... a reasonable question. I have provided one answer to address that concern."
Can you spell out the answer for me?
Eric Haas (Feb 09 2022 at 23:47):
FHIR RESTful search based on the derived-from search parameter.
Josh Mandel (Feb 10 2022 at 01:33):
But that would only work if there was also a QuestionnaireResponse available.
Josh Mandel (Feb 10 2022 at 01:35):
And if that was available, we wouldn't need observations. The question for me is what core set of capabilities should be consistently and universally supported. I doubt that core set needs to include multiple representations (requiring both observations and questionnaire responses hardly seems parsimonious).
Eric Haas (Feb 10 2022 at 03:36):
If the QR was available we would still need the observations to search out individual QA pairs; otherwise why go to all the trouble of data extraction in the SDOH guide? All I am saying is that derived-from is one way to tie the observations together, and it doesn't need to be pointing to a FHIR QR.
Josh Mandel (Feb 10 2022 at 04:04):
If both of the following are true:
- The one defined way to tie observations together is based on co-occurrence in a FHIR QuestionnaireResponse
- USCDI-enabled EHRs are not required to support FHIR QuestionnaireResponse
... then it follows that USCDI-enabled clients will not have a reliable way to tie observations together.
(This is why I wouldn't introduce a dependency on QuestionnaireResponse into the grouping story.)
Michele Mottini (Feb 10 2022 at 04:24):
Seconding Isaac's comment above. We surely have SDOH data, but in general the original surveys and attendant grouping are long lost.
Josh Mandel (Feb 10 2022 at 04:26):
Are you saying you never know how survey questions are grouped, or just that you don't always know?
Lloyd McKenzie (Feb 10 2022 at 04:31):
Even if you do know, the way things are grouped in the Questionnaire isn't necessarily the way they ought to be grouped in a 'panel' Observation.
Josh Mandel (Feb 10 2022 at 04:56):
I'm just trying to ascertain whether it's sometimes known that "this set of observations was collected as part of one instrument".
Michele Mottini (Feb 10 2022 at 15:26):
I am saying that we aggregate data from various sources, and what we get is often (always?) just the raw observations; we do not know if they came from a survey, or which survey. So if the SDOH profiles mandate or rely on that kind of information, we won't be able to comply.
Josh Mandel (Feb 10 2022 at 15:33):
The mandate would be to ensure that systems support grouped observations, not that observations always are grouped.
Isaac Vetter (Feb 10 2022 at 18:21):
the better question here is -- @Michele Mottini, why is it reasonable for you to not know what survey the observation came from? Would it be reasonable to never know what survey an observation came from? My point is that the survey (for SDOH domains at least) isn't important.
Josh Mandel (Feb 10 2022 at 18:24):
I'm not sure that's a "better question" -- along those lines I think you'd need to ask:
- should support for US Core SDOH imply that your system can represent data collected from survey instruments?
Josh Mandel (Feb 10 2022 at 18:26):
That's the question we need to answer before hashing out profiles and conformance requirements. (For me it's a clear "yes".)
Isaac Vetter (Feb 10 2022 at 18:33):
Josh, I think your characterization is incorrect. How about:
Isaac Vetter (Feb 10 2022 at 18:33):
should support for US Core SDOH require that your system can represent that data was collected as part of a survey instrument?
Isaac Vetter (Feb 10 2022 at 18:33):
I'm not (nor, I think, is anyone else) disputing that systems should be able to represent data collected as part of a survey being completed. I'm just pointing out that maintaining a reference to that survey isn't important, and I haven't seen any evidence to the contrary above.
Josh Mandel (Feb 10 2022 at 18:35):
(will continue on call happening now)
Michele Mottini (Feb 10 2022 at 18:38):
We handle SDOH data discretely, we do not really care if it comes from a survey
Isaac Vetter (Feb 10 2022 at 18:39):
yup, so do we
Cooper Thompson (Feb 10 2022 at 18:44):
The issue with "multi-select" is that if you refactor the questionnaire to be "better"(?) and have individual 1-5 questions about "how hard has it been to get x" and "how hard has it been to get y", then you need to maintain multi-select support even if you collect that same data in a more discrete way.
Brett Marquard (Feb 10 2022 at 18:52):
Not sure what you mean
"then you need to maintain multi-select support even if you collect that same data in a more discrete way."
Isn't it just a new set of observations?
Brett Marquard (Feb 10 2022 at 18:52):
And maybe an updated instrument so folks would know how to get the key data in version 1.0 vs 2.0
Cooper Thompson (Feb 10 2022 at 19:01):
Yeah, I guess my normal "legacy data" speech undermines my point that future, improved questionnaires would negate the need for MS on multi-select answers... Dang.
Floyd Eisenberg (Feb 10 2022 at 19:02):
It seems there are two parallel yet related questions. I understand Josh's suggestion is:
Floyd Eisenberg (Feb 10 2022 at 19:04):
Sorry - entered too soon. Josh's suggestion as I understand it - questions for modeling:
(1) a set of answers collected as part of the same instrument (these six questions are part of one survey), where any single question may have 1 or more answers, AND (2) an individual assessment observation which may have 1 or more answers. I'm finding it confusing when we include the single question with one or more answers both as part of the "single observation" and as part of the "panel" - different evaluation tools have different approaches, and we need to support a generic approach to handle them.
Floyd Eisenberg (Feb 10 2022 at 19:15):
And please note, the USCDI v2 element is SDOH Assessment. However, US Core balloted a more generic approach, "US Core Observation Screening Response" (http://build.fhir.org/ig/HL7/US-Core/StructureDefinition-us-core-observation-screening-response.html). Thus the proposed profile in US Core would encompass any other structured evaluation/risk-assessment tool, including those generated as PROMIS measures included in LOINC - and many such components have value only as part of the tool in which they exist.
Josh Mandel (Feb 10 2022 at 20:13):
I think it would be a very useful baseline capability for US Core to define generic "observation item" and "observation panel" profiles. If we had such building blocks, then using those with a set of specializations for SDOH items and SDOH panels would be quite natural.
Josh Mandel (Feb 10 2022 at 20:14):
However, I think the design discussion is getting a little bit ahead of the consensus right now. On the phone call today I tried to take a straw poll about whether we think solving these problems is in scope. It looks like we won't get a chance to finish that poll until next week though.
Hans Buitendijk (Feb 14 2022 at 23:24):
@Floyd Eisenberg: Lloyd's general descriptions of derivedFrom and hasMember follow closely the definitions here: https://build.fhir.org/observation-definitions.html. Depending on the definition of "risk survey tool", hasMember is potentially appropriate. It may be that the survey "tool" is the Observation whose hasMember points to the individual results ("survey tool components") of that "tool". I.e., if the tool is instantiated as an observation instance, that could flow.
Josh Mandel (Feb 15 2022 at 18:41):
Since we seem to have agreement on the scope (i.e., systems need to support some way to communicate sets of responses together, as a set), it's worth evaluating proposed technical approaches. Based on conversation with @Gino Canessa I've updated my doc to start listing trade-offs between "Observation-focused" and "Questionnaire-focused" approaches to SDOH assessments
https://hackmd.io/@jmandel/sdoh-assessments#Pros-and-cons
Floyd Eisenberg (Feb 15 2022 at 19:50):
Thanks Josh, for the updated approaches and trade-offs. I tend to lean toward option A, and for this reason: I have heard the concern about multiple-choice responses, and I need more context from experts in the field of SDOH. I specifically looked at PRAPARE (https://forms.loinc.org/93025-5) and I basically see one survey tool code (93025-5) which has 5 sections: Personal Characteristics (94043-8), Family & Home (93042-0), Money & Resources (93041-2), Social & Emotional Health (93040-4), and Optional Additional Questions (93039-6). Each of these "sections" has a number of question components, but each of those questions has a single response chosen from a list of normative response options. So I see this as an observation.hasMember with each section as an observation.hasMember, and the subsequent questions as single observations each with one result. I don't know that I would consider that a multiple response, but rather a nested set of observations. If that is the case, Option A should work well, and the US Core guidance could provide the PRAPARE tool as an example. I look forward to comments from others.
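Floyd's reading of PRAPARE can be sketched as a two-level hasMember tree. The section codes are the ones he lists; the question-level ids below are hypothetical placeholders, not real LOINC codes.

```python
# Survey code -> section codes -> (hypothetical) question observation ids,
# mirroring Floyd's "nested set of observations" reading of PRAPARE.

prapare_tree = {
    "93025-5": ["94043-8", "93042-0", "93041-2", "93040-4", "93039-6"],
    "94043-8": ["q-1", "q-2"],  # Personal Characteristics questions (hypothetical ids)
}

def member_count(tree, code):
    """Number of direct hasMember children for a given code."""
    return len(tree.get(code, []))

assert member_count(prapare_tree, "93025-5") == 5
```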
Josh Mandel (Feb 15 2022 at 20:04):
"each of those questions has a single response chosen from a list of normative response options."
Ah --- the problem you're seeing is just the result of a bug in the LOINC definitions. (See my report here.) Once that's fixed, the form demo at https://forms.loinc.org/93025-5 will show these as "Check all that apply"
Eric Haas (Feb 15 2022 at 21:31):
I look at these options in terms of whether we inject US Core at the beginning (pre data extraction), the end (post data extraction), or both ends of the data collection chain. So how data is collected and stored will skew your opinion. Also, I noted there is a fair mix of FHIR R5 search concepts in the mix, so don't get thrown off by that.
Floyd Eisenberg (Feb 16 2022 at 14:20):
"each of those questions has a single response chosen from a list of normative response options."
I see now that the issue is a bug in the LOINC display - there are some questions that should be "choose one" and some that allow multiple responses. I'm wondering if observation.value can have a cardinality of 0..* to accommodate the multi-select without interfering with the other modeling. (Sorry if this sounds like a naive response; I'm trying to help get to some solution.)
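On the cardinality question: Observation.value[x] is 0..1 in FHIR, so a multi-select answer cannot be a repeating value on one Observation. One shape that fits the hasMember discussion above is one Observation per selected answer, all sharing the question code and grouped under the panel. This is a hypothetical sketch; the codes and answer texts are made up.

```python
# Multi-select sketch: each selected answer becomes its own Observation with
# the same (hypothetical) question code; the panel groups them via hasMember.

QUESTION_CODE = "example-question-code"  # hypothetical, not a real LOINC code

answers = ["food", "housing"]  # two selections for one multi-select question

item_observations = [
    {
        "resourceType": "Observation",
        "id": f"ans-{i}",
        "status": "final",
        "code": {"text": QUESTION_CODE},
        "valueCodeableConcept": {"text": a},  # value[x] is 0..1 per Observation
    }
    for i, a in enumerate(answers)
]

panel = {
    "resourceType": "Observation",
    "status": "final",
    "hasMember": [{"reference": f"Observation/ans-{i}"}
                  for i in range(len(answers))],
}

# All selections share the question code, one Observation each
assert all(o["code"]["text"] == QUESTION_CODE for o in item_observations)
```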
Brian Postlethwaite (Feb 16 2022 at 14:29):
The QuestionnaireResponse for data entry doesn't preclude the Observations as the final output target too.
(Using observation-based data extraction is actually pretty simple to set up.)
Having this capability could then lower the bar for data capture: it could leverage a SMART app launch that does the SDOH questionnaires to generate the QR and then extract into the Observations, happily moving the complexity - and also then use pre-pop to grab any existing data, but still be able to append any that doesn't already exist.
Josh Mandel (Feb 16 2022 at 14:31):
Agreed; this is akin to (C) in my write-up.
Brian Postlethwaite (Feb 16 2022 at 14:31):
For me, I think profiling both and how they work together would be awesome - then we'd be able to leverage both where required.
But on the question of where this data ends up - I think that really does belong down in Observations... (and supporting QR if that's where it was originally collected from).
Brian Postlethwaite (Feb 16 2022 at 14:32):
Yup, pretty close to (c)
Brian Postlethwaite (Feb 16 2022 at 14:32):
Both @Paul Lynch and I could demo that stuff working now with some existing systems.
Josh Mandel (Feb 16 2022 at 14:33):
The main question we're looking at right now is how to be judicious with requirements in US Core while still meeting the policy objectives.
Eric Haas (Feb 16 2022 at 16:19):
"for me, I think profiling both and how they work together would be awesome - then be able to leverage both where required."
The SDOH guide has already done this.
Eric Haas (Mar 01 2022 at 19:36):
Based on the Feb 17th call proposal to include both QuestionnaireResponse and Observations to record and store assessment responses in FHIR, I am drafting a mock-up of how this would look here. This is preliminary, as I have only completed some of the technical artifacts. I will draft the narrative introduction, quick starts, and overall guidance later today. Based on this, I will update all the assessment-related trackers so we can review and vote on them on the Thursday call.
Last updated: Apr 12 2022 at 19:14 UTC