Stream: genomics / eMerge Pilot
Topic: dr-relatedArtifact v dr-supportingInfo
Larry Babb (Apr 09 2019 at 12:22):
In the diagnostic report profile the 2 elements (dr-relatedArtifact and dr-supportingInfo) both reference the idea of containing "supporting info".
dr-relatedArtifact has the description "Citations and supporting info". Is the supporting info in that description meant to be only supporting info for the citations and thus the dr-relatedArtifact element is really about "literature references and citations"? Or is it supposed to mean that the element can reference both citations and supporting info as separate concepts?
In the longer description of dr-relatedArtifact it reads as follows
References to literature that supports the assertions made within the report, describes the methodology used in testing or other information relevant to the interpretation of the report.
1. From this longer description it would seem that we can use citations that reference the assertions (can we use citations that reference the testing practices themselves - these are typically put in separate report sections?).
2. It also says that we can describe the methodology used in testing. So if we have complex steps in a genetic testing assay broken down into the technical methodology versus the several analytic methodologies, then should we group them all here together? If so, then it seems it will be impossible to group the technical vs analytic methods with the results for each. If not, then can someone please indicate how to perform the groupings so the methodologies can be associated with the results for the various services performed?
3. Finally it would be extremely helpful to have one or two examples of how "other information relevant to the interpretation of the report" related artifacts would be used.
Again, dr-relatedArtifact seems like it can contain a number of different useful items or groups of items. The challenge for us is understanding 1) whether it is correct to use this for everything that doesn't fit somewhere else, and 2) how it can be used to organize sets of related artifacts so they can be linked or associated to particular results elsewhere in the report.
Regardless of whether dr-relatedArtifact is meant to be primarily (only?) citations and citation supporting info, there is also the open question of the precise usage of dr-supportingInfo. It is currently described as "Other resources that support report". Again, this seems fairly broad and open to interpretation. By "other", are we left to determine that if there is no other "better" spot in the report element list, then it can go here? And does the type "Reference()" indicate that we can put any resource in that element? I've seen "Canonical(any)" in other "type" values for elements that can presumably take on any resource or data type defined in FHIR.
Kevin Power (Apr 10 2019 at 02:56):
I will have to admit, I don't think I had considered using dr-relatedArtifact for things like methodology. But I don't know that we have a better answer today other than the coded values we can support in Observation.method. If you have just a textual description, I suppose you could use Observation.method.text? Do others have comments?
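As a minimal sketch of what this suggestion could look like, here is an Observation carrying a free-text methodology, represented as a plain Python dict of the resource JSON. The id and the method text are illustrative placeholders, not from any real report:

```python
# Sketch: an Observation whose method is conveyed only via CodeableConcept.text,
# since no coded value exists for an LDT's methodology.
observation = {
    "resourceType": "Observation",
    "id": "example-variant-obs",  # hypothetical id
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "69548-6",  # LOINC: Genetic variant assessment
            "display": "Genetic variant assessment",
        }]
    },
    "valueCodeableConcept": {"text": "Positive"},
    "method": {
        # Text-only CodeableConcept: no coding element at all.
        "text": "Targeted NGS panel; paired-end 100 bp reads on Illumina HiSeq; "
                "variants called with an in-house pipeline.",
    },
}

# A text-only CodeableConcept is valid FHIR: coding is 0..*.
assert "coding" not in observation["method"]
```

A consumer that only understands coded methods would simply fall back to displaying the text, which matches the "walk before run" approach discussed later in this thread.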
Jamie Jones (Apr 10 2019 at 16:10):
I'm not certain why dr-relatedArtifact is 0..1 — it seems you could only include one citation? Regardless, I would envision using Observation.method wherever possible (I believe that's what Bob M has been doing). The Implication profiles have their own 0..* relatedArtifact extension as well.
For dr-supportingInfo, it seems the References are meant to be just the listed options of (FamilyMemberHistory | RiskAssessment | Observation | DocumentReference). It looks like it was mainly included for linking FamilyMemberHistory.
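A rough sketch of that FamilyMemberHistory linkage, assuming the supporting-info extension URL from the genomics-reporting build (the URL, ids, and report code here are assumptions and may differ in the published IG):

```python
# Sketch: a DiagnosticReport linking a FamilyMemberHistory through the
# dr-supportingInfo extension. Extension URL and resource ids are hypothetical.
SUPPORTING_INFO_URL = (
    "http://hl7.org/fhir/uv/genomics-reporting/StructureDefinition/supporting-info"
)

report = {
    "resourceType": "DiagnosticReport",
    "id": "genomics-report-1",  # hypothetical id
    "status": "final",
    "code": {"text": "Genomic analysis report"},
    "extension": [{
        "url": SUPPORTING_INFO_URL,
        # One of the allowed targets: FamilyMemberHistory | RiskAssessment |
        # Observation | DocumentReference.
        "valueReference": {"reference": "FamilyMemberHistory/mother-hx"},
    }],
}
```

Because the extension's value is a Reference restricted to those four listed types, it does not behave like a free-for-all "anything goes" slot, which answers part of Larry's question about Reference() above.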
Kevin Power (Apr 10 2019 at 16:26):
I didn't focus on the comparison to supporting info, but Jamie's points are correct. Supporting Info are typically other results or clinical data like Family History that the lab might want to reference. So I don't think it is a fit for 'methodology'.
@Larry Babb - Can you post an example or two of what you consider methodology? I typically see it as a paragraph or two on the example reports I have seen.
I think the things I have seen from Bob M have been much more straight forward - using a LOINC code or two.
Larry Babb (Apr 11 2019 at 01:31):
From the Baylor eMERGE reports, this is the Methodology section (the LMM counterpart has one that is equally descriptive)...
Methodology:
1. eMERGE-Seq Version 2 NGS Panel: for the paired-end pre-capture library procedure, genome DNA is
fragmented by sonicating genome DNA and ligating to the Illumina multiplexing PE adapters (reference 1). The
adapter-ligated DNA is then PCR amplified using primers with sequencing barcodes (indexes). For target
enrichment capture procedure, the pre-capture library is enriched by hybridizing to biotin labeled in-solution
probes (reference 2) at 56°C for 16 - 19 hours. For massively parallel sequencing, the post-capture library DNA
is subjected to sequence analysis on Illumina HiSeq platform for 100 bp paired-end reads. The following quality
control metrics of the sequencing data are generally achieved: >70% of reads aligned to target, >99% target
base covered at >20X, >98% target base covered at >40X, average coverage of target bases >200X. SNP
concordance to SNPTrace genotype array: >99%. This test may not provide detection of certain genes or portions
of certain genes due to local sequence characteristics or the presence of closely related pseudogenes. Gross
deletions or duplications, changes from repetitive sequences may not be accurately identified by this
methodology. Genomic rearrangements cannot be detected by this assay.
2. As a quality control measure, the individual's DNA is also analyzed by a SNP-array (Fluidigm SNPTrace panel
(reference 3) ). The SNP data are compared with the NGS panel data to ensure correct sample identification and
to assess sequencing quality.
3. Data are analyzed by the Mercury 3.4 (reference 4) pipeline. The output data from Illumina HiSeq are converted from
bcl file to FastQ file by Illumina bcl2fastq 1.8.3 software, and mapped to the hg19 human genome reference by the BWA
program (reference 5). The variant calls are performed using Atlas-SNP and Atlas-indel developed in-house by BCM
HGSC. Copy number variants were detected using Atlas-pcnv v0, developed in-house by the BCM HGSC. Variant
annotations are performed using the Cassandra tool, developed in-house. Neptune version v1.3 was used to match
variants against curated variants in the VIP database version [2018-09-27-17-38-32.vip] and generate this report.**
4. The variants were interpreted according to ACMG guidelines (reference 6) and patient phenotypes. Synonymous
variants, intronic variants not affecting splicing site, and common benign variants are excluded from interpretation
unless they were previously reported as pathogenic variants. Reviewed variants are added to the VIP database for
inclusion on future reports. It should be noted that the interpretation of the data is based on our current understanding of
genes and variants at the time of reporting.
Clinical interpretation and reporting are provided for pathogenic and likely pathogenic variants as requested by the
Children's Hospital of Philadelphia for the following 68 medically actionable genes: ACTA2, ACTC1, APC, APOB, BMPR1A,
BRCA1, BRCA2, CACNA1A, CACNA1S, COL3A1, COL5A1, DSC2, DSG2, DSP, FBN1, GLA, HNF1A, HNF1B, KCNE1, KCNH2,
KCNJ2, KCNQ1, LDLR, LMNA, MEN1, MLH1, MSH2, MSH6, MUTYH, MYBPC3, MYH11, MYH7, MYL2, MYL3, MYLK, NF2, OTC,
PALB2, PCSK9, PKP2, PMS2, POLD1, POLE, PRKAG2, PTEN, RB1, RET, RYR1, RYR2, SCN5A, SDHAF2, SDHB, SDHC, SDHD,
SMAD3, SMAD4, STK11, TGFBR1, TGFBR2, TMEM43, TNNI3, TNNT2, TP53, TPM1, TSC1, TSC2, VHL, WT1, the following
medically actionable SNPs: rs77931234, rs387906225, rs79761867, rs386834233, rs113993962, rs397509431, rs6467,
rs6025, rs80338898, rs1801175, rs1800562, rs28940579, rs61752717, rs193922376. For autosomal recessive disorders,
only homozygous or biallelic variants will be returned. Variants in exon 3 of the FLG gene are not reported.
5. Variants related to patient phenotypes are confirmed by Sanger sequencing if the variant has been observed and
confirmed fewer than 5 times by our laboratory or the Baylor Genetics Laboratory. Sanger confirmation is noted in the
'Notes' section of the tables if performed.
6. For the pharmogenomic variants, the star alleles are determined based on the variants detected by this assay. Alleles
reported for TPMT are limited to *1, *2, *3A, *3B, *3C and *4. Alleles reported for CYP2C19 are limited to *1, *2, *4A, *4B,
*5, *6, *7, *8, *17. If reported, alleles for DPD are limited to *1, *2A , *13 and rs67376798. Alleles reported for CYP2C9
are limited to *1, *2 and *3; and rs9923231 for VKORC1. Additional rare star alleles have been reported with reduced or
no function for TPMT, CYP2C19 and DPD; however, the variants defining these additional star alleles are not detected with this assay. For SLCO1B1, this assay only detects rs4149056. The minor C allele at rs4149056 defines the SLCO1B1*5
(rs4149056 alone) but also tags the *15 and *17 alleles. Thus a *5 allele may represent a *15 or *17 allele. However, the
magnitude of the phenotypic effect is similar for *5, *15, and *17 alleles.
** The VIP variant database was developed in conjunction with Baylor Genetics and the Partners Healthcare Laboratory
for Molecular Medicine.
Bret H (Apr 11 2019 at 13:30):
There is some useful data in there about the region studied, which could be used for queries on what areas of the patient's genome have been interrogated. It would be a shame to send it as narrative. But the method is much more complex than a single code. Perhaps, if the static information were placed in the Genetic Test Registry, one could provide a link to it.
But that would still make it hard for the receiving system to access... Do we have a structure for genetic test methodologies that could capture all the elements in the post by Larry? Something like a knowledge resource about the test.
Jamie Jones (Apr 11 2019 at 17:07):
Facilitating links to the GTR or other comparable resources seems very valuable here
Kevin Power (Apr 11 2019 at 18:09):
I would recommend that @Larry Babb review the [Region Studied](http://build.fhir.org/ig/HL7/genomics-reporting/obs-region-studied.html) profile - I do agree with @Bret H that some of the above could be expressed there. And we recognize we need to improve the Region Studied profile.
Linking to the GTR or another online resource is great to support, but I think we should find a simple solution for the textual representation of the methodology. Still not sure if that should be the related artifact or something like Observation.method.text.
Bret H (Apr 11 2019 at 18:19):
With what we have right now, I favor related artifact. Observation.method is meant to be coded. That's my 2 cents.
Larry Babb (Apr 11 2019 at 18:59):
We will look more into the region-studied, as that is some of what the methodology is about. But we definitely need a place to store the narrative text in a way that downstream consumers can reliably identify it and display it in other formats.
So is that the relatedArtifact, or Observation.method? Since Observation.method is a "code", I think we may need another place to put the method text. And what would the Observation concept represent in this use?
Jamie Jones (Apr 11 2019 at 21:12):
To clarify the options as I see them now:
1. Observation.method.text on the relevant Observation results;
2. Observation.method.text on Region-Studied, covering the relevant regions where the methodology was applied;
3. Observation.method.text on a Panel (soon to be called "Grouper") that holds all the relevant Observations through hasMember;
4. a (Base64Binary) Attachment on DocumentReference.content, referenced through (one or more) dr-supportingInfo.valueReference;
5. a (Base64Binary) Attachment on RelatedArtifact.document, referenced through dr-relatedArtifact.
Bret has a good point that the method fields should ideally be coded. There shouldn't be any trouble parsing and displaying the Attachment text files in whatever format is required, but that's not something I've played around with myself yet.
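Options 4 and 5 both hinge on a base64-encoded Attachment. As a rough sketch (the resource id and methodology text are placeholders, and the truncation is mine), the payload could be built and recovered like this:

```python
import base64

# Placeholder for the report's methodology narrative (deliberately truncated).
methodology_text = "1. eMERGE-Seq Version 2 NGS Panel: ..."

# FHIR Attachment: `data` carries the base64-encoded bytes.
attachment = {
    "contentType": "text/plain",
    "data": base64.b64encode(methodology_text.encode("utf-8")).decode("ascii"),
    "title": "Methodology",
}

# Option 4: wrap the attachment in a DocumentReference, which dr-supportingInfo
# could then point at by reference. Id is hypothetical.
doc_ref = {
    "resourceType": "DocumentReference",
    "id": "methodology-doc",
    "status": "current",
    "content": [{"attachment": attachment}],
}

# A receiver recovers the original text by reversing the encoding.
decoded = base64.b64decode(doc_ref["content"][0]["attachment"]["data"]).decode("utf-8")
assert decoded == methodology_text
```

Option 5 would carry the same Attachment structure in RelatedArtifact.document instead; the encoding and decoding steps are identical.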
Jamie Jones (Apr 11 2019 at 21:25):
Given the description "A CodeableConcept represents a value that is usually supplied by providing a reference to one or more terminologies or ontologies but may also be defined by the provision of text," I am inclined to push for using the method.text fields wherever it makes the most sense within the report, as that seems the simplest (while also looking into a way to codify this information in the future).
Larry Babb (Apr 14 2019 at 16:24):
I also believe the codification of genetic testing methodologies is a goal. But we have bigger items to tackle, and it seems it will be some time before there is any significant standardization on the testing and methodology front. There's definitely work being done in this area (GTR is an example). But there are still open questions around registration authorities, centralized lookup, international consensus, and how to standardize the representation and structure of the testing/methodology components themselves.
So, method.text sounds like the pragmatic choice for the emerge folks. I'll explore it more and come back with any questions or issues I have when applying it.
Bret H (Apr 15 2019 at 12:00):
@Andrea Pitkus, PhD, MLS(ASCP)CM, CSM have you seen any examples in OandO that use Observation.method in this way? I'm just curious how common using Observation.method.text without a code might be.
Andrea Pitkus, PhD, MLS(ASCP)CM, CSM (Apr 15 2019 at 13:00):
@Bret H Thanks for looping me in. Short answer is, I haven't seen Observation.method implementations yet.
That said, the item allows for indication of the method used to obtain the test result. Ideally, all laboratories using the same method should be documenting in this field the same way. Most laboratories would structure their test results with a precoordinated approach, mapping to a precoordinated LOINC code that includes the method. However, a number of entities, such as CIMI and Intermountain, prefer a postcoordinated approach, indicating the method aspect in OBX-17 in current v2 format (we haven't gotten to OBRs/OBXs in the v2-to-FHIR calls yet).
It is vital that methods with different clinical decision-making impacts be distinguished, so results are not commingled downstream. (This is also why the laboratory community recommends the precoordinated approach.) Another aspect discussed at LOINC meetings, impacting interoperability and causing loss of essential clinical information such as method, occurs when a sending system produces a postcoordinated message, say with method in OBX-17, but a downstream system can't handle that field and drops the info, which can result in a patient safety risk.
Often methodology is listed in the manufacturer's insert for FDA 510(k) approved methods (e.g., automated vs manual, FISH vs SISH vs ISH). For laboratory developed tests (LDTs), as indicated in some examples above, methodology may be indicated here or in another field in the resource, such as supporting info or a comment, to meet regulatory requirements. See https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCFR/CFRSearch.cfm?fr=809.30
That said, genetics is similar and different. On one hand there are some tests using more traditional methods, reporting, etc., while on the other hand there are more LDTs and methods that may not be as common or standardized. I highly recommend, where common methods occur in genetics/molecular, that a standard method list be included to reflect those methods if they aren't already included for v2, CDA, and FHIR reporting of the same content.
Regarding the use of code, there are some SCT codes that folks are also using to encode the method field in v2 that I'd expect to be used in FHIR as well. (I haven't reviewed how comprehensive the SCT methods are for encoding, but suspect some significant gaps exist, especially for newer/less common methods.) Even if folks are reporting a precoordinated approach in v2/FHIR, it doesn't hurt to include method in OBX-17, but I don't know how many folks are currently sending it in messages, as there are other similar fields, like procedure/specimen collection method, that folks aren't encoding/using according to the v2 guides required for MU and MIPS/MACRA reporting. I expect some of these aspects will be more fully realized as folks model existing results in FHIR. We likely need an implementation guide with specific guidance, especially on this field, so folks implement it the same way for the same content and we don't get the Wild Wild West.
Does that help address the question @Bret H ? @Larry Babb has included several good comments/questions/points on this aspect too (#4 ACMG, #2 supporting info)
Larry Babb (Apr 15 2019 at 13:11):
@Bret H @Andrea Pitkus, PhD, MLS(ASCP)CM, CSM thank you for the feedback. For eMERGE and most of the labs I've worked with, the tests are LDTs, and the labs are so focused on trying to get the actual variant findings and interpretations structured that there has been no concerted effort to structure and standardize the portions of sequence testing that compose the LDTs; we wouldn't even know where to start, or how to organize around a set of codes that would support and supplement the LDTs' specialized methodology descriptions.
That said, I think @Andrea Pitkus, PhD, MLS(ASCP)CM, CSM is correct in that this is an important future development that should have guidance in an IG.
For our near-term plan, and hopefully for the short-term guidance that the IG will provide, we will use the method.text approach, as it seems like the best (possibly only reasonable) option.
Andrea Pitkus, PhD, MLS(ASCP)CM, CSM (Apr 15 2019 at 16:08):
@Larry Babb , @Mullai Murugan thanks for your wonderful presentation today. Seeing the examples helps provide additional context to your questions here to understand them better.
Short term, in a crawl, walk, run approach, using Method with text may be the way to go, as it supports the current text blobs folks are reporting today in report sections, but starts to standardize things further so folks can use the same section/resource. In the future it can be refined, such as if specific methods or sub-methods are needed to support the variety of methods used in CG/molecular reporting. I also agree that Methods should be in each report section. For example, PGx methods would be in that section, as they may not apply to other "module/section based" CG testing added on to the original order. Alternatively, it may be that Method is its own report subsection, if the same approach/method is used for everything in the entire DR below it (similar to performing org, specimen collection info, patient demographics, etc.). I can see it modeled both ways.
Larry Babb (Apr 16 2019 at 19:38):
@Andrea Pitkus, PhD, MLS(ASCP)CM, CSM I am encouraged by your response. Here's my understanding of what you are saying and some questions on taking the "crawl" path.
1. Are you using the term "report sections" and "module section" to align with the Genomics Panel profile?
2. If we are talking about using the Genomics Panel profile, then it would start to take on more than a simple navigation-and-grouping role and become a concept that has data and attributes specific to it. I'm fine with that; I just want to clarify that we'd be starting down that road.
3. The "Method" attribute on Genomics Panel is a 0..1 cardinality and it is a CodeableConcept.
4. The methodology text we want to put into a report section "called" methodology would be more like paragraphs describing the LDT (laboratory-developed test) SOP. While these are reusable across all instances of a given LDT, it is not likely they truly represent codeable concepts.
5. If you are still suggesting that we use the CodeableConcept Observation.method.text attribute to share our methodology, would you suggest we stick all the paragraph blocks into one big method element for the group? Or should we try to break them up (which is harder than we'd prefer)?
6. We'd really like to simply relay human-readable background and SOP for the LDT on the report itself and have the downstream systems be able to capture the complete block of text that represents the methodology for either the entire report or for the 2 distinct panels we are planning on delivering.
Sorry for the questions, but it is very tedious and time-consuming to try to gain confidence on decisions this way. It is clear that folks like yourself have opinions and expertise that would be helpful in making short-term decisions. However, we cannot sustain the effort and time this approach requires, given our aim of delivering an implementation/pilot spec that needs to be completed in the next 6 months.
Anything you can do to directly call out a decision would be greatly helpful. We do take your feedback into serious consideration and try to understand how to apply it.
Jamie Jones (Apr 17 2019 at 19:04):
The most FHIR approach would be to have the methodology split (and likely redundantly copied, which is why codes are suggested) into the Observation.method for each individual Observation, so each can stand on its own and it is easy to see the relevant methodology for how each particular finding was obtained.
Looking at all the options discussed, I would suggest attaching the text in however many blocks you need to the overall report through RelatedArtifact.document contents, and, since Observation.method is not bound to a code system, coming up with a system for referencing individual bullet points/items in those methodologies and putting that information into the relevant Observation.method fields (either as text or a very simple code system you could describe in your spec).
This approach alleviates needing an extra grouper to group 0 Observations, which is semantically awkward, and seems to address most of your concerns.
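As a rough sketch of this hybrid suggestion (the local code system URI, resource ids, and section labels are all hypothetical, not part of the IG), the full text rides along as a RelatedArtifact attachment while each Observation points back into it with a tiny local code:

```python
import base64

# Hypothetical local code system identifying numbered items in the attached
# methodology text; the URI is an assumption for illustration only.
METHOD_STEP_SYSTEM = "http://example.org/fhir/CodeSystem/emerge-methodology-step"

# Placeholder methodology narrative (deliberately truncated).
methodology_text = "1. eMERGE-Seq Version 2 NGS Panel: ...\n2. SNP-array QC: ..."

# The whole narrative, attached once at the report level via RelatedArtifact.
related_artifact = {
    "type": "documentation",
    "label": "Methodology",
    "document": {
        "contentType": "text/plain",
        "data": base64.b64encode(methodology_text.encode("utf-8")).decode("ascii"),
    },
}

# An individual finding referencing item 1 of the attached methodology.
variant_obs = {
    "resourceType": "Observation",
    "id": "variant-1",  # hypothetical id
    "status": "final",
    "code": {"text": "Genetic variant assessment"},
    "method": {
        "coding": [{"system": METHOD_STEP_SYSTEM, "code": "step-1"}],
        "text": "See Methodology, item 1 (NGS panel)",
    },
}
```

This keeps the narrative in one place for display while still letting each Observation stand on its own, which is the trade-off Jamie describes above.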
Last updated: Apr 12 2022 at 19:14 UTC