FHIR Chat · Semantic clash? (Webinar day 2) · clinFHIR

Stream: clinFHIR

Topic: Semantic clash? (Webinar day 2)


Dave deBronkart (Mar 20 2019 at 19:35):

I'm sure the wise & mighty have wrestled this question to the ground but it causes frequent problems in the real world so I'll ask. (It seems to be a matter of clashing semantics, but I don't even know if that's a thing.)

In today's lecture @David Hay mentioned that something like "asthma" can easily exist in multiple code sets. No surprise there. He said it's important to be able to convert between them. Understood.
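For readers less familiar with terminology mapping, the conversion David mentions is typically a lookup (in FHIR terms, a ConceptMap consulted via the `$translate` operation). A minimal sketch in Python, with illustrative codes (SNOMED CT 195967001 and ICD-10 J45 are commonly cited codes for asthma, but a real system would query a terminology server rather than hard-code a map):

```python
# Minimal code-system translation sketch, in the spirit of a FHIR
# ConceptMap / $translate lookup. The map below is illustrative only;
# real systems consult a terminology server with curated mappings.

CONCEPT_MAP = {
    # (source system, source code) -> (target system, target code, equivalence)
    ("http://snomed.info/sct", "195967001"):
        ("http://hl7.org/fhir/sid/icd-10", "J45", "equivalent"),
}

def translate(system: str, code: str):
    """Return (target_system, target_code, equivalence), or None if unmapped."""
    return CONCEPT_MAP.get((system, code))
```

Note the `equivalence` field: as the rest of this thread argues, even a "successful" mapping may be wider, narrower, or only approximately equivalent, and that nuance is exactly where semantic clashes creep in.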

My concern is that I've heard numerous anecdotes among health data wranglers about problems arising with one party storing a value with a particular set of meanings and implications, and the data being read by someone else who's given it different nuances. I vaguely recall some cardiac condition where hospital X (using Epic) assumes that means something, and another Epic user configuring their system (or just USING their system) such that it doesn't have the same detail of meaning. (I hope this makes sense.)

It's a recipe for disconnect, and maybe even disaster, yes?

My impression from my lightweight use of data through the years is that there's nothing you can realistically do about this except tell everyone to be CAREFUL about such things. People have said that the genesis of this problem is that some arrogant cardiologist (e.g.) insists that condition X should be treated that way, and regardless of what the damn computer can do, insists that everyone KNOW the implied meaning.

There's nothing that can be done in the IT world about this, right? Except for people to be careful?

There's also the whole issue of granularity - for instance in my case someone saw in my problem list that I had "migraines" and was going to act accordingly, when I actually had ophthalmic migraines, which are not at all migraine headaches. AFAIK in this case too there's no solution except to tell everyone to be wary of important nuances that might not be expressible in the data.

Am I correct in this? There's no hope for this except awareness?

Grahame Grieve (Mar 20 2019 at 20:03):

pretty much correct. The issue often revolves around qualifiers like 'might have' or 'early stages of' - people attach different implications to these, and their codes are not equivalent. Problems come from trying to translate them. Another classic, often encountered, is the mild-moderate-serious-fatal scale. Do not try to pin this down...

Lloyd McKenzie (Mar 21 2019 at 00:17):

Processes also drive different definitions for things. The impact of the differences varies: what counts as an "allergy" vs. an "intolerance" differs widely, as does even something as simple as what constitutes an encounter or a stay. Sometimes the differences are driven by region or formal policy differences, but often it's organizational or even departmental cultural differences. Trying to standardize meanings or processes, let alone cultures, is not a short-term (or necessarily even a feasible) exercise, though some nudges are sometimes necessary/helpful.

Dave deBronkart (Mar 21 2019 at 03:26):

Thank you both. So, in practical reality, what do people do when mingling resources from different sources?

Lloyd McKenzie (Mar 21 2019 at 03:32):

Realistically, we don't have a lot of experience with "true" mingling. A lot of cross-organizational data sharing has been via CDA/PDF/Fax, which means you're looking at "foreign" data in a completely different context than "local" data. EMRs are only just starting to enable FHIR-based write capabilities, and those are generally for easier things like labs and documents rather than conditions/allergies/etc., where confusion is more likely. Where such integration does happen, there's usually a manual curation process which, at least in theory, allows for data adjustment on import. That doesn't mean that it hasn't happened or that there haven't been problems, but the practice hasn't happened at nearly the scale it's soon going to start happening at.

Grahame Grieve (Mar 21 2019 at 03:44):

Inevitably, it's going to be manual reconciliation

Grahame Grieve (Mar 21 2019 at 03:46):

if I were writing a patient app (or when!) I would take the position that every condition resource received from anywhere is a candidate statement about the patient that the patient might want to treat as true, and either:

  • clone into their own master problem list as a new entry
  • merge into an existing entry in their problem list
  • ignore
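Grahame's three dispositions can be sketched as a tiny reconciliation routine. The data model here is hypothetical (a real app would work with FHIR Condition resources); it only illustrates that every incoming statement is a candidate requiring an explicit clone/merge/ignore decision, rather than being written into the problem list automatically:

```python
# Sketch of candidate-statement reconciliation: clone into the master
# problem list, merge into an existing entry, or ignore. The Candidate
# and ProblemList types are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    code: str      # e.g. a condition code (illustrative)
    source: str    # where the statement came from

@dataclass
class ProblemList:
    # code -> list of sources that asserted it
    entries: dict = field(default_factory=dict)

    def reconcile(self, c: Candidate, decision: str) -> None:
        if decision == "clone":
            # accept as a new (or additional) problem-list entry
            self.entries.setdefault(c.code, []).append(c.source)
        elif decision == "merge":
            # merging only makes sense against an existing entry
            if c.code not in self.entries:
                raise ValueError("nothing to merge into")
            self.entries[c.code].append(c.source)
        elif decision == "ignore":
            pass
        else:
            raise ValueError(f"unknown decision: {decision}")
```

The `decision` argument is deliberately external to the routine: as David Hay says just below, the call belongs to a person (possibly with algorithmic support), not to the code.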

David Hay (Mar 21 2019 at 04:13):

I do think there needs to be a human in the mix - backed up by algorithmic support for sure, but - at the end - it should be a person making the call...

Jim Steel (Mar 21 2019 at 04:18):

Some of the secondary-use/data aggregators we've dealt with have specific processes for normalizing the data they get, based on knowledge they have about the specific software tools and the clinical workflow/context the data is coming from - even when the sources use common code sets (or code sets that are normalized via mapping). Provenance can help with this, by making sure they have that context information in order to trigger the normalization.
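A minimal sketch of the provenance-driven normalization Jim describes, under invented source names and rules: the aggregator keys its normalization functions off provenance (which system and workflow produced the record) rather than off the code alone, because the same code can carry different nuances per sender:

```python
# Provenance-driven normalization sketch. Source names, field names,
# and the severity rule are all invented for illustration.

def normalize_severity_from_site_a(record: dict) -> dict:
    # Hypothetical rule: site A's local "serious" label corresponds to
    # "severe" on the aggregator's scale.
    out = dict(record)
    if out.get("severity") == "serious":
        out["severity"] = "severe"
    return out

NORMALIZERS = {
    # (source system, workflow) -> normalization function; the key is
    # derived from provenance attached to the incoming record.
    ("site-a-emr", "inpatient"): normalize_severity_from_site_a,
}

def normalize(record: dict) -> dict:
    """Apply a source-specific rule if one is registered, else pass through."""
    fn = NORMALIZERS.get((record.get("source"), record.get("workflow")))
    return fn(record) if fn else record
```

In FHIR terms the lookup key would come from a Provenance resource (or similar metadata) attached to the data, which is why Jim notes that provenance is what makes this triggering possible.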

Josh Mandel (Mar 26 2019 at 18:36):

It's also worth mentioning that the risks and benefits of this "mingling" are highly context sensitive. Delivering CDS for care vs training an ML model to detect missing diagnoses, for example, have very different requirements.

Dave deBronkart (Mar 26 2019 at 18:39):

Having experienced how demented some items in my own chart (and billing history) have been, I'd almost violently insist that nothing get into a new data set (e.g., a problem list) without having been sanity-checked by a human who knows the case.


Last updated: Apr 12 2022 at 19:14 UTC