Stream: implementers
Topic: AI generated clinical information
Ardon Toonstra (Mar 02 2022 at 12:34):
How would you best represent clinical information in FHIR that is generated by an artificial intelligence system? For example, information that is suggested by a smart system for ClinicalImpression.finding. I think we want to be able to label or indicate that that information is authored by a system/device instead of a human.
So... extensions? Provenance? Do others have experience with AI-generated clinical information?
David Winters (Mar 02 2022 at 13:47):
Not sure that is the intended use of ClinicalImpression. What about capturing this as an Observation, using the Observation.device element to point to a Device resource that describes the AI?
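A minimal sketch of that pattern (the id, status, code text, and references are illustrative, not prescribed):
```json
{
  "resourceType": "Observation",
  "id": "ai-finding-example",
  "status": "preliminary",
  "code": { "text": "Suspected pulmonary nodule (AI-detected)" },
  "subject": { "reference": "Patient/example" },
  "device": { "reference": "Device/ai-model-example" }
}
```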
Mike Lohmeier (Mar 02 2022 at 15:33):
If the output of your modeling is part of decision support, you can leverage the RiskAssessment resource. The RiskAssessment resource has a performer field that can reference your AI system as a Device resource.
Otherwise, as David mentioned, an Observation with Observation.device makes sense - e.g. when the output of your modeling is not part of decision support, such as predicted values for time-series data.
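A rough sketch of the RiskAssessment variant (the subject, outcome, and probability are made-up illustrations; in R4, performer can reference a Device):
```json
{
  "resourceType": "RiskAssessment",
  "status": "preliminary",
  "subject": { "reference": "Patient/example" },
  "performer": { "reference": "Device/ai-model-example" },
  "prediction": [{
    "outcome": { "text": "30-day readmission" },
    "probabilityDecimal": 0.27
  }]
}
```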
Lloyd McKenzie (Mar 03 2022 at 03:34):
The AI would be represented as a Device. It would be the author of the ClinicalImpression or whatever was being created.
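For instance, the Device representing the model might look something like this (the name, type text, and version are invented for illustration):
```json
{
  "resourceType": "Device",
  "id": "ai-model-example",
  "deviceName": [{
    "name": "Acme Nodule Detector",
    "type": "model-name"
  }],
  "type": { "text": "Machine-learning inference software" },
  "version": [{ "value": "2.1.0" }]
}
```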
Mareike Przysucha (Mar 03 2022 at 21:25):
I think this touches on a more general question: if ClinicalImpression.finding contains a CodeableConcept, do we in general assume that the assessor also made the finding?
If yes: how do we cover AI as author, given that ClinicalImpression only has an assessor, not an author, and assessor is limited to Practitioner and PractitionerRole?
If not: I would propose using one of the resources mentioned above and setting the author to the AI Device.
René Spronk (Mar 04 2022 at 07:04):
The Provenance resource has a role to play in this scenario as well.
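For example, a Provenance instance could record the Device as the author agent of the generated resource (the target, timestamp, and references are illustrative):
```json
{
  "resourceType": "Provenance",
  "target": [{ "reference": "Observation/ai-finding-example" }],
  "recorded": "2022-03-04T07:00:00Z",
  "agent": [{
    "type": {
      "coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
        "code": "author"
      }]
    },
    "who": { "reference": "Device/ai-model-example" }
  }]
}
```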
Ardon Toonstra (Mar 07 2022 at 19:54):
Thanks all for the input! Very helpful. Good to know an AI would be modeled as a Device; that's in line with my thoughts as well. We are also investigating the use of the Provenance resource.
Josh also pointed me to this slightly related thread: #fhir/infrastructure-wg > NLP Derived Elements
Ardon Toonstra (Mar 07 2022 at 19:57):
Can I conclude that it is not smart to mix AI-generated information into resources that contain human-made clinical content?
Lloyd McKenzie (Mar 07 2022 at 23:31):
Mixing content - whether it's device + human or multiple humans - can happen. Whether it's problematic or not depends on how much you need to know exactly who's responsible for what. In theory, even if you do combine into a single resource, Provenance can let you differentiate who's responsible for which elements.
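A sketch of that element-level approach, assuming the targetElement extension (http://hl7.org/fhir/StructureDefinition/targetElement) on Provenance.target and assuming the specific finding entry carries the element id finding-1 - both are assumptions, not something this thread established:
```json
{
  "resourceType": "Provenance",
  "target": [{
    "reference": "ClinicalImpression/example",
    "extension": [{
      "url": "http://hl7.org/fhir/StructureDefinition/targetElement",
      "valueUri": "finding-1"
    }]
  }],
  "recorded": "2022-03-07T23:00:00Z",
  "agent": [{
    "type": {
      "coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
        "code": "author"
      }]
    },
    "who": { "reference": "Device/ai-model-example" }
  }]
}
```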
Last updated: Apr 12 2022 at 19:14 UTC