Stream: social
Topic: Hippocratic Oath for Connected Devices
Grahame Grieve (Nov 08 2017 at 02:13):
https://www.iamthecavalry.org/domains/medical/oath/
Grahame Grieve (Nov 08 2017 at 02:13):
I will revere and protect human life, and act always for the benefit of my patients. I recognize that all systems fail; inherent defects and adverse conditions are inevitable. Capabilities meant to improve or save life may also harm or end life. Where failure impacts patient safety, care delivery must be resilient against both indiscriminate accidents and intentional adversaries. Each of the roles in a diverse care delivery ecosystem shares a common responsibility: As one who seeks to preserve and improve life, I must first do no harm.
To that end, I swear to fulfill, to the best of my ability, these principles.
Cyber Safety by Design: I respect domain expertise from those that came before. I will inform design with security lifecycle, adversarial resilience, and secure supply chain practices.
Third-Party Collaboration: I acknowledge that vulnerabilities will persist, despite best efforts. I will invite disclosure of potential safety or security issues, reported in good faith.
Evidence Capture: I foresee unexpected outcomes. I will facilitate evidence capture, preservation, and analysis to learn from safety investigations.
Resilience and Containment: I recognize failures in components and in the environment are inevitable. I will safeguard critical elements of care delivery in adverse conditions, and maintain a safe state with clear indicators when failure is unavoidable.
Cyber Safety Updates: I understand that cyber safety will always change. I will support prompt, agile, and secure updates.
Grahame Grieve (Nov 08 2017 at 02:14):
perhaps we should have a modified version of this somewhere in the FHIR ecosystem?
John Moehrke (Nov 08 2017 at 14:36):
+Privacy by Design
Jim Kreth (Nov 08 2017 at 16:08):
Isaac Asimov would submit:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Grahame Grieve (Nov 09 2017 at 20:02):
there's also this: http://www.achi.org.au/docs/ACHI_Professional_Code_of_Conduct.pdf
Jose Costa Teixeira (Nov 10 2017 at 08:39):
I think "First, do no harm" is actually quite needed.
Lloyd McKenzie (Nov 10 2017 at 12:25):
Part of "do no harm" is having the skill and knowledge needed to have insight into whether harm is possible/likely. The chances of implementers intentionally doing harm is low. The chances of implementations going forth without having spent the time to learn and consider the possible harmful ramifications are much higher.
Abbie Watson (Dec 05 2017 at 13:35):
Somebody needs to send this over to Facebook. They didn't get the memo, re: "Move fast and break things."
Abbie Watson (Dec 05 2017 at 13:36):
More seriously, such a thing would be a compelling differentiator and would communicate the values of the FHIR project. It would be a great addition.