Stream: inferno
Topic: Patient _id and other tests
Ruth berge (Apr 16 2021 at 00:40):
All our tests in Inferno for Patient are being skipped. How do we get this to run?
Robert Scanlon (Apr 16 2021 at 00:57):
Hi @Ruth berge -- what is the skip message?
Ruth berge (Apr 16 2021 at 01:07):
It means that the test is skipped by the tool.
Robert Scanlon (Apr 16 2021 at 01:56):
There should be a message associated with the skip that describes why the test is not running.
Robert Scanlon (Apr 16 2021 at 02:04):
For example, if I enter an invalid patient id that does not exist on the server as an input to the test, our tests cannot verify that search by _id is capable of sending back a valid Patient resource. Therefore, we do not fail, but instead "skip". I've circled the "skip message" in this screenshot. Are you getting this message, or another message?
Screen-Shot-2021-04-15-at-10.02.03-PM.png
Ruth berge (Apr 16 2021 at 17:00):
@Robert Scanlon we don't have a specific error or message like you have shown. We just have an indication that the test was skipped. We don't know why.
Robert Scanlon (Apr 16 2021 at 17:03):
Hmmm, that's obviously not a very helpful behavior by our tests. Is this for the "Program Edition" (the ONC g-10 certification tests) or the "Community Edition" (everything else)? Can you point me to a specific test id?
Ruth berge (Apr 16 2021 at 18:32):
@Robert Scanlon we are using Program Edition 1.5.1. We fail on the Inferno Single Patient API tests - test 1. This test claims to be testing _id. I have tested _id on our server and it works. There is no indication of why the test failed or what it is testing; it just shows the whole section as skipped. I am not the primary user of Inferno, but my teammates say there is nothing to indicate why it skipped, nor, as far as they know, is there a setting instructing Inferno to skip. It just skips that set of tests.
Robert Scanlon (Apr 16 2021 at 18:57):
Thanks @Ruth berge -- I've reviewed our test and can't find an obvious reason why a test would be skipped without any message explaining why. Is there any chance you could have them send a screenshot, like I am doing below?
Generally we use the "skip" if a prerequisite hasn't been met (e.g. we can't search for conditions about a patient if we weren't given a valid patient id). We also skip if you don't state support for capabilities required by US Core within the Capability Statement (e.g. search Patient by _id). But those should have messages associated with them.
Screen-Shot-2021-04-16-at-2.48.59-PM.png
(I recognize that the skip message we have in the above screenshot isn't ideal, because the "Patient" resource is a bit of a special case and that specific message was written a bit more generically and applies better to every other resource. We could clean up that language a little).
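The CapabilityStatement prerequisite Robert mentions (declared support for Patient search by _id) can be checked with a short script. This is a sketch under the FHIR R4 CapabilityStatement structure; the sample data is made up for illustration.

```python
# Sketch: check whether a FHIR CapabilityStatement declares a search
# parameter for a resource type (e.g. Patient search by _id).

def supports_search_param(capability_statement, resource_type, param_name):
    """Return True if any rest.resource entry declares the search param."""
    for rest in capability_statement.get("rest", []):
        for resource in rest.get("resource", []):
            if resource.get("type") != resource_type:
                continue
            names = [p.get("name") for p in resource.get("searchParam", [])]
            if param_name in names:
                return True
    return False

# Made-up example CapabilityStatement fragment.
capability = {
    "resourceType": "CapabilityStatement",
    "rest": [{
        "mode": "server",
        "resource": [{
            "type": "Patient",
            "searchParam": [{"name": "_id", "type": "token"}],
        }],
    }],
}

print(supports_search_param(capability, "Patient", "_id"))   # True
print(supports_search_param(capability, "Patient", "name"))  # False
```

If this check returns False for `_id`, a tester in the style described above would skip the search tests rather than fail them, since the server never claimed the capability.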
Ruth berge (Apr 16 2021 at 19:06):
Ok. I don't have access but will get a screenshot. What I remember is that the skip symbol appears for the whole section. Also, we did NOT have a separate search listing _id for Patient, but have now changed that. The _id search does work for the patient id we provided for the test; it just wasn't in the Capability Statement. We just added it, and I am waiting for others to free up to run the test in Inferno again. I'll provide a screenshot and results later. Thanks.
Robert Scanlon (Apr 16 2021 at 19:16):
Thanks! Yes, perhaps that wasn't the greatest example, because "fail" trumps "skip" in the "whole section" roll-up. But under each individual test (e.g. USCP-01), there should be a line stating what is wrong. If you didn't state support for Patient search by "_id" in your CapabilityStatement, then the skip message for the first test should be "The server doesn't support the search parameters: _id", and the later tests would then skip because we never retrieved the patient in the first test. Generally we try to avoid blocking test runs over a 'minor' issue early on, but we also want our tests to be easy to maintain, and deferring errors like this until later in the process gets complicated.
Ruth berge (Apr 16 2021 at 20:48):
We added the _id to the capability statement and now the Inferno results show a NilClass error (any hints here?). The second screenshot shows another grouping that is completely skipped, with no information as to why. Screen-Shot-2021-04-16-at-1.20.28-PM.png Screen-Shot-2021-04-16-at-1.24.01-PM.png
Robert Scanlon (Apr 16 2021 at 20:51):
For the error: you are returning an empty body (with code 200) for the search, which isn't correct (it should be a Bundle). We can fix the error message so that it doesn't say 'NilClass' like that, though.
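The requirement Robert describes (a FHIR search must return a Bundle resource, even when empty) can be validated explicitly. This is a hedged sketch of such a check, not Inferno's code; it shows why an empty 200 body should be a clear failure rather than a NilClass-style crash.

```python
import json

# Sketch: validate a FHIR search response. A search must return an HTTP 200
# whose body is a Bundle resource; an empty body is a failure, not a crash.

def validate_search_response(status_code, body):
    if status_code != 200:
        return f"fail: expected HTTP 200, got {status_code}"
    if not body:
        return "fail: response body is empty; a search must return a Bundle"
    resource = json.loads(body)
    if resource.get("resourceType") != "Bundle":
        return f"fail: expected a Bundle, got {resource.get('resourceType')}"
    return "pass"

# An empty body produces a readable failure message instead of an error.
print(validate_search_response(200, ""))
print(validate_search_response(
    200, '{"resourceType": "Bundle", "type": "searchset", "entry": []}'))
```

Note that a searchset Bundle with zero entries is still valid; only a missing or non-Bundle body fails.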
Robert Scanlon (Apr 16 2021 at 20:55):
For the later skips, the skip message is "No Organization references found in prior searches." The "About" section for that tab tries to explain how the tests work. In this case, we look through every resource we've seen so far for an Organization reference, and use that to perform the necessary Organization read query. If you have not sent us any Organization references in other resources, then we skip this whole test. You can think of our test strategy as a 'crawl': we start with a Patient ID, then find all resources that reference patients, and also look at all resources that they reference.
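The "crawl" strategy described above can be illustrated with a small reference-collecting walk. This is an assumption-laden sketch (sample resources and the helper are made up), not Inferno's implementation: gather every `reference` value pointing at a target type from resources seen so far, and skip the dependent tests if none are found.

```python
# Sketch of the reference "crawl": walk previously seen resources and
# collect references to a target type, e.g. "Organization/123".

def collect_references(resource, target_type, found=None):
    """Recursively gather 'reference' strings that point at target_type."""
    if found is None:
        found = []
    if isinstance(resource, dict):
        ref = resource.get("reference")
        if isinstance(ref, str) and ref.startswith(target_type + "/"):
            found.append(ref)
        for value in resource.values():
            collect_references(value, target_type, found)
    elif isinstance(resource, list):
        for item in resource:
            collect_references(item, target_type, found)
    return found

# Made-up resources "seen so far" in earlier searches.
seen = [
    {"resourceType": "Patient", "id": "p1",
     "managingOrganization": {"reference": "Organization/org-1"}},
    {"resourceType": "Encounter", "id": "e1",
     "serviceProvider": {"reference": "Organization/org-2"}},
]

refs = [r for res in seen for r in collect_references(res, "Organization")]
print(refs)  # ['Organization/org-1', 'Organization/org-2']
# If refs were empty, the Organization read tests would be skipped with a
# message like "No Organization references found in prior searches."
```

Each collected reference gives the tester an id it can use for the Organization read query, which is why a server that never emits such references sees that whole group skipped.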
Ruth berge (Apr 17 2021 at 01:06):
thanks. I didn't capture the patient before we changed the capability statement. I will capture one on another server if possible and then send that. Appreciate the information.
Last updated: Apr 12 2022 at 19:14 UTC