Stream: implementers
Topic: TestScript - assert processing
Ardon Toonstra (May 20 2020 at 06:55):
The FHIR Testing spec describes that subsequent test asserts are not evaluated when a prior assert within the same test evaluates to a failure. (https://www.hl7.org/fhir/testing.html#assert) Why is that?
From a user perspective, it would be more informative to know all the error messages of the subsequent test asserts. That way you can fix those errors as well before setting up the next test round.
Jose Costa Teixeira (May 20 2020 at 07:09):
How do we know if the subsequent assert depends on the previous and can therefore be evaluated?
Ardon Toonstra (May 20 2020 at 08:14):
Then that subsequent assert will fail as well, and hopefully, the test platform will be smart enough to give a clear error message?
Ardon Toonstra (May 20 2020 at 08:42):
But I get the point. The next dependent asserts may fail with a lot of unrelated errors, making it look like a total mess for the end user.
Richard Ettema (May 22 2020 at 18:25):
I've actually been thinking about this issue for a couple of days now. I believe the easiest approach would be a new boolean element on assert that would instruct the test engine to either 1) stop test processing on an assert error, or 2) continue test processing on an assert error. This would give the TestScript author the needed control over a grouping of asserts and their behavior.
Something like this:
<action>
    <assert>
        <description value="Confirm that the returned HTTP status is 201 (Created)."/>
        <direction value="response"/>
        <responseCode value="201"/>
        <stopTestOnError value="false"/><!-- [NEW ELEMENT] Allow next assert to evaluate if current assert fails -->
        <warningOnly value="false"/>
    </assert>
</action>
Thoughts? Comments?
Jose Costa Teixeira (May 22 2020 at 18:33):
Why not use the severity "error / warning / info" codes?
Richard Ettema (May 22 2020 at 18:34):
Currently, the test only stops on error.
Jose Costa Teixeira (May 22 2020 at 18:34):
i mean instead of a boolean
Jose Costa Teixeira (May 22 2020 at 18:35):
(not sure if that is what you mean too)
Richard Ettema (May 22 2020 at 18:36):
So, the new element would be "stopTest" or "stopTestOn" with a coded value of either "info", "warning" or "error". That would provide a bit more control.
Jose Costa Teixeira (May 22 2020 at 18:37):
Actually I was thinking of this wrong but I think you just corrected it :)
Jose Costa Teixeira (May 22 2020 at 18:38):
Yes, if the element means "threshold" - then it would be error or warning, and that would determine if the script stops depending on whether the assertion returns errors or warnings
Richard Ettema (May 22 2020 at 18:40):
I suppose you could add "fatal" for any exceptions.
I'm hoping to play around with this concept internally over the next couple of weeks. I was thinking the boolean would be easier and still provide the needed functionality. The coded value as a threshold does sound interesting though.
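The threshold idea above can be sketched in a few lines. This is a minimal illustration, not engine code: the severity names and the `should_stop` helper are assumptions following the discussion (info/warning/error, plus "fatal" for exceptions), not part of the TestScript spec.

```python
# Hypothetical sketch of the "stopTestOn" threshold idea: severity levels
# ordered from least to most severe. Names are assumptions from this thread,
# not defined by the FHIR TestScript spec.
SEVERITY_ORDER = {"info": 0, "warning": 1, "error": 2, "fatal": 3}

def should_stop(result_severity: str, stop_test_on: str) -> bool:
    """Stop the test if the assert result's severity meets or exceeds
    the declared threshold."""
    return SEVERITY_ORDER[result_severity] >= SEVERITY_ORDER[stop_test_on]
```

Under this reading, an assert that fails with a warning would not stop a test whose threshold is "error", while an error or an exception ("fatal") would.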
Jose Costa Teixeira (May 22 2020 at 18:42):
cool
Richard Ettema (May 22 2020 at 18:47):
Sounds like a good HL7 FHIR JIRA enhancement request. I'll try to get one created next week - long holiday weekend for me coming up.
Peter Jordan (May 22 2020 at 21:56):
I'd support adding a boolean element as it's more deterministic and there may not always be an exact match between response codes sent by an implementation and the requirements of a tester. Certainly, there is a requirement for continued execution of a test script after some failures, a common one might be regex checks on the (computer-friendly) name property of resources.
Ardon Toonstra (May 26 2020 at 20:00):
Great! I think I would support the boolean element more as well, based on the arguments given by Peter Jordan. Either way, I think this is a great addition to the assert definition, as we will use it for sure in our TestScripts.
Ivan Dubrov (May 27 2020 at 00:25):
For what it's worth, in our TestScript runner we run all assertions after a failed operation. The assumption here is that asserts don't change the state of the system. Therefore, subsequent asserts fail not because the previous assert failed, but because of the operation itself. In other words, the order of the asserts block after an operation _shouldn't matter_, so semantically they are evaluated as a whole.
This allows writing asserts in a block without thinking too hard about which one of them is more useful for troubleshooting. For example, if we assert both the status code and the output, sometimes it is the status code that is more useful, sometimes the output. In our runner, you'll get something like "status code is 400 AND your output is OperationOutcome instead of the Patient you expected".
I think that adding an explicit flag will make TestScripts even more confusing. Even as they are now, they are not easy to write (we use them extensively for API testing).
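The evaluate-everything behavior Ivan describes can be sketched as follows. This is a simplified illustration under assumed data structures (a response dict and a list of description/predicate pairs), not their runner's actual implementation:

```python
# Sketch of "evaluate all asserts as a whole": every assert after an
# operation runs regardless of earlier failures, and all failure messages
# are reported together. The check representation is an assumption.
def run_asserts(response: dict, checks: list) -> list:
    """Evaluate every (description, predicate) pair against the response;
    collect all failure messages instead of stopping at the first."""
    failures = []
    for description, predicate in checks:
        if not predicate(response):
            failures.append(description)
    return failures

response = {"status": 400, "resourceType": "OperationOutcome"}
checks = [
    ("status code is 400 instead of 201", lambda r: r["status"] == 201),
    ("output is OperationOutcome instead of Patient", lambda r: r["resourceType"] == "Patient"),
]
# Both failures are reported, mirroring "status code is 400 AND your
# output is OperationOutcome instead of the Patient you expected".
messages = run_asserts(response, checks)
```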
Richard Ettema (Jun 04 2020 at 14:23):
FHIR-27772 created to track this issue.
Richard Ettema (Oct 05 2020 at 20:47):
Picking up this thread again and looking for additional feedback.
Per the FHIR-I WG discussion on this topic, a suggestion is to have assertions be able to point to other assertions that are 'dependencies'. Expectation is that all assertions would run, but you'd skip assertions that have declared dependencies if one or more of the dependencies had failed.
Richard Ettema (Oct 05 2020 at 20:55):
In order to support this 'dependency' between asserts the current default behavior of a FHIR Test Engine would need to change. Currently, this behavior is to stop test execution on the first failed assert evaluation. The new behavior would then have to be to evaluate all asserts regardless of failure status except where dependencies exist.
Looking for pros and cons of this approach versus the introduction of the 'assert.stopTestOnFail' boolean.
Lloyd McKenzie (Oct 05 2020 at 21:27):
Primary pro is that when you have different tests with different dependencies you don't either have to run a bunch you know will fail (which causes wasted time/resources) or skip all tests after a failure - when some of them might still succeed. The cost is that you need to determine dependencies (i.e. what tests have no hope of working if a previous test failed).
Richard Ettema (Oct 05 2020 at 22:19):
I've come up with two TestScript examples for a test with asserts that follow each idea.
TestScript example using stopTestOnFail
TestScript example using assert dependencies
In the first example, the use of the 'stopTestOnFail' boolean is shown.
Some Pros
- Each assert explicitly and succinctly controls its own behavior
- Ordering of asserts is not affected, i.e. the TestScript author is free to organize the asserts in any order
- On the first failed assert where stopTestOnFail=true, all remaining asserts are skipped
Some Cons
- Each assert must declare the 'stopTestOnFail' boolean, as FHIR does not allow default values on boolean elements
In the second example, the declaration of assert id values and the use of a new 'dependentOnAssert' element is shown.
Some Pros
- Dependency declarations provide additional documentation of related asserts
Some Cons
- Ordering becomes more restrictive: failed asserts that must stop the test execution need to be declared first. Relationships between dependent asserts require more work of the TestScript author.
- Asserts that are a dependency must define unique id values. In lengthy TestScripts with numerous tests this may become cumbersome
- The FHIR Test Engine would still need to process all remaining asserts to determine if there are any dependencies
Last updated: Apr 12 2022 at 19:14 UTC