Stream: implementers
Topic: Reducing Cardinality via Profile
James Agnew (Feb 17 2017 at 15:52):
I'm scratching my head about a bug that was submitted to HAPI.
Say I have a resource with a field that has cardinality 0..*, such as Patient.name. If I define a profile that reduces the cardinality of this field to 0..1, is it valid to then serialize the field as a simple HumanName object instead of an array of HumanName objects?
Michel Rutten (Feb 17 2017 at 16:01):
Hi James, I think that serialization should always adhere to the original structure of the base profile. Otherwise you would break the contract of the base profile.
Pascal Pfiffner (Feb 17 2017 at 16:03):
I would not expect to be able to parse that, since I'd still be expecting an array for Patient.name in JSON.
Michel Rutten (Feb 17 2017 at 16:03):
Right
James Agnew (Feb 17 2017 at 16:11):
I'm inclined to agree with this logic. I suspect HAPI isn't doing the right thing here.
Lloyd McKenzie (Feb 17 2017 at 17:00):
Serialization is always based on the resource and data type cardinality, not the profiled cardinality. (One schema to rule them all! :))
Grahame Grieve (Feb 17 2017 at 18:31):
yes, it still has to be an array. in terms of the profile, the fact that it's an array rather than a singleton is still available
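The rule the thread converges on can be sketched in a few lines. This is a hypothetical strict parser (not HAPI's actual implementation) that enforces the base-resource shape: Patient.name is always a JSON array, even when a profile constrains it to 0..1.

```python
import json

def parse_patient_names(patient_json: str) -> list:
    """Parse Patient.name, requiring the base-spec JSON shape (an array)."""
    patient = json.loads(patient_json)
    names = patient.get("name", [])
    if not isinstance(names, list):
        # A profile may constrain cardinality to 0..1, but the wire format
        # is still governed by the base resource definition: an array.
        raise ValueError("Patient.name must be serialized as a JSON array")
    return names

# Valid: a profile-constrained 0..1 name is a one-element array.
ok = parse_patient_names('{"resourceType": "Patient", "name": [{"family": "Agnew"}]}')

# Invalid: serializing a bare HumanName object breaks the base contract.
try:
    parse_patient_names('{"resourceType": "Patient", "name": {"family": "Agnew"}}')
except ValueError:
    pass  # rejected, as expected
```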
Anand Mohan Tumuluri (Feb 22 2017 at 20:36):
I had a similar question about changing the cardinality of a field from 0..* to 1..1 within a profile. From what I understand, the value should still be an array with 1 element. Is that correct? Otherwise, the base (de)serializers will not work at all.
Anand Mohan Tumuluri (Feb 22 2017 at 20:38):
Even then, won't this break profile validation? It needs to treat cardinality altered in a profile separately from the cardinality specified in the base.
Lloyd McKenzie (Feb 22 2017 at 20:43):
It shouldn't break profile validation. The validator should be aware of whether it's looking at the original resource/data type definition vs. a constraining profile.
Last updated: Apr 12 2022 at 19:14 UTC