Stream: hapi
Topic: Expansion of ValueSet produced too many codes
Rob Hausam (Apr 07 2021 at 17:57):
Are folks seeing this ("Expansion of ValueSet produced too many codes (maximum 1,000) - Operation aborted!") and how are you dealing with it? I'm trying to use $expand and filter on the full ICD-10-CM code system (definitely > 1000 codes) and am running into this issue. Narrowing the returned results with the filter, and also setting the $expand 'count' parameter to a low value (e.g. 20), doesn't eliminate the error (on HAPI 5.3.0 jpaserver-starter).
Lin Zhang (Apr 08 2021 at 00:26):
It happened to me once, with v5.1.0 or v5.2.0, but I can't clearly remember the details or what I did about it.
Rob Hausam (Apr 08 2021 at 00:33):
Thanks, Lin. And I'm still looking for how to fix it.
Rob Hausam (Apr 08 2021 at 01:12):
The code system and value set have 95587 codes, so the pre-expansion took a considerable amount of time. But now that it is finally complete, the "too many codes" issue seems to be no longer occurring. I assume that is expected (and I think it makes sense that it would be)?
Hanan Awwad (Apr 08 2021 at 09:56):
Hi Rob and Lin
Value set expansion runs in a background job after a new value set is submitted. If you call $expand before that expansion is finished (once a value set is expanded, its concepts are stored in the database and its status switches to expanded), the expansion runs in RAM to return the result, so if the value set is larger than 1000 codes it throws the "too many codes" exception.
If you wait until the job is finished, all the concepts are stored in the DB, and the status has switched from in-progress to expanded, you won't get this exception any more.
Lin Zhang (Apr 08 2021 at 12:24):
@Hanan Awwad Wow, Thanks
Rob Hausam (Apr 08 2021 at 12:26):
@Hanan Awwad Yes, that's exactly what I saw. What wasn't clear in advance, though, was the schedule for performing the expansion and the amount of time to expect that it would take. It just requires a bit of patience! :)
John Silva (Apr 08 2021 at 16:06):
Related question: how do you know when the expansion is complete? Does HAPI log something that indicates "expansion complete"?
I tried running the Docker HAPI server and loaded a large CodeSystem and ValueSet that referenced it and I couldn't tell if the expansion ever completed or if the docker image just ran out of RAM or (image) disk space. I could only determine that the codes I was expecting to be loaded were not there when I performed the .../$expand?filter search.
Hanan Awwad (Apr 08 2021 at 19:47):
@John Silva, you can either check the status of the value set in the DB (it should be expanded), or debug the job implemented inside the BaseTermReadSvcImpl class.
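(A sketch of that DB check, assuming the HAPI JPA schema's terminology table and column names; in recent HAPI versions the pre-expansion status is held, as far as I know, in the TRM_VALUESET table and moves from NOT_EXPANDED through EXPANSION_IN_PROGRESS to EXPANDED:)

```sql
-- Assumed table/column names from the HAPI JPA schema; verify against your version.
SELECT PID, URL, EXPANSION_STATUS
FROM TRM_VALUESET
WHERE URL = 'http://example.org/fhir/ValueSet/icd-10-cm';
```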
John Silva (Apr 08 2021 at 20:34):
Thanks. I'm not familiar enough with HAPI to debug this, and it's happening "inside" the Docker container, so it's not easy for me to debug. I was hoping something is logged that I can see, indicating that the expansion completed (or not). I'm running the Docker container like this, so the logs come right to standard output:
docker run -p 8080:8080 hapiproject/hapi
Hanan Awwad (Apr 08 2021 at 21:00):
@John Silva If it's running well, you will definitely see some logs from the BaseTermReadSvcImpl class, like the following:
ourLog.info("Pre-expanded ValueSet[{}] with URL[{}] - Saved {} concepts in {}", valueSet.getId(), valueSet.getUrl(), accumulator.getConceptsSaved(), sw.toString());
Jame Dang (Apr 09 2021 at 13:55):
@John Silva: I'm not really familiar with HAPI either, but I think you can try the Spring Boot version (I tried it with the starter project), which makes it easier to see the logs and configure the JVM.
Jame Dang (Apr 10 2021 at 10:33):
@James Agnew: I tried the HAPI client to search with 5.4.0 PRE5, and response.getTotal() always returns 0 if I add the offset parameter (without the offset it is OK). Maybe this is a bug in HAPI 5.4.0 PRE5? Thanks and regards
James Agnew (Apr 10 2021 at 14:33):
Can you replicate this on hapi.fhir.org?
Jame Dang (Apr 10 2021 at 17:57):
@James Agnew: I have tried connecting to hapi.fhir.org (http://hapi.fhir.org/baseR4), and it has the same problem; even if I remove the offset parameter, the total always returns 0. In 5.4.0 PRE5, if we remove the offset it is OK. I think http://hapi.fhir.org is running an older version than the 5.4.0 PRE5 I'm using for my server; if you look at http://hapi.fhir.org/baseR4/Patient, the total is not present (so I think it has the problem too).
For my server (using 5.4.0 PRE5), when I try the link /Patient?_getpagesoffset=2&_count=1 it runs OK and I can see the total.
But when I use the HAPI client (see the code below), the resulting request is /Patient?_count=1&_offset=1 (I think the Java client converts the parameter incorrectly).
The code I used for testing:
FhirContext ctx = FhirContext.forR4();
String serverBaseUrl = "http://hapi.fhir.org/baseR4";
IGenericClient client = ctx.newRestfulGenericClient(serverBaseUrl);
// Build a search and execute it
Bundle response = client
    .search()
    .forResource(Patient.class)
    // .count(10)
    // .offset(1)
    .returnBundle(Bundle.class)
    .execute();
System.out.println("Number of Responses: " + response.getTotal()); // always 0
James Agnew (Apr 10 2021 at 18:11):
I'm not sure I follow. That client code you quoted is for a Patient search, not a ValueSet expansion. The test server is not configured to support offset searching for resources.
Jame Dang (Apr 10 2021 at 18:15):
@James Agnew: I got it, but can I report the problem with the search function somewhere? Maybe it is a bug (I'm not sure, but I will try to check the config for offset searching).
Jame Dang (Apr 10 2021 at 18:54):
@James Agnew: Sorry for the inconvenience. I added .totalMode(SearchTotalModeEnum.ACCURATE) to my search and the result is OK now (I get the total). Thank you for your support
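(For anyone hitting the same thing over plain HTTP: the client's totalMode(SearchTotalModeEnum.ACCURATE) corresponds to the standard FHIR `_total` search parameter, so the equivalent raw request against the public test server would be:)

```
GET http://hapi.fhir.org/baseR4/Patient?_count=1&_offset=1&_total=accurate
```

Without `_total=accurate` a server is free to omit or estimate Bundle.total, which is consistent with getTotal() returning 0 above.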
Last updated: Apr 12 2022 at 19:14 UTC