Stream: hapi
Topic: JPA starter - Lucene + Postgres
Mauricio Burgos Herrera (Nov 01 2021 at 13:30):
I want to use the JPA starter server, among other things, to make use of the terminology service. In the environment I'm working in (Kubernetes on Google Cloud) we have a managed Postgres database but no Elasticsearch. To support some of the features of the terminology service without having to run an Elasticsearch cluster, I would like to deploy multiple instances of the JPA starter server, all connected to the Postgres database and each using Lucene with a local file index, roughly as sketched below.
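A minimal sketch of the relevant application.yaml settings for that setup, assuming the Hibernate Search 6 Lucene backend properties the starter ships with (the connection details and paths are placeholders, so check them against your version):

```yaml
spring:
  datasource:
    # shared, managed Postgres instance (placeholder connection details)
    url: 'jdbc:postgresql://my-postgres-host:5432/hapi'
    username: hapi
    password: change-me
    driverClassName: org.postgresql.Driver
  jpa:
    properties:
      hibernate.dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect
      # full-text / terminology indexing via Hibernate Search, with Lucene on the local filesystem
      hibernate.search.enabled: true
      hibernate.search.backend.type: lucene
      hibernate.search.backend.analysis.configurer: ca.uhn.fhir.jpa.search.HapiLuceneAnalysisConfigurer
      hibernate.search.backend.directory.type: local-filesystem
      hibernate.search.backend.directory.root: target/lucenefiles
      hibernate.search.backend.lucene_version: lucene_current
```

Each instance would then keep its own copy of the index under hibernate.search.backend.directory.root on its local disk.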
I've tested the setup locally using docker compose. Importing a CodeSystem and creating a ValueSet works fine, but when I try to $expand the ValueSet I don't get any codes back.
Does anyone have experience with this setup? Would this even work?
JP (Nov 03 2021 at 17:17):
I haven't tried that in a while, but in my experience it does not work. The local Lucene index is only updated by operations that occur on that specific node, so if something happens on a different node, the next request on the original node runs against an outdated index. That has implications for things like search result caching, for example.
JP (Nov 03 2021 at 17:19):
There are a couple managed solutions for Elastic on GCP, but if you have a Kubernetes cluster you could always roll your own deployment.
Mauricio Burgos Herrera (Nov 04 2021 at 13:56):
Thanks for the reply! Indeed, I've found that the server uses Hibernate Search internally, which behaves as you describe (https://docs.jboss.org/hibernate/search/6.0/reference/en-US/html_single/#gettingstarted-dependencies).
In my use case, H2 + Lucene is good enough. To avoid rebuilding the server after every deployment while not depending on Elasticsearch, I'm creating a multi-stage image that starts the server, loads the terminologies / value sets, and then copies the database files into the final image that we will run in Kubernetes :)
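Roughly, the multi-stage Dockerfile looks like the sketch below (not the exact file: it assumes the starter is packaged as an executable war, and the artifact names, the /fhir base path on port 8080, the H2/Lucene locations under ./target, and the hapi-fhir-cli upload-terminology flags are assumptions based on the starter's defaults, so adjust them to your build):

```dockerfile
# --- Stage 1: start the server once at build time and load the terminology ---
FROM eclipse-temurin:11-jre AS loader
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
# ROOT.war = the packaged starter, hapi-fhir-cli.jar = the HAPI CLI, loinc.zip = terminology to load
COPY ROOT.war hapi-fhir-cli.jar loinc.zip ./
# Start the server in the background, wait until /fhir/metadata answers, upload the
# terminology, then stop the server so H2 and Lucene flush everything to disk.
RUN java -jar ROOT.war & \
    SERVER_PID=$!; \
    until curl -sf http://localhost:8080/fhir/metadata > /dev/null; do sleep 5; done; \
    java -jar hapi-fhir-cli.jar upload-terminology \
      -d loinc.zip -v r4 -u http://loinc.org -t http://localhost:8080/fhir; \
    kill $SERVER_PID; wait $SERVER_PID || true

# --- Stage 2: runtime image with the pre-built H2 database and Lucene index baked in ---
FROM eclipse-temurin:11-jre
WORKDIR /app
COPY ROOT.war ./
COPY --from=loader /app/target/database ./target/database
COPY --from=loader /app/target/lucenefiles ./target/lucenefiles
EXPOSE 8080
CMD ["java", "-jar", "ROOT.war"]
```

The point of keeping the same working directory layout in both stages is that the H2 files and the Lucene index copied out of the loader stage end up exactly where the server expects them at runtime.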