FHIR Chat · Production deployment · hapi

Stream: hapi

Topic: Production deployment


Peter Imrie (Sep 20 2021 at 07:40):

Hi all,

We are making use of HAPI FHIR as a central repository system to store patient data for reporting.
In our dev environment we have been running HAPI FHIR from Docker and connecting it to a Postgres database (also running in a Docker container). This has served our purposes for dev and test, but we would like to know whether there are any guides for deploying to production in terms of:
1) How to size the hardware
2) Whether we should use Docker in production

Has anyone been through a production planning exercise, and do you have anything you would be able to share with us?

Thanks

Peter
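
For reference, a minimal Compose sketch of the kind of dev setup described above, assuming the hapiproject/hapi image and standard Spring datasource overrides; the credentials, image tags, and property names are illustrative and may differ between HAPI FHIR releases:

```yaml
# Minimal HAPI FHIR + Postgres sketch (illustrative values only)
version: "3.8"
services:
  hapi:
    image: hapiproject/hapi:latest        # HAPI FHIR JPA starter image
    ports:
      - "8080:8080"
    environment:
      # Spring Boot datasource overrides; a Postgres Hibernate dialect
      # usually needs to be configured as well (key varies by version).
      SPRING_DATASOURCE_URL: "jdbc:postgresql://db:5432/hapi"
      SPRING_DATASOURCE_USERNAME: "hapi"
      SPRING_DATASOURCE_PASSWORD: "change-me"
      SPRING_DATASOURCE_DRIVERCLASSNAME: "org.postgresql.Driver"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: "hapi"
      POSTGRES_USER: "hapi"
      POSTGRES_PASSWORD: "change-me"
    volumes:
      - hapi-db-data:/var/lib/postgresql/data   # keep data out of the container layer
volumes:
  hapi-db-data:
```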

Patrick Werner (Sep 20 2021 at 08:28):

Hi, this depends on many factors. We are using Docker in production; for HAPI we are using 4 cores and 8 GB of RAM, which works fine for our use case. I would say a minimum of 2 cores and 4 GB is needed.
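
As a hedged sketch, that sizing could be pinned per container along these lines; whether `deploy.resources` limits are applied depends on how Compose is run, and the heap value is an assumption rather than something from this thread:

```yaml
# Illustrative resource caps matching the 4-core / 8 GB suggestion
services:
  hapi:
    image: hapiproject/hapi:latest
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 8G
    environment:
      # Leave headroom under the container limit for non-heap JVM memory.
      JAVA_TOOL_OPTIONS: "-Xmx6g"
```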

Peter Imrie (Sep 20 2021 at 08:42):

Thanks Patrick,

Are you running the database separately or also in Docker?

Patrick Werner (Sep 20 2021 at 08:43):

Postgres also in Docker

Peter Imrie (Sep 20 2021 at 10:48):

Thanks!
What sort of transaction volumes are you catering for with the setup you currently have?

In terms of the database, do you have a separate volume for the data files, and are you clustering the DB at all?
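
As a sketch of the kind of layout being asked about here, one way to give the Postgres data directory its own volume backed by a dedicated disk; the host mount point is a hypothetical example:

```yaml
# Illustrative: put the Postgres data directory on its own disk/volume
services:
  db:
    image: postgres:13
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/fhir-db-data   # hypothetical mount point on a dedicated disk
```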

Peter Imrie (Sep 23 2021 at 05:48):

Just a bump on this thread in case anyone has real-world experience deploying a production instance of HAPI FHIR and has any further advice on sizing, gotchas, etc.

Jens Villadsen (Sep 23 2021 at 05:53):

A setup of mine hosts roughly 10+ HAPI FHIR servers. I'll vouch for Patrick's advice.

Peter Imrie (Sep 23 2021 at 07:51):

Thanks Jens. We are looking at about 200 000 patients with roughly 10 years of clinical data each, which will mean quite a heavy initial write load as all the data is imported, followed by heavy reads as the data is then extracted. We are trying to gauge how best to size and set up the underlying DB, etc.
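
As a sketch of the sort of tuning such a bulk-load-then-read pattern might call for, a few Postgres parameters that are commonly raised can be passed on the container command line; the values below are assumptions to benchmark, not recommendations from this thread:

```yaml
# Illustrative Postgres settings for a heavy initial import followed by heavy reads
services:
  db:
    image: postgres:13
    command:
      - postgres
      - -c
      - shared_buffers=2GB              # more cache for the hot working set
      - -c
      - effective_cache_size=6GB        # hint to the planner about OS-level caching
      - -c
      - max_wal_size=8GB                # fewer checkpoints during bulk writes
      - -c
      - checkpoint_completion_target=0.9
```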


Last updated: Apr 12 2022 at 19:14 UTC