FHIR Chat · AWS FHIR Works · implementers

Stream: implementers

Topic: AWS FHIR Works


view this post on Zulip Nitin Suri (Sep 07 2021 at 14:39):

Hi All - Looking for some feedback / comments on using AWS FHIR Works for the FHIR implementation in our project. Does AWS FHIR Works use HAPI behind the scenes, or is it custom development?

Also wanted to check whether we need to buy a license for a production deployment of HAPI.

view this post on Zulip Lloyd McKenzie (Sep 07 2021 at 15:02):

HAPI is open-source. Smile CDR is the organization that provides commercial support for, and enhanced functionality on top of, HAPI.

view this post on Zulip Nitin Suri (Sep 07 2021 at 15:06):

Lloyd McKenzie said:

HAPI is open-source. Smile CDR is the organization that provides commercial support for, and enhanced functionality on top of, HAPI.

Thanks @Lloyd McKenzie for the quick response. It seems we can still run our application, built on HAPI, in production without any license unless we need support or enhancements. Any idea about AWS FHIR Works?

view this post on Zulip Lloyd McKenzie (Sep 07 2021 at 15:12):

I'm afraid I can't speak to AWS

view this post on Zulip Nitin Suri (Sep 07 2021 at 15:21):

Lloyd McKenzie said:

I'm afraid I can't speak to AWS

No worries. I tried searching for an AWS FHIR Works stream but could not find one in Zulip.

view this post on Zulip John Silva (Sep 07 2021 at 15:31):

I don't believe AWS's FHIR implementation uses HAPI. Here's an interesting article that compares AWS's HealthLake to Google Cloud's Healthcare API. (It would be nice if there were also a comparison to Azure's FHIR implementation.)

https://vneilley.medium.com/are-all-fhir-apis-the-same-v2-e8d8359e1412

view this post on Zulip Paul Church (Sep 07 2021 at 18:03):

Part 1 of the blog post was on Google vs. Azure: https://vneilley.medium.com/are-all-fhir-apis-the-same-48be75ac4ac5

view this post on Zulip Cooper Thompson (Sep 07 2021 at 18:10):

I'd love to see a summary/comparison of the data import/transformation pipeline features of platforms like Google and Azure. As an EHR developer, I've started getting questions about how to sync data from our EHR into generic FHIR servers like Google and Azure. Several folks I've talked to were de facto assuming that using a REST API (e.g. create/update) to load data into Azure/Google was the way to go, but often the data producers are not expecting to act as a FHIR client (and all the orchestration that involves).

view this post on Zulip Paul Church (Sep 07 2021 at 18:18):

Because of that gap, we have been working almost exclusively with data mapping pipelines (either v2 messages over MLLP or CSV files from a database) as the way to get data in from the EHR. It would be nice to have a better pattern for FHIR-to-FHIR replication. And of course, for bidirectional use cases, our service is not set up to be a FHIR client either!

view this post on Zulip Cooper Thompson (Sep 07 2021 at 18:21):

That's exactly what I wanted/expected to hear. Now I just need the Azure story...

view this post on Zulip Cooper Thompson (Sep 07 2021 at 18:22):

Where "wanted/expected" is in the context of short-term operational goals. A standards based replication pattern long term would be nice, but seems... hard.

view this post on Zulip Paul Church (Sep 07 2021 at 18:29):

The bulk data "ping and pull" proposal for import is one way to address this pattern: https://github.com/smart-on-fhir/bulk-import/blob/master/import-pnp.md

But this is bulk rather than streaming replication.
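
To make the shape of that proposal concrete, here is a rough Java sketch of the "ping" half: the data provider POSTs an $import kick-off to the consumer pointing at a bulk export location. The endpoint URLs are placeholders, and the parameter names (exportUrl, exportType) follow one draft of import-pnp.md and should be checked against the current text.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch of the "ping" step from the bulk-import ping-and-pull draft:
// the data provider tells the consumer where to pull a bulk export from.
// Parameter names (exportUrl, exportType) are taken from one version of the
// draft and may have changed -- check import-pnp.md before relying on them.
public class ImportPing {
    public static void main(String[] args) throws Exception {
        String consumerBase = "https://consumer.example.com/fhir"; // hypothetical
        String kickoffBody = """
            {
              "resourceType": "Parameters",
              "parameter": [
                { "name": "exportUrl",  "valueUrl": "https://provider.example.com/fhir/$export" },
                { "name": "exportType", "valueCode": "dynamic" }
              ]
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(consumerBase + "/$import"))
                .header("Content-Type", "application/fhir+json")
                .header("Prefer", "respond-async")   // async kick-off, as in bulk export
                .POST(HttpRequest.BodyPublishers.ofString(kickoffBody))
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());

        // Expect 202 Accepted with a Content-Location header to poll for status.
        System.out.println("Kick-off status: " + response.statusCode());
        response.headers().firstValue("Content-Location")
                .ifPresent(loc -> System.out.println("Poll: " + loc));
    }
}
```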

view this post on Zulip Paul Church (Sep 07 2021 at 18:31):

Subscriptions are the other relevant IG, but there needs to be an intermediate component that receives the notifications and acts as the client to the destination.
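
As a rough illustration of that intermediate component (not any particular product's implementation), the sketch below receives R4 rest-hook notifications that carry the changed resource as a JSON payload and replays each one to a destination FHIR server using the HAPI client. The URLs are placeholders and error handling is omitted.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import com.sun.net.httpserver.HttpServer;

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch of the "intermediate component": it receives rest-hook notifications
// that include the changed resource as a payload and replays each one to the
// destination FHIR server as an ordinary client update. Assumes the payload is
// the full resource (with its id) as application/fhir+json; URLs are placeholders.
public class SubscriptionRelay {
    private static final FhirContext CTX = FhirContext.forR4();
    private static final IGenericClient DESTINATION =
            CTX.newRestfulGenericClient("https://destination.example.com/fhir");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/notify", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                // Parse whatever resource type arrived and push it downstream.
                var resource = CTX.newJsonParser().parseResource(body);
                DESTINATION.update().resource(resource).execute();
            }
            exchange.sendResponseHeaders(200, -1);
        });
        server.start();
    }
}
```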

view this post on Zulip Gino Canessa (Sep 07 2021 at 18:48):

@Caitlin Voegele @Brendan Kowitz

view this post on Zulip Craig McClendon (Sep 07 2021 at 21:39):

re: FHIR streaming/replication - A method we've used in our own server development is to push a record into a stream/queue for every create, update, or delete operation performed on a resource, via a post-processing hook.

Basically we push a message to a queue which contains the resource (or URL if the resource is too big for the queue), along with some metadata such as operation type, resource type, timestamp, etc.

Then downstream of that we can add any consumers we want. Consumers to monitor for alert conditions, index data into secondary stores, anonymize and push to a second FHIR server, etc. It's a flexible model, but can make security more difficult.

I'm sure something similar could be done with the interceptor framework built into HAPI. I don't know whether other implementations have anything similar.
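
For illustration, a minimal sketch of that hook using HAPI's storage pointcuts; the QueuePublisher is a hypothetical stand-in for whatever broker is in use, and the pointcut parameter lists should be verified against the HAPI version at hand.

```java
import ca.uhn.fhir.interceptor.api.Hook;
import ca.uhn.fhir.interceptor.api.Interceptor;
import ca.uhn.fhir.interceptor.api.Pointcut;
import org.hl7.fhir.instance.model.api.IBaseResource;

import java.time.Instant;

// Sketch of the post-processing hook described above: every committed create,
// update, or delete is turned into a small change event and handed to a queue.
// QueuePublisher is a hypothetical stand-in for your broker client (SQS, Kafka,
// Service Bus, etc.); check the pointcut parameters against your HAPI version.
@Interceptor
public class ChangeFeedInterceptor {

    private final QueuePublisher queue = new QueuePublisher();

    @Hook(Pointcut.STORAGE_PRECOMMIT_RESOURCE_CREATED)
    public void created(IBaseResource resource) {
        publish("create", resource);
    }

    @Hook(Pointcut.STORAGE_PRECOMMIT_RESOURCE_UPDATED)
    public void updated(IBaseResource oldResource, IBaseResource newResource) {
        publish("update", newResource);
    }

    @Hook(Pointcut.STORAGE_PRECOMMIT_RESOURCE_DELETED)
    public void deleted(IBaseResource resource) {
        publish("delete", resource);
    }

    private void publish(String operation, IBaseResource resource) {
        // Metadata only; a consumer can fetch the full resource by URL if it is
        // too large to put on the queue, as described above.
        queue.send(operation,
                resource.fhirType(),
                resource.getIdElement().getValue(),
                Instant.now().toString());
    }

    /** Hypothetical queue client -- replace with your broker of choice. */
    static class QueuePublisher {
        void send(String operation, String resourceType, String id, String timestamp) {
            System.out.printf("%s %s/%s @ %s%n", operation, resourceType, id, timestamp);
        }
    }
}
```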

view this post on Zulip John Silva (Sep 07 2021 at 22:07):

Bulk export is spec'd and there seem to be implementations out there, but bulk import is not yet at that point. MS has a toolset that can import "bulk export data" (NDJSON) into Azure: https://github.com/microsoft/fhir-loader
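
For anyone new to the export side, the spec'd flow from a client's point of view is roughly the sketch below: kick off $export, poll the status URL, then download the NDJSON files listed in the manifest. The base URL is a placeholder and authentication is omitted.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of the bulk export flow from the client side: kick off
// $export, poll the status URL until it returns 200, then download the
// NDJSON files listed in the manifest. Base URL and auth are placeholders.
public class BulkExportClient {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String base = "https://fhir.example.com/fhir"; // hypothetical server

        // 1. Kick-off: returns 202 Accepted plus a Content-Location status URL.
        HttpRequest kickoff = HttpRequest.newBuilder()
                .uri(URI.create(base + "/$export?_type=Patient,Observation"))
                .header("Accept", "application/fhir+json")
                .header("Prefer", "respond-async")
                .GET()
                .build();
        HttpResponse<Void> accepted = http.send(kickoff, HttpResponse.BodyHandlers.discarding());
        String statusUrl = accepted.headers().firstValue("Content-Location").orElseThrow();

        // 2. Poll until the job completes (202 while in progress, 200 when done).
        HttpResponse<String> status;
        do {
            Thread.sleep(5_000);
            status = http.send(
                    HttpRequest.newBuilder().uri(URI.create(statusUrl)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
        } while (status.statusCode() == 202);

        // 3. The 200 response body is a JSON manifest whose "output" entries
        //    point at NDJSON files (one resource per line) to download.
        System.out.println(status.body());
    }
}
```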

view this post on Zulip Caitlin Voegele (Sep 09 2021 at 14:48):

In Azure there are a few items available around importing and exporting data.

For export, both the Azure API for FHIR and the FHIR service in the Azure Healthcare APIs have the spec implemented. You can see details about that here: https://docs.microsoft.com/en-us/azure/healthcare-apis/data-transformation/export-data

For import, the FHIR bulk data loader listed above is one option. Microsoft also recently added support for $import into the open-source FHIR server backed by SQL. The PR with this commit is here: https://github.com/microsoft/fhir-server/pull/1992

If there are other questions, feel free to ping me.

view this post on Zulip Mike Lohmeier (Sep 28 2021 at 15:49):

AWS FHIR Works is open-sourced under Apache 2.0. There are companies using it and the community is active, with roughly 6-10 issues/PRs a week. The nice thing about AWS FHIR Works is that all the underlying technologies are horizontally scaled out of the box and are rock solid. So, as long as you don't mind committing to the AWS cloud, you can spend less time on the undifferentiated heavy lifting and more time on your FHIR implementation. We stepped back and drew out how we'd implement a FHIR API natively on AWS and came up with exactly the same architecture as AWS FHIR Works. So it was better, faster, and cheaper to run with AWS FHIR Works.

AWS FHIR Works is a bit more immature than HAPI and isn't as ingrained in the healthcare community as some of the other OSS implementations, but sometimes that's a big positive. For example, they are asking the community right now for feedback on which use cases subscriptions will really be used for, working backwards from those use cases rather than starting from the broad, utopian specs (the R4 subscriptions spec, which is a beast, or the R4B spec, which is much more tenable). So far they've landed on what I believe is a great out-of-the-box subset of FHIR R4 support that gets the best pieces up and running while deferring the features that aren't really being used in the field.

view this post on Zulip Brian Beatty (Oct 07 2021 at 22:07):

How do you feel that Firely compares to HAPI, Google, AWS and Azure?

view this post on Zulip Jayant Singh (Mar 15 2022 at 04:18):

Hey folks!!
Does anyone have any idea how to use the AWS HealthLake API? I'm having an issue hitting the API: Permission Denied.

view this post on Zulip Bill Quinn (Mar 15 2022 at 13:57):

I worked on a project last year where we wrote our own custom import into Azure from CSV files, using the Firely .NET SDK. We were loading in about 200,000 patient records, with associated resources. It was extremely slow doing it record by record, taking about 2 seconds per record, with each record load hitting about 9 different API endpoints.

No doubt there were faster ways to do this, but we were new to FHIR. It seemed to us at the time that a set of tools was missing that would have made this easier: configuration rather than having to write custom code.
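
One common way to cut that per-record cost is to batch resources into transaction Bundles, so each round trip carries many records. The sketch below uses Java and the HAPI client purely for illustration (the project above used the Firely .NET SDK, which has an equivalent Bundle API); the server URL and patient data are placeholders.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Patient;

// Sketch of batching with transaction Bundles instead of one API call per
// record: each Bundle carries many resources and costs a single round trip.
// Server URL and patient data are placeholders.
public class BatchedLoader {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("https://fhir.example.com/fhir");

        Bundle bundle = new Bundle();
        bundle.setType(Bundle.BundleType.TRANSACTION);

        for (int i = 0; i < 100; i++) {            // e.g. 100 patients per round trip
            Patient patient = new Patient();
            patient.addName().setFamily("Example" + i);
            bundle.addEntry()
                    .setResource(patient)
                    .getRequest()
                    .setMethod(Bundle.HTTPVerb.POST)
                    .setUrl("Patient");
        }

        // One HTTP call for the whole batch; the server applies it atomically.
        Bundle response = client.transaction().withBundle(bundle).execute();
        System.out.println("Loaded " + response.getEntry().size() + " entries");
    }
}
```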


Last updated: Apr 12 2022 at 19:14 UTC