Stream: social
Topic: Google Project Nightingale
Adam Flinton (Nov 12 2019 at 12:47):
"Google is teaming with one of the country's largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents. From the report:
The initiative, code-named "Project Nightingale," appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven't yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country's second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, complete with patient names and dates of birth."
Brendan Keeler (Nov 12 2019 at 13:36):
Much ado about nothing. They are a business associate. No one cares when the hip new AI startup has a BAA and does the same. The media's proclivity to attack big tech's forays into health with clickbait is just ambient noise at this point
Dave deBronkart (Nov 12 2019 at 18:12):
Um, it's not being seen as "much ado about nothing" by the patient community...
Lloyd McKenzie (Nov 12 2019 at 18:30):
The article doesn't indicate that any of the data is being linked to the other data that Google holds or that it's being tied into their advertising space. If that were happening, then that would be a cause for alarm. However, for Google to be applying their analytics capabilities to data from a health organization and feeding the results back into that health organization isn't a problem. In fact, it's one of the things we're trying to enable - allowing third parties to define analytical algorithms, decision support processes, etc. that can consume data from others and provide useful advice/guidance.
Few hospitals can afford to develop that expertise in-house and running the algorithms against larger data sets (even if partitioned) makes the algorithms better and less biased.
So long as the data aren't used for purposes they're not supposed to be used for, are kept appropriately segregated, and the sharing is done within appropriate legal frameworks, there's no issue.
Now, if you don't trust Google (or Microsoft or Apple or Amazon or whomever) to adhere to the legal agreements they've signed, then you might have cause for anxiety. But you should probably have even more anxiety about smaller organizations that also provide analytics services and aren't under nearly the same scrutiny as the big ones.
John Moehrke (Nov 12 2019 at 18:54):
The concern is, as it always should be, what happens when some intended and appropriately authorized use goes wrong. It is the failure modes that are becoming more and more risky. That is to say, the likelihood might be well controlled, but the "harm" part of risk becomes much more impactful when the data are healthcare data vs just browser behavior. Reminder: risk is a function of (likelihood, harm, detectability)
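To make that concrete, here is a minimal illustrative sketch in Python of risk as a function of (likelihood, harm, detectability), in the spirit of an FMEA-style risk priority number. The 1-5 scales and the multiplicative combination are assumptions for illustration, not a formal model from this thread.

```python
# Illustrative only: risk scored as a function of (likelihood, harm, detectability).
# Scales of 1 (low) to 5 (high) and a multiplicative combination are assumptions,
# chosen to mirror an FMEA-style risk priority number.

def risk_score(likelihood: int, harm: int, detectability: int) -> int:
    """likelihood, harm: 1 (low) .. 5 (high); detectability: 1 (easy to detect) .. 5 (hard to detect)."""
    for name, value in (("likelihood", likelihood), ("harm", harm), ("detectability", detectability)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return likelihood * harm * detectability

# Same well-controlled likelihood and detectability, very different harm:
browser_history = risk_score(likelihood=2, harm=2, detectability=3)  # 12
health_record   = risk_score(likelihood=2, harm=5, detectability=3)  # 30
```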
Dave deBronkart (Nov 12 2019 at 19:10):
Happily I know nobody on this chat is wrong-headed :-) especially present company. But in my role as a voice for the patient community, I CANNOT state too strongly how important the issues of transparency and consumer choice are.
Dave deBronkart (Nov 12 2019 at 19:11):
Already since the first article, this (something of a bombshell) has been added https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information?CMP=share_btn_tw
Dave deBronkart (Nov 12 2019 at 19:12):
It has nothing to do with google advertising - it has to do with the widespread fear of leaks and misuse of data. ESPECIALLY alarming to my advocate / activist friends is that the project was done in secret; note the Larry Page quote that suggests he knows better than individuals do what's good for them re use of their data.
Dave deBronkart (Nov 12 2019 at 19:13):
The lack of notice and opt-out is problematic, full stop.
Lloyd McKenzie (Nov 12 2019 at 19:42):
Do you expect every hospital/clinic/payer/etc. to let you know when they contract outside help to analyze/secure/back-up/convert/otherwise manipulate your data? It happens all the time and it's not considered to be "in secret" just because there's no notification to patients (or generally to staff). John is correct that whenever outside access occurs, there are challenges around detectability of misuse. And certainly the potential harm could be significant due to inappropriate use. But allowing data to flow in authorized ways to create better knowledge and deliver better care is absolutely what we're trying to enable. Whether that analysis is done by Google or by MomAndPopDataInc, due diligence is important to make sure that nothing untoward is happening to the data. But we certainly don't want to prohibit all sharing where due diligence is being done just on the grounds that something bad could happen.
Dave deBronkart (Nov 12 2019 at 20:00):
fwiw, here's a non-patient-activist's view on Twitter - pretty "hot" thread out there - from a colleague of Yale cardiologist and very-pro-patient-data guy Harlan Krumholz: https://twitter.com/gregggonsalves/status/1194248965266509824?s=20

.@hmkyale is spot-on on this. It IS possible to do this the RIGHT way w open, real engagement w patients, substantive dialogue about the rules of the road. This was purposeful and deceitful on @Google's part, now creating suspicion and wariness instead of building partnerships.
- Gregg Gonsalves (@gregggonsalves)
Grahame Grieve (Nov 12 2019 at 20:08):
@Brendan Keeler you're probably right that there's nothing to see here technically, that the rules of the road (HIPAA) mean they'll be covered under a BAA, and that none of what they are being accused of is the case.
But the lesson here is that it doesn't matter: Google's public reputation around data misuse is very bad
Vassil Peytchev (Nov 12 2019 at 20:15):
Thanks for bringing this up, @Dave deBronkart.
I think part of patient advocacy and activism should be to seek relevant information, and counteract wildly inaccurate speculations and fake news. The Guardian's piece refers to a video, which is total bunk. Google's Cloud business unit (just like AWS, Azure, Oracle Cloud) provides services to businesses where the data is most definitely segregated and not used in any of their social networking/advertising businesses. Without that segregation, no one would be using these services.
Can this lead to problems, if something is misconfigured? Yes, just like it would happen if Ascension misconfigures a server or gets malware on their computers.
The only part that could be considered going beyond providing IT services to Ascension is the use of the data as source for machine learning. It sounds like this part of the agreement is far from started, and is still being figured out. The Google response is at https://cloud.google.com/blog/topics/inside-google-cloud/our-partnership-with-ascension. It is probably a good starting point to identify what could be problematic and work constructively to address it (cf. John Moehrke's function of ( likelihood, harm, detectability)).
Dave deBronkart (Nov 12 2019 at 21:10):
Thanks, all. I think Grahame hit the point correctly, and especially, why on EARTH would they be covert about this?? Why??
It certainly does them no good to say "We didn't ask you because we don't have to and you might say no." That's precisely the situation in a famous case a century ago that led to Informed Consent regulations (as foolish as they often are, in practice). A surgeon removed a woman's uterus without asking, expressly because, he said, if he had asked her, she would have said no!
Look, I'm no "anti-nerd." :-) 2/3 of how I got interested in the things that led me to FHIR is that I've worked with data all my life. I have analyzed and harvested data to create valuable patterns. I got the Salesforce "Appy Award" in 2008 for effective use of Google AdWords in a marketing campaign that boosted our new-customer business 68% that year.
I have not a single penny to earn or lose on this issue. I'm just stomping around here trying to raise awareness of how sensitive this issue is.
Aphorism that's nearly a battle cry in the patient world: Nothing about me without me.
Grahame Grieve (Nov 12 2019 at 21:13):
why on EARTH would they be covert about this?
I see press releases etc. The only sense in which they are being covert is that they didn't ask for patient consent, and that's because they're doing this under a BAA - no different to any other data processing service. So no notification needed. I suspect Congress will revisit the question of just what it is that can be done under a BAA fairly soon
It is interesting how this plays out differently in countries that don't have anything like HIPAA....
Lloyd McKenzie (Nov 12 2019 at 21:13):
What does "covert" mean? HL7 "shares personally identifying information of its members and everyone who interacts with its issue tracking system, wiki pages and Confluence system with Microsoft and/or Google" - i.e. we leverage cloud services from both organizations.
We didn't ask anyone's permission to do that. And the only reason you'll likely see any press releases about it is because Google and Microsoft have provided those services to us for free and we wanted to give them some good press.
Dave deBronkart (Nov 12 2019 at 21:14):
Furthermore (not related to Google), there is an evil entity called the MIB that collects data about individuals into a profile. Outrage erupted over their marketing (a now-deleted cartoon with a literal price tag on a woman's arm), telling insurers that the MIB would warn them about individuals who are a bad risk. That cartoon is no longer anywhere to be found on the web, AFAIK. And here's what happened when I tried to find out if there were errors in my chart. http://patientdave.blogspot.com/2008/09/whats-in-your-mib-part-2.html
Dave deBronkart (Nov 12 2019 at 21:14):
Again, my outreach here is an effort to communicate with absolute clarity that there is EXTREME sensitivity on this subject among the patient community.
Lloyd McKenzie (Nov 12 2019 at 21:16):
Right. So one of our functions as a patient work group should be to come up with a checklist to look at when reading stories like this to see if there's something that the community really should get up in arms about or whether this is a relatively safe (and perhaps even desirable) form of data sharing.
Dave deBronkart (Nov 12 2019 at 21:17):
btw, my own cancer treatments happened a year after that blog post, resulting in a slew of completely fictional billing codes being added to my record. My insurance billing history now says I have (or had) volvulus of the intestine, metastases to the brain, aortic aneurysm, non-rheumatoid tricuspid valve disease, and more, all of which are false.
We have very good reason to not trust what's in the chart, nor what downstream companies MAY do with it. Our vigilance is not just the kind of naivete that Larry Page (classic young / healthy SiliValley white guy) thinks is our problem.
Dave deBronkart (Nov 12 2019 at 21:21):
Right. So one of our functions as a patient work group should be to come up with a checklist to look at when reading stories like this to see if there's something that the community really should get up in arms about or whether this is a relatively safe (and perhaps even desirable) form of data sharing.
Go right ahead. :-) Seriously - come up with something that we can test on all kinds of stories.
Note: the twitter threads I've seen are all clear that this is legal under HIPAA. Nobody I know is saying it's not. The point is transparency, and at another level, whether HIPAA goes far enough for today's age.
And just so you know, some activists are saying "If you guys are gonna harvest value out of data you EXTRACTED FROM ME, then CUT ME IN, bitches." :-)

@nickdawson For an even more-fuller POV, on ALL data: “Selling my data? CUT ME IN, bitches.” by Casey Quinlan https://link.medium.com/hDkIq2LscS
- Mighty Casey Loud AF 🔥🔥 (@MightyCasey)
Dave deBronkart (Nov 12 2019 at 21:22):
(One last thing before I sign off - seriously, @Lloyd McKenzie, for years e-patients and docs have been saying they want a way to know which stuff to trust. I / they / we would welcome anything that works.)
Grahame Grieve (Nov 12 2019 at 21:25):
the twitter threads I've seen are all clear that this is legal under HIPAA
but they are not clear that under the BAA rules, Google doesn't have this data in the same sense as they have other data.
Lloyd McKenzie (Nov 12 2019 at 21:27):
If I wasn't under deadline, I'd take a stab at it right now - what's the list of questions we should be asking to help assess problem/goodness. Perhaps on the way to or from DevDays... :)
Brendan Keeler (Nov 12 2019 at 21:40):
An average-sized IDN generally has 70-100 third-party applications, ranging from traditional business associates like lab systems, PACS systems, outpatient pharmacy, etc. to new digital health applications, AI, machine learning, etc. Patients don't mandate whether or not they can be used by the hospital, because they are tools chosen by the hospital to provide care under the BAA.
If you believe a BAA is an inadequate protection, HIPAA needs revision. If you believe a BAA is adequate, but Google is an improper steward of health data, it sounds like you have a bone to pick with Sundar Pichai. I don't understand the innate distrust when they also have a track record of trying to do good/make useful health products.
Grahame Grieve (Nov 12 2019 at 21:42):
I don't understand the innate distrust when they also have a track record
Because of the things they do under the 'also' category, and because non-technical humans are not so good at splitting categories
Debi Willis (Nov 12 2019 at 22:04):
If my doctor wants to use a Google service to help my doctor identify possible health issues that are hidden in my data AND Google is prohibited from using my data for ANY other purpose than to provide this service to my physician, my next requirement would be to make sure my data is correct. Please don't give me warnings based on inaccurate data in my chart and please don't miss possible health issues because my data is incorrect. If this service underscores the need for cleaning up data then perhaps that will be the best side effect for patient care. Can you imagine how completely incorrect the google insight would be when looking at Dave's chart with all the garbage in it?
John Moehrke (Nov 13 2019 at 02:31):
If I wasn't under deadline, I'd take a stab at it right now - what's the list of questions we should be asking to help assess problem/goodness. Perhaps on the way to or from DevDays... :)
The questions to ask are built into Privacy by Design (PbD) -- https://en.wikipedia.org/wiki/Privacy_by_design
which is a design framework (organizational, not necessarily well suited to product design)
based on the well-established Privacy Principles https://healthcaresecprivacy.blogspot.com/2015/04/privacy-principles.html
BOTH are the foundation of GDPR
Adam Flinton (Nov 13 2019 at 07:37):
We have the opposite problem (in the UK). I constantly get grief over the fact that if a family member goes to a hospital then the hospital can't get the records from the GP (or vice versa). It is technically possible (& will be rectified over time) but the main blocker is the old "who owns the data" chestnut. My position is that it is the patient but that view is only slowly being adopted. In this case the health co clearly believes it is "its data".
Kevin Mayfield (Nov 13 2019 at 09:37):
I think it's a little more complicated than that. A clinician getting access (e.g. UK GP Connect) to a record is a low IG request but NHS(D) asking for a copy of the records is a high IG request (e.g. UK GP Data).
As a patient I'd be asking why you need a copy of my records. It's this copy that health cos, GPs and patients have issues with; they probably don't have many issues with the research questions being asked [it's the method of getting access to the data that's the big issue]
Shahid Karimi (Nov 13 2019 at 09:41):
Hello
Can anybody send me a FHIR-compliant PostgreSQL database schema?
Lloyd McKenzie (Nov 13 2019 at 13:05):
@Shahid Karimi, you might want to ask that question over on #implementers
Lloyd McKenzie (Nov 13 2019 at 13:05):
Or better yet, #storage for FHIR
Dave deBronkart (Nov 13 2019 at 17:19):
Please don't give me warnings based on inaccurate data in my chart and please don't miss possible health issues because my data is incorrect.
Can you imagine how completely incorrect the google insight would be when looking at Dave's chart with all the garbage in it?
For instance this actual one. (Cause: the hypokalemia dx, during my cancer treatment, was never removed.)
pasted image
Here again for newcomers to the thread is my post listing all the errors (and fictions) we found.
Note (again) that when I subsequently asked my insurance company if they wanted to do a complete audit of the bills they'd received, their answer was "Don't worry about it," strongly suggesting that they don't have an incentive to remove false charges.
Diego Bosca (Nov 15 2019 at 12:15):
As a note, this kind of project as it is described should be illegal under GDPR.
BTW, I cannot think of a benevolent use case for giving Google your full name
Lloyd McKenzie (Nov 15 2019 at 12:47):
I think any illegality under GDPR would be limited to "purpose of use". If the patients involved had authorized their data to be used for the purpose of research on new predictive analytics to improve care delivery, would GDPR prohibit contracting another organization to do the analytics research if the organization was contractually behaving as a subsidiary and restricted from retaining the data or making any other use of the data once the research was complete? I can't imagine the nightmare in Europe if you had to get patient permission every time you wanted to sub-contract a data analytics or maintenance function to another organization. Obviously it would be essential that the contracted organization be bound to the same GDPR expectations as the contracting organization.
I can certainly see value in giving a machine learning algorithm a patient's full name. If one of the things it's doing is checking for potential errors in data (e.g. incorrect merges), then any anonymization algorithm that was applied to the data could corrupt/interfere with the machine learning process. The only way it being 'Google' matters is that Google potentially has other business interests in the use of the data, so the terms of the agreement (and the oversight of the agreement) would need to ensure that the shared data was only used for its authorized purpose. In practice, that should be done regardless of who the contracted organization is. It's pretty clear that the agreement here does have those constraints in place, though it's not totally clear what oversight mechanisms exist.
Diego Bosca (Nov 15 2019 at 13:32):
You would need to inform them at least, so they can apply for their ARCO rights. I'm pretty sure it could be done as long as contingency plans, intended usage, etc. are clearly stated and patients are informed. Not informing the patients is the worst offender in this case IMHO.
Jose Costa Teixeira (Nov 15 2019 at 13:41):
If the patients involved had authorized their data to be used for the purpose of research on new predictive analytics to improve care delivery, would GDPR prohibit contracting another organization to do the analytics research if the organization was contractually behaving as a subsidiary and restricted from retaining the data or making any other use of the data once the research was complete?
IMO that's fair game for GDPR so no, GDPR does not prohibit that. The other organisation is the processor and they are also under GDPR but this is acceptable.
Jose Costa Teixeira (Nov 15 2019 at 13:43):
Google potentially has other business interests in the use of the data, so the terms of the agreement (and the oversight of the agreement) would need to ensure that the shared data was only used for its authorized purpose. In practice, that should be done regardless of who the contracted organization is. It's pretty clear that the agreement here does have those constraints in place, though it's not totally clear what oversight mechanisms exist.
if they have other purposes, then any use of data for those purposes becomes another story that goes the same way - they need permission (consent or other).
Jose Costa Teixeira (Nov 15 2019 at 13:45):
As for who the contracted organization is: they must be known, and the safeguards must be adequate (neither the controller nor the processors should have a significant risk of misusing or losing data). IIRC, GDPR does not require the patient to know about that.
Jose Costa Teixeira (Nov 15 2019 at 13:45):
In other words, contingency plans, intended usage, etc. are known by the privacy authorities, not necessarily by the patient.
Lloyd McKenzie (Nov 15 2019 at 13:48):
Right. GDPR requires purpose of use permission. The US does not. Analyzing data to improve quality, delivery, outcomes and/or efficiency is considered part of 'operations'. So this would presumably also be fine in Europe without patient notification so long as the initial data capture was done informing the patient that their data could be used as part of analysis to improve quality, delivery, outcomes and/or health system efficiency. The fact that Google might be doing the actual work would never need to be disclosed to the patients and no further permission on their part would be needed.
Jose Costa Teixeira (Nov 15 2019 at 13:49):
For GDPR, the patient is not the ultimate authority.
Dave deBronkart (Nov 15 2019 at 13:50):
I'll just add a quick note that two separate, co-equal concerns are woven through this topic: what's legal in different places, and the perceptions / concerns different people have.
I'll never assert that we have to "solve" people's worries, whatever that would mean. But if we care about adoption, we do need to be conscious of it.
Dave deBronkart (Nov 15 2019 at 13:51):
If this thread continues, perhaps we should start a #privacy stream. Someday.
Lloyd McKenzie (Nov 15 2019 at 13:56):
Part of the challenge is that people aren't aware that data sharing happens all the time and must happen for the system to function. Just as most people don't host their own email servers, hospitals are going to rely on external organizations to manage some of their data operations. Patients aren't going to be notified when that happens because it's not practical or reasonable to do so. What matters is whether the data being shared is shared with appropriate protections and appropriate oversight. That's the bit that matters - not whether patients had a say in the sharing. (I understand the arguments for patients having some say in how their information is used/benefit from use that may provide others with commercial value, but that's neither law nor cultural expectation in the U.S.)
Jose Costa Teixeira (Nov 15 2019 at 13:57):
As an example, suppose that there is some indication of criminal activity in whatever data we have about a patient - the patient does NOT get the absolute right to see that data or even know that it exists.
Jose Costa Teixeira (Nov 15 2019 at 13:57):
If this thread continues, perhaps we should start a #privacy stream. Someday.
:)
Jose Costa Teixeira (Nov 15 2019 at 13:59):
true. GDPR questions are not new, should be in Security and Privacy stream
Dave deBronkart (Nov 15 2019 at 14:02):
true. GDPR questions are not new, should be in Security and Privacy stream
Oh - I didn't even know it existed.
Would it be sensible, or clutter, to move the sociological / perception issues there?
Jose Costa Teixeira (Nov 15 2019 at 14:11):
when it comes to GDPR discussions, that space is already cluttered anyway. So I think this should go there, yes
Grahame Grieve (Nov 15 2019 at 19:26):
Just want to add a note from discussion elsewhere: it seems that the basic motivation for the whistleblower is that the existing HIPAA requirements (which the whistleblower believed were appropriate) weren't being conformed to by the Google team - at least to the degree that they believed that Google had not done sufficient internal training around compliance.
And this is quite important - unless leadership goes out of its way to provide strong leadership about data stewardship, there won't be any integrity there. Copies will exist all over the place. Good stewardship is important, and data is a toxic asset. (What this really is, is a lesson to other countries about why laws like HIPAA are important.)
Diego Bosca (Nov 15 2019 at 22:00):
The whistleblower speaks; it seems the purpose of the data collection was far from clear
https://www.theguardian.com/commentisfree/2019/nov/14/im-the-google-whistleblower-the-medical-data-of-millions-of-americans-is-at-risk
Dave deBronkart (Nov 15 2019 at 22:19):
And this is quite important - unless leadership goes out of its way to provide strong leadership about data stewardship, there won't be any integrity there. Copies will exist all over the place. Good stewardship is important, and data is a toxic asset. (What this really is, is a lesson to other countries about why laws like HIPAA are important.)
Well said. This could be a pinned/sticky post representing the (perhaps) "informed fear"? felt by many patients.
I will say too that while "espionage" of one sort or another may be common in different industries, there's a nasty immoral taste to it when someone's being snoopy and careless (hello Zuck and Protti) with what may be intensely personal information.
A health-related aspect that's not present in most industries: in some cases if someone fears having a fact leak out, they don't pursue therapy. That's both inhuman (a complete system failure, eh?) and dangerous to "the herd" as well, if it's contagious.
Security really is fundamental in healthcare.
Dave deBronkart (Nov 16 2019 at 01:44):
I'm posting this Nov 14 Guardian update, just so it's here if anyone wants it. It's not too long; I haven't read it carefully, but there seems to be no shocker.
I'm the Google whistleblower. The medical data of millions of Americans is at risk
Dave deBronkart (Nov 16 2019 at 13:40):
Okay, I took your advice and migrated this topic to the Security & Privacy stream. There's more news today - I'll post it there.
Dave deBronkart (Nov 17 2019 at 15:31):
I could kick myself for not thinking of this connection before.
SPM member Sara Riggare is a Parkinson patient and PhD candidate at Karolinska Institute. She has organized a one day conference Monday on her new concept of a "lead patient" ("spetspatient"). I'm stopping there on my way to DevDays - it was cheaper than coming directly!
Dana Lewis is speaking. And I realized, I bet a fair number of them will be interested in better access to their data via FHIR. I'll be watching for that.
http://konferensspetspatienter.se/program-in-english/
Grahame Grieve (Nov 20 2019 at 05:13):
David from Google has responded: https://www.blog.google/technology/health/google-health-provider-tools-launch/
Grahame Grieve (Nov 20 2019 at 05:16):
and Ascension: https://www.ascension.org/News/News-Articles/2019/11/19/21/39/Fulfilling-the-promise-of-digital-health-information
Dave deBronkart (Nov 20 2019 at 08:08):
Thanks, Grahame. I did a quick scan and found two causes for concern: "private / privacy" gets only a passing nod ("Yeah, health data is personal"), and there's not a word about patient consent.
So, so far this seems to have worrying signs of "We're not listening" or "Don't think about that - think about this," neither of which will fly well with the people who are concerned.
Note: I did not just say "Fail! I hate this man!" I'm pointing to the things that are recurring red flags when a company like Google or FB wants to pave over the issue.
I'll read more carefully later.
Lloyd McKenzie (Nov 20 2019 at 08:25):
I'm not clear why patient consent would be required here (ignoring for the moment whether regulations require that or not - which in this case I don't believe they do).
The objective isn't to mine the patient's data to extract knowledge (where you might theoretically say that patients have a right to compensation). The objective is just to learn to better integrate and present the data the health provider has from different sources. I.e., the purpose of use of the data is consistent with why the data was originally captured - present it to appropriate clinicians at the appropriate time to support the delivery of patient care. If Ascension were doing this themselves, patient consent certainly wouldn't be required. It's not reasonable to seek patient consent every time a healthcare provider outsources an IT function - which it appears is all that's happening here. Patients certainly should expect that appropriate data custody and protections are in place when outsourcing occurs, but the parties claim that has happened here. (I recognize the whistle-blower seems to disagree, so totally understand feeling some trepidation about what's actually happening.)
Dave deBronkart (Nov 20 2019 at 08:36):
Yes, "consent" is not technically the right word. I'll try to come up with better wording and edit the post.
The concern a lot of people have (note, I'm not stamping MY feet here) is that the company is tone deaf about the issue; you don't need to be a genius to anticipate the response to headlines about "Google" plus "millions of patient records."
Again, as I said up-thread, my main point is that we (the FHIR community) need to be aware of public sensitivity about this. I think there's a chance for our community to be true leaders on this subject in the public's eye ... nobody ever gained trust by simply saying "Oh, don't worry about that." :-)
I grin but it's a real issue, pragmatically.
John Moehrke (Nov 20 2019 at 14:51):
The unfortunate problem is that the best word the access control and legal community have for this is "Policy" which happens to be the same word they use for everything. There is some interest (2 people) in having a solution for encoding a policy for how data can be used and what obligations are placed on that use, so that a recipient of data has the rules in a processable form (not paper) to better enable appropriate use and avoid accidental (or malicious) misuse.
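As a rough illustration of what such a processable policy might look like, here is a minimal Python sketch: permitted purposes plus obligations that attach to any permitted use. The field names, purpose strings, and checking logic are hypothetical and not any HL7 specification; the FHIR Consent resource covers similar ground far more formally.

```python
# Hypothetical, minimal model of a machine-processable data-use policy:
# which purposes of use are permitted, and what obligations attach to a permitted use.
# Field names and purpose strings are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UsePolicy:
    permitted_purposes: set                                  # e.g. {"treatment", "healthcare-operations"}
    obligations: list = field(default_factory=list)          # e.g. ["no retention after the contract ends"]
    prohibited_purposes: set = field(default_factory=set)    # e.g. {"marketing"}

    def evaluate(self, purpose: str):
        """Return (permitted?, obligations the recipient must honour)."""
        if purpose in self.prohibited_purposes or purpose not in self.permitted_purposes:
            return False, []
        return True, list(self.obligations)

policy = UsePolicy(
    permitted_purposes={"treatment", "healthcare-operations"},
    obligations=["no retention after the contract ends", "no linking to other datasets"],
    prohibited_purposes={"marketing"},
)

print(policy.evaluate("healthcare-operations"))  # (True, [...the obligations...])
print(policy.evaluate("marketing"))              # (False, [])
```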
John Moehrke (Nov 20 2019 at 14:51):
See the deeper details I have placed on the #Security and Privacy stream
Dave deBronkart (Nov 20 2019 at 17:24):
The unfortunate problem is that the best word the access control and legal community have for this is "Policy" which happens to be the same word they use for everything.
ROFL! I've said LOL a lot recently, but not ROFL!
Keith Boone (Nov 22 2019 at 17:11):
Thanks, all. I think Grahame hit the point correctly, and especially, why on EARTH would they be covert about this?? Why??
It's a common business practice to protect customer relationships.
1. Not every business that does work with other businesses publicizes their customer list. Who your customers are is intellectual property, and not everyone shares who their customers are without some thought being put into it.
2. Other vendors I've worked with in the past don't and won't tell the public who their customers are except in special circumstances.
3. Some customers prefer to not have it known who they use for certain services in BAA and similar arrangements. That may expose to their competitors how they provide value-add.
There are a lot of reasons why this sort of information isn't commonly publicized. Do you know who your provider uses to connect to your insurer? B/c they likely use a service to do it. Do you know who your HCP uses to handle their billing? They likely don't do it themselves. A lot of these functions are outsourced to organizations that specialize in that function, and can therefore take advantage of economies of scale that aren't available to the single organization that needs that function.
Should there be different rules for BIG names like Ascension or Google with regard to publication of this sort of relationship? Why?
FWIW: For Google, and any other, this activity has to be firewalled from activities in other parts of the business, and very likely, separate business entities have to be created to ensure that firewall is strong. I strongly suspect that Google wholly owns the company that does the work, and that company has access to Google technology and services, but that the reverse is not true with regard to the data that organization has access to.
A similar thing happens with healthcare systems that also provide their own healthcare plans, and there's a lot of attention on these sorts of relationships to ensure that private data remains accessible only to the entity that provides the service (healthcare, or insurance).
Last updated: Apr 12 2022 at 19:14 UTC