FHIR Chat · My upcoming fistfight with Anthem for a correction · patient empowerment

Stream: patient empowerment

Topic: My upcoming fistfight with Anthem for a correction


Dave deBronkart (Aug 31 2020 at 22:00):

My Rx insurance plan sent my doctor a false notice that I'm non-compliant, and in 50 minutes on the phone NOBODY CAN FIGURE OUT WHERE IT CAME FROM. (I don't fault them - they were all intelligent & caring & working hard.)

Fistfight has begun on Twitter

Michele Mottini (Aug 31 2020 at 22:54):

Are you Richard or Dave?

Michele Mottini (Aug 31 2020 at 23:14):

If you have an Anthem member portal login you can get the myFHR app on your phone and download all the data they have about you
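(Aside: under the hood, apps like myFHR pull this data over the standard FHIR REST API, e.g. a search like `GET [base]/MedicationRequest?patient=[id]`, which returns a Bundle of resources. A minimal sketch of reading such a Bundle — the sample payload below is hand-made for illustration, not real Anthem data, and a real response would be much larger:)

```python
import json

# A FHIR "searchset" Bundle is the envelope a FHIR server returns for
# searches such as GET [base]/MedicationRequest?patient=[id].
# This sample is fabricated for illustration.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "MedicationRequest",
                  "status": "active",
                  "medicationCodeableConcept": {"text": "lisinopril 10 mg"}}},
    {"resource": {"resourceType": "MedicationRequest",
                  "status": "stopped",
                  "medicationCodeableConcept": {"text": "atorvastatin 20 mg"}}}
  ]
}
""")

def active_medications(bundle):
    """Collect the display text of every active MedicationRequest in a Bundle."""
    meds = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if (res.get("resourceType") == "MedicationRequest"
                and res.get("status") == "active"):
            meds.append(res["medicationCodeableConcept"]["text"])
    return meds

print(active_medications(sample_bundle))  # → ['lisinopril 10 mg']
```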

Dave deBronkart (Sep 03 2020 at 16:31):

I'll be updating this

John Moehrke (Sep 03 2020 at 16:41):

I'm bringing popcorn to the workgroup call today. :popcorn:

Dave deBronkart (Sep 03 2020 at 16:53):

I won't spend workgroup time on it - it's a classic example of something best done in a chat app where people can skip it :smile:

It turns out to be a War Of The Blind Robots story.

  1. When I signed up with Anthem, they defaulted me into an MTM program - medication therapy management - without telling me what I was getting into.
  2. When I tried to have my blood pressure med filled at CVS, with a minimal co-pay for 90 days, their computer couldn't reach Anthem's, so I said screw it and told them never mind the insurance.
  3. Uh-oh: the robot said "WE HAVEN'T SEEN HIM BILL US FOR THIS MEDICATION THAT WE WON'T PAY FOR ANYWAY!!!" Boom: non-compliance letter, generated (apparently) by an Anthem subsidiary, which none of the phone customer service people had been told about.
  4. In parallel, btw, unrelated to Anthem, CVS's robot had a similar freak-out and kept robo-calling me, telling me it was time to order a refill, even while I had a 90-day supply in my medicine cabinet, from CVS.
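
(The failure mode in step 3 can be sketched in a few lines: an adherence check computed purely from claims data flags any fill that happens outside insurance. The function name and 90-day window below are hypothetical, chosen only to mirror this story, not Anthem's actual logic:)

```python
from datetime import date, timedelta

def claims_say_noncompliant(claim_fill_dates, today, days_supply=90):
    """The 'blind robot' check: compliant only if a *claim* covers today.

    Cash-pay fills never generate a claim, so this flags the patient
    even when the medicine cabinet holds a full 90-day supply."""
    return not any(d <= today <= d + timedelta(days=days_supply)
                   for d in claim_fill_dates)

# The patient skipped insurance at CVS, so the insurer saw zero claims:
print(claims_say_noncompliant([], date(2020, 8, 31)))
# → True: the non-compliance letter goes out
```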

I firmly believe we need a rigorously defined term for artificial opposite-of-intelligence, with subtypes for false assumptions, e.g. "If he didn't tell us, it must be that he's non-compliant."

CVS has lost all my business, btw, because their nag-bot recording did not have a menu option for "stop calling me." Even the store personnel couldn't do that. To shut it off they required me to call a different 800 number and talk to a human.

Anthem has accepted all my info and says they're looking into it.

John Moehrke (Sep 03 2020 at 17:42):

This is why AI should be seen as assisting, not acting without oversight. This was a lesson the medical device industry learned 20 years ago with algorithms designed to do data analysis: EKG analysis, image analysis, etc.

John Moehrke (Sep 03 2020 at 17:42):

As a wise tweet once pointed out: If all your friends are jumping off a cliff, an AI / ML algorithm would push you off.

Dave deBronkart (Sep 03 2020 at 18:08):

John Moehrke said:

If all your friends are jumping off a cliff, an AI / ML algorithm would push you off.

Which is EXACTLY the opposite outcome of how my mother taught that scenario.

David Pyke (Sep 03 2020 at 18:11):

Why did your mother know about AI/ML?

Dave deBronkart (Sep 03 2020 at 18:12):

She knew nothing of algorithms, except one: her only framework was (and is) CS. (Common sense.) Use case:

Me: "But all my FRIENDS are doing ____[fill in the blank]_____"
Mom: "If all your friends were jumping off a cliff, would you??"

Abbie Watson (Sep 03 2020 at 18:12):

AI/ML algorithms need to (a) be explainable, and (b) have different 'gears' they can shift between, i.e., putting them in forward, reverse, neutral, etc. With healthcare algorithms, those gears would probably be 'cost minimizing', 'life maximizing', 'comfort maximizing', 'risk reduction', etc.


Last updated: Apr 12 2022 at 19:14 UTC