Analysis Example: The Worst of Two Options
Rotem Landesman
Calling the doctor to schedule my appointment is, and no one has tried to convince me otherwise so far, a more horrific experience than anything else I’ve had to do since trying to make it on my own. I joke, but not entirely; in my and many others’ experiences, navigating the twists and turns of the American healthcare system, with its unreliable insurance and often questionable motives, has left me jaded, annoyed, and most of all tired. I’ve not read any studies to this end, but I can say with careful certainty that I don’t think this is the way healing is supposed to feel.
That’s why, when I first heard the pitch for Care-Box, an AI-powered “doctor box” that has rolled out to the market in the past few weeks, I let myself be cautiously optimistic. It’s exactly what you’re imagining - a giant iPad of a box that a patient steps into, gets scanned by a few hundred sensors, and inputs some personal information about themselves as well as their credit card details, and poof - out comes the diagnosis, prescriptions if needed, and an easy-to-follow care plan on a dedicated app that’s integrated beautifully into this black box scenario. The company can already administer blood tests and other exams, and aspires to offer MRIs and even open-heart surgery in the future (according to its CEO).
That ambition points to a scary and fascinating future, but not one as daunting as the thought of all of the data, personal or otherwise, that needs to go into these machines for them to work properly. On the surface, considering the decision tree most diagnoses follow anyway, more data is a good thing. More decision points made by an unbiased machine might uncover some minor piece of information about ourselves that a doctor, with their numerous patients and plain human-ness, might have missed.
Sure, we’ve heard about the dangers of misdiagnosis by AI-powered systems. In a recent study looking at the diagnoses given by ChatGPT [1], the program was found to be 77% accurate overall, and only 60% accurate when the information it was fed was based on patients’ initial interactions with a doctor. Not terrible odds, but not great ones considering we’re thinking about people’s health. But reading through the press release, the CEO of Care-Box said something that captured my imagination and surpassed those fears in an instant:
“Aoun explained that the member agreement states that Forward will not and does not sell any personal data. ‘We do not have that as a business model,’ he emphasized, adding, ‘The nice thing is we don’t take insurance, and so because we don’t take insurance, we’re really not giving your data to anyone. The only exception is with referrals to specialists. We’ll do that for you. But we’re not in the business of sending out your data. I don’t think that’s cool.’”
Let’s break that down. A CEO of a giant, successful health tech company who’s very happy they don’t take insurance? Not cool. The US has been experiencing an insurance crisis for decades - diminished by the Affordable Care Act of 2010, but by no means solved - and we should’ve learned better by now. Everyone deserves affordable healthcare, and everyone deserves to be able to get the care they want and need in a way that does not put more burden on them than it must.
But if I may, take a look at the second part of that quote. Aoun mentions that these magical doctor boxes, unlike a traditional doctor’s office, do not share their patients’ information, because being disconnected from the insurance industry leaves them no incentive to do so. Isn’t that… a good thing?
If we recall what happens at a human doctor’s office, perhaps we can shed light on the terrifying thought that in order to not have our information sold to third-party insurance companies, we need to be diagnosed by an iPad. When we step into a doctor’s office, our file is opened and updated with information about our health, previous visits, concerns we raised, and possible future points where a check-up would be beneficial. That information is protected by doctor-patient confidentiality rules as well as HIPAA, a federal law that creates “national standards to protect sensitive patient health information from being disclosed without the patient’s consent or knowledge”. One of the permitted uses and disclosures outlined in the act allows disclosure of personal health information without the patient’s knowledge for “treatment, payment, and healthcare operations”. That makes sense - in order to know what they’re paying for, insurance companies should have access to the details of treatments and information about my health.
On the flip side of that coin, however, health insurers use that data to build profiles on us and estimate how much care we will need in the future in order to determine our current premium costs. If I am, for example, a woman in my 30s, an insurer might assume I’ll get x vaccines a year, x check-ups, probably get the flu once or twice a year, and then multiply that cost by the number of patients who, like me, fall into the bucket of “30s, woman” to see how much money it’ll need in its reserves to make sure we have what we need in the future.
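To make that bucketing arithmetic concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the bucket name, the visit rates, and the dollar amounts are invented for illustration, and real actuarial models are vastly more complicated - but it captures the multiply-the-persona-by-the-headcount logic described above.

```python
# Illustrative only: a toy version of the persona "bucketing" described above.
# Every number here is invented; real actuarial models are far more complex.

BUCKETS = {
    "woman, 30s": {
        "members": 10_000,        # people the insurer has placed in this bucket
        "expected_events": {      # event name: (events per year, cost per event, $)
            "vaccine": (2, 40.0),
            "checkup": (1, 150.0),
            "flu_visit": (1.5, 90.0),
        },
    },
}

def expected_cost_per_member(bucket):
    """Expected annual cost for one member: sum over events of rate * unit cost."""
    return sum(rate * cost for rate, cost in bucket["expected_events"].values())

def required_reserve(bucket):
    """Money set aside for the whole bucket: per-member cost times headcount."""
    return expected_cost_per_member(bucket) * bucket["members"]

for name, bucket in BUCKETS.items():
    print(f"{name}: ${expected_cost_per_member(bucket):,.2f} per member per year; "
          f"${required_reserve(bucket):,.2f} total reserve")
```

The next paragraph worries about exactly what this arithmetic leaves out: any member whose actual needs deviate from their bucket’s averages.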
But what if those stats are wrong? What if I want to get more check-ups than my bucket provides, break my foot accidentally, or need stitches, and my insurance doesn’t have enough money to cover me? Can my insurance turn vindictive and hike up the cost of these medical needs because of the information it has on me, perhaps because I violated its neatly outlined persona? Perhaps, and it can get even more complex if I lose my job like people do, switch insurers, get married, have children, or do any one of the myriad things that a normal life entails.
Now, all of a sudden, a giant iPad in the middle of a mall retaining my information, with its CEO claiming that they won’t sell it to third parties since doing so isn’t part of their business model, sounds more promising. Making money off of selling my data is “not cool”, as we heard the company mention - which honestly sounds like a binding contract to me. The only problem is that once that data is given to the company, it’s theirs forever and without restrictions. If the company gets bought or dismantled in the next few years, my data travels with it - and who knows if the next CEO will have the same values (or if the current one will keep his)?
The choice seems to come down to whom we’d like to trust more, or rather whom we distrust less: a private company that claims to have no intention of selling my health information but is not bound by the same federal privacy laws and is free to do with my data as it wishes in the future, or my insurance company using my data to profile me and hike up the price I have to pay to live a healthy life. Not to end on a pessimistic note, but it seems like someone more talented than me should rethink the incentive system around our health, and perhaps consider that on the other side of the line there are real humans.
[1] Rao, A., Pang, M., Kim, J., Kamineni, M., Lie, W., Prasad, A. K., ... & Succi, M. D. (2023). Assessing the utility of ChatGPT throughout the entire clinical workflow: Development and usability study. Journal of Medical Internet Research, 25, e48659.