On Payola and X-rays: Part 1

Untangling the components of risk

It’s worth thinking about the many ways in which incentives drive unexpected behaviors.

Recently I was trying to digest four cataclysmic AI scenarios, and found myself pondering the difference between severity and probability as we apply them to risk.

Probability is the easier of the two to paraphrase, though the statistical definition quickly gets wonky. But to simplify: given a period of time over which you might make separate observations, divide the number of times you observe an event by the number of observations. That’s your probability.
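In code form, that simplification is just a ratio. A minimal sketch (the function name here is my own, not anything standard):

```python
def empirical_probability(event_count: int, observation_count: int) -> float:
    """Observed frequency: times the event occurred / total observations."""
    return event_count / observation_count

# e.g., rain on 3 of the 30 days you kept notes:
print(empirical_probability(3, 30))  # 0.1
```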

Severity is generally more loosely measured and less well understood: What impact might an event have on those exposed to it? Somewhere between negligible and catastrophic captures most events, but usually these are estimates and subject to dispute about the particulars.

Risk, then, is the combination of the two: a catastrophic scenario with a 1% probability of occurring in the next 100 years/hands dealt/trading days is far riskier than a scenario with minor impact but a 50% probability. Sizing one’s bets - aligning one’s exposure to the magnitude of the impact and the probability of occurrence - is the only way to truly manage risk.
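A toy calculation makes the comparison concrete. The severity units below are invented purely for illustration; only the relative sizes matter:

```python
# Made-up severity units; the point is the comparison, not the values.
catastrophic = {"probability": 0.01, "severity": 1000}  # rare but devastating
minor = {"probability": 0.50, "severity": 10}           # common but mild

def expected_impact(scenario: dict) -> float:
    """Risk as probability multiplied by severity."""
    return scenario["probability"] * scenario["severity"]

print(expected_impact(catastrophic))  # 10.0
print(expected_impact(minor))         # 5.0 -- the rare catastrophe carries twice the risk
```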

While reading Antifragile, Nassim Taleb’s follow-up to The Black Swan and easily one of my favorite books this year, I came to understand a little more about how one might approach the inverse definition of risk: increasing one’s exposure to scenarios with a large potential upside and little cost greatly improves one’s overall outcomes.

One clear example: always accept an invitation to a meal with a distant acquaintance. The potential upside (learning something new; connecting new dots; possibly a wonderful meal and good conversation) far outweighs the risks (dull conversation; terrible food; a possible insult to the ego; mild food poisoning), and the cost is quite reasonable (a few hours of an evening; perhaps you bring the wine).
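Sketched as a payoff table (with numbers I've made up entirely; the shape matters, not the values), the asymmetry looks like this:

```python
# Invented payoffs for the dinner invitation: the downside is small
# and capped, while the upside is occasionally large.
outcomes = [
    (0.2, +50),  # a new connection, a great conversation
    (0.8, -5),   # a dull evening and the cost of the wine
]

expected_value = sum(p * v for p, v in outcomes)
print(expected_value)  # 6.0 -- positive on average, with the loss strictly bounded
```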

All of which is a long-winded way to get to this point: when we design systems and aim to measure their impact, we need to be intensely aware of the risks involved in both the design decisions and the choice of metrics.

Systems of consequence that attempt to apply metrics in a bid to improve performance immediately subject themselves to rigging. I learned recently of the “payola” guitar of the 1950s (ht Tom Whitwell), a clever bit of mechanical engineering designed to produce no other outcome than increasing compensation for the artist at minimal additional effort:

“jury-rigged with four pickups wired into extra jacks that would each plug into a separate channel on the recording console... the session player could collect four times union scale for playing four slightly different versions of the same guitar solo.”

Another example: a few weeks ago, my son sustained a back injury at school. (Parents all know the rush of terror-laden adrenaline that ensues when the phone rings in the middle of the day and the caller announces, "this is the school nurse...") Short version: he's fine. But the follow-up to that phone call consisted of:

  • Drive to school: 20 minutes
  • Drive to doctor's office: 5 minutes
  • Time at doctor's office: 10 minutes
  • Walk two blocks to the hospital's Radiology department for X-rays: 10 minutes
  • Time spent waiting in Radiology while three nurses make five phone calls to the doctor's office and the insurance company to get the X-ray prescription rewritten, have the doctor call in a referral to the "specialist" radiologist, and wait for insurance approval: 2 hours 15 minutes
  • Waiting for the X-ray technician: 5 minutes
  • Taking the X-rays: 5 minutes
  • Time for the radiologist to read the X-rays and declare my son's spine "not broken": 8 minutes

From phone call to final prognosis ("he's fine, but no rock climbing for two weeks") took about three hours and twenty minutes, 68% of which was spent managing paperwork.
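For the skeptical, that 68% falls straight out of the timeline above. A quick sanity check:

```python
# Tallying the timeline above (all values in minutes):
segments = {
    "drive to school": 20,
    "drive to doctor's office": 5,
    "time at doctor's office": 10,
    "walk to radiology": 10,
    "paperwork, phone calls, and approvals": 135,  # the 2h15m wait
    "waiting for technician": 5,
    "taking X-rays": 5,
    "radiologist's read": 8,
}
total = sum(segments.values())  # 198 minutes, about 3 hours 20 minutes
share = segments["paperwork, phone calls, and approvals"] / total
print(f"{total} minutes total; {share:.0%} on paperwork")  # 198 minutes total; 68% on paperwork
```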

Why?

"The American Healthcare system sucks," is one ready answer, and certainly one I mumbled. But it's unsatisfying, if provably true. To wit, I recalled during our wait for bureaucratic deliverance, that I had experienced something quite similar in my youth.

Around the time I was 11 or 12, I spent a summer at a camp in the Pocono Mountains, northwest of Philadelphia. At some sort of ice cream social/camp dance, I was running - backwards, like an idiot - around the dance hall. Heaven knows if anyone was chasing me, but I tripped, fell, and smacked the back of my head on the concrete floor.

What I remember following that is a bit hazy (it has been 30+ years!), but I clearly remember the camp nurse (who was probably also married to the camp director) driving me to the ER, where I received exactly 8 cranial x-rays. I remember with startling precision the triangular hard-foam pad placed under my neck as I lay on the gurney, to tilt my head just-so. I remember the phrases "subdural hematoma" and "no fracture." I remember being excused from swimming for the remainder of the camp session.

What I do not remember is my parents being involved on any level whatsoever. And, to their credit, when I recounted this story many years later, my mother insisted that no one had ever told them - at the time or at any point after - about it.

Contemporaries of mine will say, "yeah, that was life in the '80s. If it wasn't broken or gushing, no one made a big deal."

Liability mores and standards have changed, for sure, but so has something else: health insurance, and the (mostly hidden, until they're not) incentive systems it employs.

One of the reasons no one mentioned this incident to my parents was that, at the time, no one would have expected them to pay for the X-rays, or any of the medical treatment. Certainly not out of pocket. Frankly, I think it's unlikely anyone would have been expected to pay anything at all. Moreover, the cost of eight X-rays was not exorbitant, and my guess is the camp would have gladly paid cash on the spot.

So what has changed? Why, 30 years later, do 2-3 X-rays (which now involve no film, no chemical developing agents, and a radiologist who is almost certainly not stateside) require the mystical incantations of three members of the hospital office staff before they can be dispensed?

Everything must be coded just-so.

Just So the reporting infrastructure can determine if the X-rays were "medically necessary".

Just So the billing infrastructure can assess fees to the correct entity.

Just So the Explanation of Benefits can explain... well, very little in meaningful terms.

(Also, in many cases, Just So you can be billed for the full cost of the procedure anyway, regardless of whether your insurance covers it.)

I mean little of this story to portray myself as an angry consumer shaking his fist at the sky, but more so to ask: what must the design process have been for a system in which 2/3 of the time and effort is spent coding and documenting healthcare, and only 1/3 is spent dispensing it?

My own experience tells me: it was long, complex, and biased towards expediency. My guess is that any effort at making the experience "patient-centered" was tacked on at the end.

What risks are associated with a system designed this way?

  • One possibility is that costs are sometimes not adequately accounted for, and that the "responsible party" is not properly held "responsible." At a large enough scale, this might mean that providers go un- or under-paid.
  • Another possibility is that outcomes are not properly tied to procedures. With enough of these errors in the system, patients might receive unnecessary procedures.
  • A third possibility is that patients might find the process so alarmingly confusing, or difficult, or arduous, that they avoid seeking healthcare in all but the most severe circumstances. Over the long term, this likely makes most conditions harder and more complicated to treat.

There are, I'm sure, more risks that I haven't identified (and I'd love to hear what they might be). For the moment, I'll end on this note:

When we look at just the probability, and not the severity of impact, we misjudge the true risk.

In Part 2, I'll come back and look in more detail at the probability and severity of these risks.

[Photo: a teenager climbing an indoor rock-climbing wall, its variously shaped, color-coded holds marking designated routes]

Rock Climbing, Again


-Ben