Flogging the data until it confesses

Did you ever hear the old joke where the boss says floggings will continue until morale improves? Flogging the data until results improve, or until the data confesses, is not uncommon. Too bad.

In my career I’ve worked with companies with over 100k covered lives whose claim costs could swing widely from year to year, all because of a few extra transplants, big neonatal ICU cases, ventricular assist cases, and the like.

Here are just a few of the huge single case claims I’ve observed in recent years:

  • $3.5M  cancer case
  • $6.0M  neonatal intensive care
  • $8.0M  hemophilia case
  • $1.4M  organ transplant
  • $1.0M  ventricular assist device

This is not a complaint. After all, this is what health insurance should be for: huge, unbudgetable health events.

All plans have roughly one organ transplant every 10k life years, most of which will cost about $1M over six years. A plan with 1k covered lives will have such an expense on average every 10 years. Of course, the company may have none for 15 years and two in the 16th year. The same goes for $500k+ ventricular assist device surgeries.
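The lumpiness described above is just Poisson statistics at work. A quick simulation, purely illustrative and using the rates above as assumptions (one transplant per 10k life years, a plan of 1k lives), shows how "none for 15 years, then two in one year" is entirely normal:

```python
import random

# Illustrative assumptions from the rule of thumb above:
# transplants occur at roughly 1 per 10,000 life years,
# and the plan covers 1,000 lives (expected rate: 0.1 per year).
random.seed(42)
lives = 1_000
rate_per_life_year = 1 / 10_000
expected_per_year = lives * rate_per_life_year  # 0.1

def transplants_in_year() -> int:
    # Approximate a Poisson draw by treating each covered life as an
    # independent, small-probability Bernoulli trial for the year.
    return sum(1 for _ in range(lives) if random.random() < rate_per_life_year)

years = 30
counts = [transplants_in_year() for _ in range(years)]
print(f"Expected per year: {expected_per_year}")
print(f"Observed counts over {years} years: {counts}")
print(f"Years with none: {sum(c == 0 for c in counts)}, "
      f"years with two or more: {sum(c >= 2 for c in counts)}")
```

Most simulated years show zero transplants, with occasional clusters; a small employer judging its plan by any single year's experience is mostly reading noise.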

Looking at claims data for small groups is perilous, and sometimes for large groups as well. Because of the high cost and relative infrequency of so-called “shock” claims, those over $250k, you need about 100k life years for the claims data to be even approximately 75% credible. When a group with 5k lives says it did something that cut claims costs, it can’t really know whether the change made a significant difference for a couple of decades.
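One rough way to quantify this is the classical square-root (limited-fluctuation) credibility rule, Z = sqrt(n / n_full). The sketch below takes the article's rule of thumb at face value, back-solving a full-credibility standard from "100k life years is about 75% credible"; both figures are assumptions for illustration, not an actuarial standard:

```python
import math

# Hypothetical full-credibility standard, back-solved from the rule of
# thumb above: 0.75 = sqrt(100,000 / N_FULL)  =>  N_FULL ~ 178k life years.
N_FULL = 100_000 / 0.75**2

def credibility(life_years: float) -> float:
    """Square-root (limited-fluctuation) partial credibility, capped at 1."""
    return min(1.0, math.sqrt(life_years / N_FULL))

for lives, years in [(1_000, 5), (5_000, 5), (100_000, 1)]:
    z = credibility(lives * years)
    print(f"{lives:>7,} lives x {years} yr -> Z = {z:.0%}")
```

Under these assumptions, even five years of data from a 5k-life group carries well under half credibility, which is why the group in the next example couldn't draw the conclusions it wanted to.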

Here is an example. A smallish group, about 3k covered lives, asked me to help calculate how much their wellness plan was saving. They had all employees sorted into three tiers: active wellness participants, moderate participants, and non-participants. I warned them they didn’t have enough data to be credible, but they proceeded anyway. They expected active participants to have the lowest claim costs, and so on. When the data were reviewed, there was a perfect reverse correlation: active wellness participants had the highest claim costs, moderate participants the next highest, and non-participants the lowest. In their final report, from which I had recused myself, they subtracted out big claims by the active and moderate participants to get the results they wanted. In short, they flogged the data until it confessed. Alas.

One large company claimed huge reductions in plan costs from adding a wellness program. It turned out that during the period in question they had also implemented an “early out” incentive. Upon examination, the early-out program produced a big reduction in the number of older employees, which more than accounted for the reduction in claims costs.

Here is yet another example. At a conference a few years ago, a presenter from a small company, about 1k covered lives, claimed to have kept its health costs flat for five years through wellness initiatives. While he got a big ovation, his numbers just didn’t add up. I asked him a few questions after his speech about what other changes he had made during that period. He said they had lowered their “stop loss” limit from $100k to $50k a few years earlier. Then he admitted to excluding his stop-loss premium costs, which were skyrocketing, from his presentation. With a little mental arithmetic I added those back in, which revealed that his company’s total health costs were rising at the same rate as everyone else’s, perhaps even a little faster. Hmmm. I don’t think he deliberately misled the audience. He just didn’t know better. When you hear boasts of big short-term impacts of wellness programs, beware of confirmation bias.

When a company claims they implemented something that caused their health plan costs to drop 15% or so, ask a few questions:

  1. The big question: did the company adjust for plan design changes, such as raising deductibles and copays, that merely shifted costs to employees?
  2. Did the changes really save claim dollars?
  3. Did they factor in stop-loss premiums?
  4. How many life years of data did they observe?
  5. Did the company exclude large or “shock” claims? (This is not uncommon, especially among wellness vendors.)
  6. Did the company experience any big changes in demographics, such as an early retirement program or layoffs that hit older workers hardest?

When I’ve asked those kinds of questions, I’ve almost never seen a big cost-reduction claim by a small company hold up under scrutiny, and the same goes for some big companies too.

Today, flogging the data to get the desired results is all too common. That’s no surprise; academics and big pharma keep getting caught doing the same thing. Skepticism is a good thing.


Tom Emerick

Tom’s latest book, “An Illustrated Guide to Personal Health”, is now available on Amazon.
