It’s “Turtles all the way down”: an amazing bargain on Terry Pratchett’s Discworld novels.

Not my usual topic, of course, but Humble Bundle has the most amazing bargain until the end of January on 39(!) Discworld novels:

https://www.humblebundle.com/books/terry-pratchetts-discworld-harpercollins-books

The novels fall into what you might describe as the comic fantasy genre, and if you like that sort of thing, this is an amazing bargain on some of the best books the genre has to offer.

So what is Discworld? “It’s a flat planet balanced on the backs of four elephants which in turn stand on the back of a giant turtle.” The books cleverly parody many traditional fantasy tropes, and how can you not like a series where a hero named “Cohen the Barbarian” pops up from time to time?

Finally, it’s important to always remember that, as this well-known story tells us, it’s “turtles all the way down”:

“A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”

The number needed to treat: A better way to explain efficacy

In my previous blog entry, I tried to explain why absolute risk reduction was the right number to look at. The trouble is that absolute risk reduction is expressed as a percentage, and people generally hate percentages and usually need to do some further mental gymnastics to process the information. So statisticians in the 1980s came up with another way to look at absolute risk reduction: it’s called the number needed to treat (usually abbreviated NNT). NNT is actually pretty easy to understand. For example, suppose a treatment cures everyone treated; then the NNT is 1. One person, one treatment, one cure. Similarly, if when you treat two people one is cured, the NNT is 2. If you need to treat three people to cure one, the NNT is 3, and so on.

But amusingly enough, all you need to do to get the NNT is basically flip over the absolute risk reduction percentage. For example, if the absolute risk reduction is 1%, that means 100 people need to be treated to cure one. This means the NNT is 100, but that is exactly:

1/1%

Of course, since fractional people are kind of weird, the NNT is always rounded up to the next integer. So, for example, if the absolute risk reduction is 6%, the NNT would mathematically be 1/6% or 16.666…, but we say the NNT in this case is 17.

Yes, NNT gets more complicated if a treatment could conceivably harm some people and help others, but in most cases, just flipping over the absolute risk reduction and rounding up gives you the NNT.
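To make the flip-and-round-up rule concrete, here’s a short Python sketch (the function name `nnt` is mine, not anything standard):

```python
import math

def nnt(arr_percent):
    """Number needed to treat, given the absolute risk reduction
    as a percentage (e.g. 6 for 6%). Flip it over (1/6% = 100/6)
    and round up to the next whole person."""
    return math.ceil(100 / arr_percent)

print(nnt(1))   # 100: a 1% absolute risk reduction means treating 100 people to help one
print(nnt(6))   # 17: 100/6 = 16.66..., rounded up
```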

While I suggested in my previous blog that pharmaceutical companies should be required to give the absolute risk reduction whenever they give a relative risk reduction in an advertisement, requiring them to add the NNT in the same font might be an even better idea!

The Semaglutide (Wegovy) Clinical Trial: Or How Headline Numbers Mess with Both Doctors’ and Patients’ Minds

The recent SELECT study broadcast that using Wegovy-sized (2.4 mg weekly) doses of Semaglutide for three years would reduce the risk of cardiovascular events among some pretty seriously ill people by 20%. Unfortunately, when you dig deeper, what you discover is that what the study really showed is that if you treated 1,000 of these people for three years, you would reduce the number of cardiovascular events by 15. Moreover, at current list prices, this reduction of 15 events would cost almost 50 million dollars. How could a true headline number be so different from the actual reality?

Well, it all started with a press release way back on August 8, 2023 that, yes, heralded “a 20% reduction in cardiovascular events” in non-diabetic overweight people with pre-existing cardiovascular or peripheral artery disease. Heck, 68% of them had already had a heart attack, and they were 62 years old on average. These were not healthy people. And so the press release made it seem like using Semaglutide at a Wegovy-like dose of 2.4 mg injected each week would be a game changer. Of course, we math types were eagerly awaiting the underlying numbers, because press releases too often confuse the issue: they don’t tell you all of the numbers, only the ones that make the trial look as good as possible.

When the actual trial numbers came out (https://www.nejm.org/doi/full/10.1056/NEJMoa2307563), as many of us expected, this was a wonderful example of why “headline numbers” from clinical trials in a press release need to be viewed with suspicion. The problem is that headline numbers are always about “relative risk reduction.” And relative risk reduction, while occasionally a useful statistic, is almost always a pretty small piece of the puzzle. So, let me first explain what relative risk reduction is and why it doesn’t tell you anything about how few events you actually prevent.

To understand why, let me give you an exaggerated example. Imagine a drug company tells you that its (very expensive) wonder drug reduces the risk of death by 50% in a fairly common disease. Sounds great, no? Then you dig a little further and discover that if 10,000 people have the disease, only two die if left untreated; with the wonder drug, only one dies. Yep, a 50% reduction, no lies here. And then you dig a little deeper and find out the drug is likely to put you in the hospital, make 10% of the people who use it deaf, ruin the kidneys of 10% more, etc., etc. Now you may be thinking: is all that worth it to save one life? That is a hard question, but in any case, congratulations, you have just discovered “absolute risk reduction.” That’s the reduction in the actual number of events. And, as you have also just figured out, absolute risk reduction is a much better number to focus on than relative risk reduction.
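In Python, the wonder-drug arithmetic looks like this (the numbers are the made-up ones from the example above):

```python
n = 10_000            # people with the disease in each scenario
deaths_untreated = 2
deaths_treated = 1

# Relative risk reduction: compares the two death counts to each other
rrr = (deaths_untreated - deaths_treated) / deaths_untreated

# Absolute risk reduction: compares actual death *rates* in the population
arr = deaths_untreated / n - deaths_treated / n

print(f"relative: {rrr:.0%}")    # 50% -- the headline number
print(f"absolute: {arr:.2%}")    # 0.01% -- one life per 10,000 treated
```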

More precisely, relative risk reduction measures how a treatment works in the treated group versus the group that got the placebo. It’s a pure percentage. Pure percentages like this are not tied to the actual number of events: they are derived from the events but basically hide how many there were.

Let’s try another example: suppose your trial has 2,000 patients; 1,000 got the treatment and 1,000 got a placebo. In the treated group you had 8 events, and in the untreated group, 10 events. That’s a 20% relative risk reduction, since 2/10 is 20%. But absolute risk reduction looks at the actual cases relative to the size of the groups. In other words, it takes into account the fact that most people in a trial don’t have any “events” at all. In our example, the treatment prevented only 2 events, and as a percentage of the group size, that’s really small:

10/1000 − 8/1000 = 2/1000 = 0.2%

In other words: a 20% relative versus a 0.2% absolute risk reduction. The absolute figure is 1/100 of the relative one, and it wouldn’t make such a great headline number.
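Here is the same two-group arithmetic as a sketch in Python, with the group sizes made explicit:

```python
n_treated, events_treated = 1_000, 8
n_placebo, events_placebo = 1_000, 10

treated_rate = events_treated / n_treated   # 0.8%
placebo_rate = events_placebo / n_placebo   # 1.0%

rrr = (placebo_rate - treated_rate) / placebo_rate  # relative risk reduction
arr = placebo_rate - treated_rate                   # absolute risk reduction

print(f"relative: {rrr:.0%}, absolute: {arr:.1%}")  # relative: 20%, absolute: 0.2%
```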

Obviously, I and many other math types think we would all benefit if drug companies were forbidden from broadcasting relative risk reduction without an equal emphasis on absolute risk reduction in their press releases and advertisements. A headline number with a high relative risk reduction uses the well-known phenomenon of “anchoring” (https://en.wikipedia.org/wiki/Anchoring_effect) to mess with people’s minds!

O.K., what about the SELECT trial? It was a big trial: 17,604 patients, all 45 or older, and all with preexisting cardiovascular disease. They were also overweight, with a body-mass index of 27 or greater. They may have been pre-diabetic, but they were not yet diabetic. The trial lasted a little over three years. First off, the trial used what is called a “composite” endpoint: death from cardiovascular causes, a nonfatal heart attack, or a nonfatal stroke. Math types would automatically tell themselves: trials designed to detect combined events are easier to get significant results out of than trials for individual events; testing for individual events needs bigger and longer trials. For example, in the SELECT trial, while deaths were reduced, they weren’t reduced enough, as we math types would say, to be “statistically significant.” Also noteworthy was that 17% of the participants in the Semaglutide group dropped out of the study, roughly twice as many as in the placebo group, presumably because of the well-known side effects of these drugs.

The results were as follows: there were 8,803 people in the Semaglutide group, and a cardiovascular “event” happened in 569 of them (6.5%). The placebo group had 8,801 people, and 701 of them had an event (8.0%). This means the absolute risk reduction was about 1.5%. And yes, the relative risk reduction was about 20%. But please note: the relative risk reduction was about 13 times the absolute risk reduction!
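You can check all of these figures, including the NNT from the earlier section, directly from the published counts. A sketch (the counts are from the NEJM paper cited above; the trial’s 20% headline comes from a hazard ratio, while the raw event rates give about 19%):

```python
import math

events_sema, n_sema = 569, 8_803       # Semaglutide group
events_plac, n_plac = 701, 8_801       # placebo group

rate_sema = events_sema / n_sema       # about 6.5%
rate_plac = events_plac / n_plac       # about 8.0%

arr = rate_plac - rate_sema                 # absolute risk reduction, ~1.5%
rrr = (rate_plac - rate_sema) / rate_plac   # relative risk reduction, ~19%
nnt = math.ceil(1 / arr)                    # ~67 people treated for 3 years per event prevented

print(f"ARR {arr:.1%}, RRR {rrr:.0%}, NNT {nnt}")
```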

Although I am not a doctor, I think it is wrong to call this a game changer: a 1.5% lowering in risk is small, after all. But yes, it is obviously meaningful, because this was a high-risk group. Equally obviously, many cardiologists are excited because this is a drug that changes the risk for some pretty sick people, but I have to ask: how much of this excitement is due to the anchoring effect of highlighting a 20% relative risk reduction? After all, as the accompanying editorial to the paper in the New England Journal of Medicine, titled “SELECTing Treatments for Cardiovascular Disease — Obesity in the Spotlight,” made clear, we have no way of knowing whether the effect of this drug comes simply from people losing 10% of their body weight. And of course, the connection between weight loss and a lower risk of cardiovascular events is well known: cardiologists have been telling patients like this to try to lose weight basically forever. We simply don’t yet know if Semaglutide has effects on cardiovascular health over and above the weight loss it causes. And we do know that Semaglutide often leads to muscle loss and lower bone density.

But more to the point: while one would certainly like to have these drugs available to these patients, if their price doesn’t come down, you can make a good argument that their cost will literally break Medicare. Why? Well, roughly 35% of Medicare patients are overweight or obese, and roughly 75% of people over 65 have coronary artery disease. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616540/#:~:text=The%20American%20Heart%20Association%20(AHA,age%20of%2080%20%5B3%5D.)

Even if you assume these numbers are independent of each other (which is very unlikely, because overweight people are more likely to have cardiovascular disease), at least 26% of the roughly 31 million Medicare recipients, and likely more, might benefit from these drugs. That gives an eligible population of more than 8 million people. At current list prices, we would be spending 8 million × $1,350 × 12, or about 130 billion dollars a year. Total Medicare spending is about 725 billion a year. So Medicare spending would go up by more than 17% overnight and stay at that level for a long time to come.
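The back-of-the-envelope Medicare arithmetic, as a sketch (the list price and population figures are the rough ones quoted above):

```python
medicare_recipients = 31_000_000
overweight_share = 0.35
cad_share = 0.75                 # coronary artery disease among those over 65

# Assuming (unrealistically) independence; the true overlap is likely larger
eligible = medicare_recipients * overweight_share * cad_share   # ~8.1 million

monthly_list_price = 1_350       # dollars per patient
annual_cost = eligible * monthly_list_price * 12                # ~$132 billion

medicare_budget = 725e9
print(f"~${annual_cost / 1e9:.0f}B/year, a {annual_cost / medicare_budget:.0%} increase")
```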

(Note added: My Slate article has more on the economics of just how much reducing obesity saves, using the NHANES data and data from a seminal PLOS article: https://slate.com/technology/2023/07/ozempic-costs-a-lot-it-doesnt-have-to.html)

Another Slate article, on sensitivity and specificity

https://slate.com/technology/2022/01/rapid-testing-covid-math-false-negatives-sensitivity.html

But they cut my draft down dramatically. I wanted to add a discussion of what is called “positive predictive value” (PPV), i.e., the answer to the question: if you test positive, do you have the disease?

If a disease is relatively rare (alas, not Omicron), even a positive result from a very specific test can be very misleading and can unfortunately confuse your doctor (https://www.nejm.org/doi/pdf/10.1056/NEJM197811022991808).

Here’s an example of what can go wrong. Suppose you have a test that is 99% specific. If you read my Slate article, you now know this means it only has 1% false positives. That is a pretty good test in the real world. But suppose the disease is also really rare, say only 1 in 1,000 people have it. Then it turns out that a positive test, even though the test is pretty darn good, isn’t telling you as much as you think! Why? Well, suppose you test 1,000 people. Since we are assuming the disease prevalence is only 1 in 1,000, you had only one person with the disease in your group of 1,000. Now, our test is 99% specific, so there are 1% false positives. So among our 1,000 people, you will have about 10 false positives (.01 × 1,000 = 10, although technically it is .01 × 999 = 9.99). The 10 false positives dwarf the 1 true positive, and the odds are about 10 to 1 that you don’t have the disease.
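The same arithmetic as a short Python sketch (assuming, as the example implicitly does, that the test catches the one true case, i.e. perfect sensitivity):

```python
population = 1_000
prevalence = 1 / 1_000          # 1 person in 1,000 has the disease
specificity = 0.99              # so 1% false positives among the healthy

sick = population * prevalence                  # 1 true case
healthy = population - sick                     # 999 disease-free people
false_positives = healthy * (1 - specificity)   # 9.99, call it 10

# Positive predictive value: of all positives, how many are real?
ppv = sick / (sick + false_positives)
print(f"PPV: {ppv:.1%}")        # about 9% -- odds are roughly 10 to 1 you're fine
```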

(A clear treatment of PPV in the context of Covid here: https://www.someweekendreading.blog/weekend-editrix-exposed/)

Does the AstraZeneca Vaccine increase the chance of blood clots?

I was just about to write a blog post about how insane this claim is (if anything, the numbers show that the AstraZeneca vaccine lowers the risk of blood clots), but the great Weekend Editor beat me to it. So even if you have to skip the great statistical stuff in his blog, please just read his post!

https://www.someweekendreading.blog/azox-vaccine-thrombo/

Please, please let this not be confirmed by more studies

https://jamanetwork.com/journals/jamacardiology/fullarticle/2768916

Yes, I know this isn’t math, but it just confirms that people need to do everything possible to not get this horrible, horrible disease, i.e., (re)read my post https://garycornell.com/2020/04/21/multiplication-of-probabilities-or-what-to-do-when-you-have-to-go-shopping/. (HT to Charlie Stross.)