What journalists should know about the atomic bombings

Americans in particular use the atomic bombings as shorthand for thinking about vitally important present-day issues: whether the ends justify the means, who the appropriate targets of war are, and the use of force in general. Unfortunately, much of what Americans think they know about the atomic bombs is dramatically out of alignment with how historians understand them, and this shapes their views on these present-day issues as well.

One of the reasons I enjoy reading history is learning about all the different ways my classic K-12 education was skewed, biased, simplified, or even dead wrong. It’s better to know the truth than be comforted.

In that vein, I highly recommend reading What journalists should know about the atomic bombings. It’s straightforward, does not contain appeals to emotion, and takes a historical perspective on what actually happened and why.

Extrapolating from one number

I was reading Scott Alexander’s excellent post this morning about why different people got the early coronavirus predictions right and wrong, and one of the things he mentions:

The coronavirus killed fewer people than the flu did in January. But it might kill more in February — and unlike the flu, its scope and effects are poorly understood and hard to guess at. The Chinese National Health Commission reports 24,324 cases, including 3,887 new ones today. There are some indications that these numbers understate the situation, as overwhelmed hospitals in Wuhan only have the resources to test the most severe cases. As of Tuesday, 171,329 people are under medical observation because they’ve had close contact with a confirmed case.

I, too, heard this from lots of people. “It’s no worse than the flu.” And, indeed, if all you went by was a single number (number of cases, or number of deaths) and compared them to the flu, you’d have been correct.

Obviously, there is a problem with this type of logic. It doesn’t take into account context, timing, what we know about each virus, and much more. And yet, the number of people who analogized the coronavirus to the flu was vast, both in the general population and in the mainstream media.

Making a judgement by extrapolating from this single number was wrong.

Bootcamp Placement Rates

For almost 6 years, I had a front-row seat to the inception and proliferation of coding bootcamps, both in person and online. Being the first online bootcamp, we saw ourselves as unencumbered by the constraints of the classroom, taking lots of inspiration from Khan Academy’s “flipped classroom” approach. We saw the classroom as an antiquated, byzantine model created before the industrial revolution, without real innovation in 200 years.

As such, we didn’t limit our enrollment. Anyone could enroll and, provided they put in the time and effort, could get a job as a developer. There were lots of consequences to this, including a dropout rate that we tracked very closely, learned a great deal from, and used to iterate on our entire program structure.

As the bootcamp industry exploded, prospective students needed a way to comparison-shop. We found ourselves compared to brick-and-mortar bootcamps quite often – students were trying to decide if they should quit their jobs to pursue their new education or if they could do so on nights and weekends. Many other factors were at play.

We needed a clear, straightforward way to show prospective students why we felt our program was superior. We did this primarily via success stories: testimonials from students who had successfully completed the program and gone on to achieve great things. We even added it to a set of core tenets that we used to align the efforts of everyone across the company: Nothing says Bloc works like a long list of “people like me” now working as software developers and designers.

Then, we began getting lots of questions from prospective students about our placement rate.

This is a calculation advertised by brick-and-mortar bootcamps: the number of students who get jobs divided by the number who are accepted into the program. Simple, right?

The problem is that, being an online program, we accepted everybody. Our numbers were at least an order of magnitude larger than any other single bootcamp’s, but the metric students were using to comparison shop was designed for programs with a fundamental limit on how many students could enroll. It was like trying to gauge how effective Amazon is at stocking goods on their shelves compared to Walmart. It doesn’t make sense.

Furthermore, the metric intrinsically skewed upward for the brick-and-mortar schools: selective admissions shrink the denominator, and a smaller denominator pushes the rate up. So not only did we have a market-standard metric being used for comparison shopping, it was a metric fundamentally at odds with not just the business we were running, but also the reason we had started the business in the first place.
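The denominator effect is easy to make concrete with a small sketch. The two programs and all the numbers below are invented for illustration; they are not Bloc’s actual figures or any real school’s:

```python
def placement_rate(placed, accepted):
    """Placement rate as advertised: students placed / students accepted."""
    return placed / accepted

# A hypothetical selective brick-and-mortar program:
# admits few students and places most of them.
selective = placement_rate(placed=45, accepted=50)

# A hypothetical open-enrollment online program:
# admits everyone, places far more people in absolute
# terms, yet reports a much lower rate.
open_enrollment = placement_rate(placed=400, accepted=1000)

print(f"selective:       {selective:.0%}")        # 90%
print(f"open enrollment: {open_enrollment:.0%}")  # 40%
```

The open program here graduates nearly nine times as many working developers, but a student comparing the two headline percentages would conclude the opposite.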

We never did find a way to adequately articulate all of this context to prospective students. It was just too much. Or maybe we just needed to be better at marketing – but I don’t think so.

I think the problem is that most people extrapolate from a single number. All other things being equal, the simpler decision-making method, the one that resonates with a customer’s intuition, will win.

It’s the same thing with the coronavirus vs. the flu. A single number is rarely a good basis for a judgement.

Currently this is just a hypothesis of mine, because I only have two data points. But, I am now on the lookout for other data points – where else do people commonly make judgements based on a single number that are almost certainly bad?

Wednesday Morning Links

Some interesting links for your Wednesday: