Trigger warning: this post will seem cold and calculating
Over the past few months, we’ve been seeing more and more talk about one or several COVID-19 vaccines coming through. There are two opposing, but equally reasonable, points of view regarding when we should introduce a future vaccine to the public.
Point of view number one is that we should do this as fast as possible. COVID-19 has killed hundreds of thousands worldwide, and put enormous strain on the world economy. The sooner we get a vaccine out there, the more lives we save, and the quicker we get back to a normal life. A full phase three test can take two to four years, after all, and the world can’t wait.
Point of view number two is that we should not put a vaccine out there until we know that it is entirely safe. We need to do full phase three tests, and preferably beyond, to ensure that no one will suffer any serious side effects.
People with point of view number one see people with point of view number two as being overly cautious. People with point of view number two see people with point of view number one as reckless (and potentially greedy).
In many ways, this mirrors the discussion of when we should allow autonomous cars on our roads. Some will say that tomorrow isn’t soon enough, others say that they must be 100 % safe first, as in, we have to know, beyond any doubt, that no human will ever be killed or harmed by an autonomous car, ever.
Of course, we’ll never get to a point where we know for sure that no autonomous car will ever kill a human. In fact, even if we get to a point where all cars everywhere are autonomous, there’ll still be accidents, some fatal. And yes, some of those accidents will be caused by the very tech that makes the car autonomous.
OK, so we can never have autonomous cars, then, right? Because they’ll never be 100 % safe. But let me ask you this: are human drivers 100 % safe? Of course not. Not by a long shot.
But let’s try a thought experiment: current road fatalities in the US are about 38,000 a year. Let’s imagine we get to a point where widespread introduction of autonomous cars would mean that we’d reduce that to 10,000, but would then add 9,000 deaths due to system problems. Essentially 9,000 “new” fatalities that would not have occurred if we’d kept the status quo. But the grand total is still that we’ve reduced traffic fatalities by 50 %.
To that, many people will say that that’s a very cold and calculating way of looking at things. I’ve even had some people quote Captain America at me, saying “we don’t trade lives” (even if they did, eventually).
Every year we delay autonomous technology while working those 9,000 tech-related deaths closer to zero, we add roughly 19,000 deaths to the total roster: the difference between the status quo’s 38,000 and the 19,000 we’d see with the technology on the road. And the truth is that we’ll never get to zero. Ever. So a more relevant ethical dilemma could be: how many lives are you willing to sacrifice in order to reach a theoretical state where no one dies? Let’s say that it takes 5 years to reduce those tech-related deaths from 9,000 to 8,000 (the law of diminishing returns dictates that the closer we get to zero, the more work each step of progress will take). Then you’ve sacrificed 95,000 people to save 1,000 a year. It will take 95 years to make up that loss.
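If you want to check that arithmetic yourself, here’s a quick sketch in Python, using only the thought experiment’s own made-up figures (none of these are real-world numbers):

```python
# All figures come from the thought experiment above, not real-world data.
STATUS_QUO_DEATHS = 38_000   # annual US road fatalities today
AUTONOMOUS_DEATHS = 19_000   # 10,000 "ordinary" + 9,000 tech-caused

# Net lives lost for every year we keep autonomous cars off the road
delay_cost_per_year = STATUS_QUO_DEATHS - AUTONOMOUS_DEATHS  # 19,000

# Suppose 5 more years of delay buys a version that kills 1,000 fewer per year
years_of_delay = 5
annual_improvement = 1_000

lives_sacrificed = delay_cost_per_year * years_of_delay      # 95,000
break_even_years = lives_sacrificed / annual_improvement     # 95 years

print(f"Lives sacrificed during the delay: {lives_sacrificed:,}")
print(f"Years for the safer version to make up the loss: {break_even_years:.0f}")
```

Change the hypothetical inputs however you like; the shape of the conclusion survives any plausible set of numbers.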
The discussion reminds me of this video from Penn &amp; Teller on why anti-vaxxers are wrong, even if they’re right.
Vaccine conspiracies: even if you’re right, you’re wrong
If you can’t be bothered watching the video, here’s a recap: even if the numbers anti-vaxxers cite for how many kids have suffered ill effects from vaccines were accurate (those numbers are largely fabricated, by the way), the number of kids saved by the vaccines would still be way higher. So even if anti-vaxxers were right, vaccines would still be a good idea.
Of course we shouldn’t put dangerous products out there, be they autonomous cars or vaccines. But we have to play the numbers, and choose the option that poses the smallest risk to the smallest number of people. We have to remember the cost, in lives, of the status quo.