“Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: ‘There are three kinds of lies: lies, damned lies, and statistics.’”
– Mark Twain
Monday morning began this week with a bang, as a new study on Bicyclist Safety, conducted for the Governors Highway Safety Association, was released. A reporter from Iowa called to get Bob Mionske’s take on the study before we had even seen it. That’s how it goes sometimes.
So we began skimming through the study, and as we did, I was reminded of Mark Twain’s observation about “lies, damned lies, and statistics.” It was immediately apparent that the study was flawed. Deeply flawed. Well, it was at least apparent to us. But would it be apparent to everybody? Would it be apparent to the media? Or to the Governors of the various states?
It doesn’t seem likely. More likely, most people, including the media and influential people on governors’ staffs across the country, would read the summary and accept the statistical analysis as “fact.” And because the statistical analysis is so thoroughly misleading, so completely wrong, the study has the potential to lead to deeply flawed public policy prescriptions.
So what exactly is wrong with this study? Read on….
“Cyclist Deaths are Rising”
One of the main findings of this study is that “the number of bicyclists killed on U.S. roadways is trending upward.” The study notes that “between 2010 and 2012, U.S. bicyclist deaths increased by 16 percent.”
So is it true?
Well, it’s true that there were more cyclist fatalities in 2012 than in 2010. But cyclist fatalities are actually trending downward. So how can a study get that basic fact so completely backwards?
Through the misleading use of language and statistics.
The study observes that more cyclists died in 2012 than in 2010. So of course, the number of cyclists killed is “trending up.” But those are raw numbers, and raw numbers mean nothing. What we really need to know to make sense of that raw number are the answers to questions like:
• How many cyclists were riding in 2010?
• How many cyclists were riding in 2012?
And if we want to get an even more accurate idea of what is happening, we need to ask:
• How many cyclist fatalities were there per vehicle mile traveled in 2010?
• How many cyclist fatalities were there per vehicle mile traveled in 2012?
And of course, we also need to look at cyclist fatalities over the long term, not just a two-year snapshot.
The answers to these questions would let us know whether the raw numbers of cyclist fatalities indicate that the rate of cyclist fatalities is rising, falling, or remaining stable.
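To see why exposure matters, here is a minimal sketch of the arithmetic in Python, using made-up numbers chosen purely for illustration (they are not figures from the study): a raw death count can rise 16 percent even while the risk to each individual cyclist falls.

```python
# A minimal sketch of why raw counts mislead: normalize deaths by
# exposure (riders, trips, or miles ridden). All numbers below are
# hypothetical, chosen only to illustrate the arithmetic.

def fatality_rate(deaths, exposure):
    """Deaths per unit of exposure."""
    return deaths / exposure

# Hypothetical: deaths rise 16% while ridership rises 40%.
deaths_2010, riders_2010 = 100, 1_000_000
deaths_2012, riders_2012 = 116, 1_400_000

rate_2010 = fatality_rate(deaths_2010, riders_2010)
rate_2012 = fatality_rate(deaths_2012, riders_2012)

print(f"2010: {rate_2010 * 10_000:.2f} deaths per 10,000 riders")  # 1.00
print(f"2012: {rate_2012 * 10_000:.2f} deaths per 10,000 riders")  # 0.83
# The raw count "trends up" 16%, yet per-rider risk fell about 17%.
```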
But the study asks none of these questions, and the failure of the study’s author—former Insurance Institute for Highway Safety Chief Scientist Dr. Allan Williams—to perform even this basic statistical analysis suggests that he is deliberately misleading the readers of his study. But regardless of whether the study is deliberately misleading or just incompetent science, its methodology and analysis are so deeply flawed that the study is, at its most fundamental level, completely and utterly wrong.
What are the facts?
Cycling is increasing yearly. There are more cyclists riding now than in the 1970s. How many more? Forbes reports that “Between 2000 and 2010, the number of bicycle commuters grew 40 percent nationwide, and was even greater — 77 percent — in some cities.” Streetsblog notes that in 1977, at the tail end of the 1970s bicycle boom, there were 1.25 billion bicycle trips per year. By 2009, that number had soared to nearly 4 billion bicycle trips per year—and the number continues to rise.
And there are even more people riding today than in 2009.
When you factor in the massive increase in the number of people riding, an increase in the total number of people killed is not unusual. More to the point, even though there has been an increase in the number killed, the actual rate of cyclist fatalities has been falling dramatically. And this is in line with the “safety in numbers” hypothesis, which argues that as more people take up cycling, the rate of injuries and fatalities in car-on-bike crashes will fall, as drivers become accustomed to seeing cyclists and adjust their behavior accordingly.
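To see how the mathematics of this works, consider the sketch below. It assumes the sublinear growth sometimes cited in the safety-in-numbers literature (injuries rising with roughly the 0.4 power of cycling volume, per Jacobsen’s 2003 study); that exponent is an illustrative assumption on my part, not a figure from the GHSA report.

```python
# Toy illustration of "safety in numbers," assuming injuries grow with
# roughly the 0.4 power of cycling volume (an assumed model from the
# safety-in-numbers literature, not a figure from the GHSA study).

def total_injuries(volume, baseline=1.0, exponent=0.4):
    """Total injuries as a sublinear function of cycling volume."""
    return baseline * volume ** exponent

for volume in (1, 2, 4):  # ridership doubles, then doubles again
    injuries = total_injuries(volume)
    per_cyclist = injuries / volume
    print(f"volume x{volume}: injuries x{injuries:.2f}, "
          f"risk per cyclist x{per_cyclist:.2f}")
# At 4x the riders, total injuries rise ~1.74x, but each cyclist's
# individual risk falls to ~0.44x of what it was.
```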
The time frame studied—from 2010 to 2012—obscures what is really happening on our roads. Had the study looked at a different time frame—say, the years from 1975 until 2012—we would be looking at a dramatic fall in the annual number of cycling deaths, along with a dramatic rise in the number of people riding bikes. While there would still be a rise in fatalities between the years 2010 and 2012, it would be a small blip in the overall downward trend. And again, in a proper study, that blip would be seen in the context of the massive increase in ridership.
In fact, from 1975 until 2012, cyclist fatalities have remained steady at 2% of all motor vehicle-related deaths, even as cyclist numbers have skyrocketed. But even that 2% number doesn’t tell us the whole story. Thankfully, Michael Andersen over at BikePortland got out his calculator, crunched a few numbers, and came up with the numbers we all need to know: between 1977 and 2009, the adult bike fatality rate per trip declined by 43%.
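It is worth pausing on that arithmetic, because it cuts directly against the study’s headline. Treating the trip figures above and BikePortland’s per-trip figure as if they described the same population (a simplification on my part, since the 43% figure is for adult cyclists), the raw death count could rise substantially while per-trip risk still falls:

```python
# Arithmetic behind a "rate per trip," using the trip figures cited
# above. Death counts are expressed as a ratio, since only the
# relative change matters here.

trips_1977 = 1.25e9   # bicycle trips in 1977 (Streetsblog)
trips_2009 = 4.0e9    # bicycle trips in 2009 (Streetsblog, approximate)

rate_decline = 0.43   # BikePortland: per-trip fatality rate fell 43%

# Since rate = deaths / trips:
#   deaths_2009 / deaths_1977 = (1 - rate_decline) * (trips_2009 / trips_1977)
death_ratio = (1 - rate_decline) * (trips_2009 / trips_1977)

print(f"Implied change in raw deaths: x{death_ratio:.2f}")
# x1.82: raw deaths could rise over 80% while per-trip risk falls 43%.
```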
In plain English, when you do the science correctly, there has actually been an enormous downward trend in cyclist fatalities.
Factors Contributing to Cyclist Fatalities
After noting that the number of fatalities has risen, the Bicyclist Safety study takes a look at two factors in cyclist fatalities. These two factors have one thing in common—both point the finger of blame for cyclist fatalities squarely at cyclists. The first of these factors is helmets, or more accurately, the lack of helmets.
Helmets
The study notes that almost two-thirds of cyclist fatalities involved cyclists who were not helmeted. Obviously, we might draw the conclusion that these deaths were the result of head injuries that would have been prevented had the cyclists been helmeted. And in fact, the study concludes that “The lack of universal helmet use laws for bicyclists is a serious impediment to reducing deaths and injuries, resulting from both collisions with motor vehicles and in falls from bicycles not involving motor vehicles.”
But hold on. There is nothing—not one shred of evidence—in this study to support the conclusion that helmets would have prevented these deaths. Look, without getting into a debate about helmets (full disclosure: I always wear a helmet when I ride), it’s just plain misleading to say that two-thirds of all fatalities involved helmetless cyclists without also discussing whether these cyclists suffered fatal head injuries. That’s just basic science. Suppose, for example, that most of these fatally injured, helmetless cyclists did not suffer fatal head injuries. Would that fact still support the study’s policy prescription for “mandatory helmet laws”? Of course not. And yet that is exactly what the study is asking us to do—to believe, without any evidence, that these fatally injured, helmetless cyclists all suffered fatal head injuries, and that mandatory helmet laws would have prevented these deaths.
Now let’s take it a step further. Suppose that the majority of these helmetless cyclists did suffer fatal head injuries. Would a helmet have saved their lives? The answer depends upon the specific facts of each individual crash. Helmets are not “magic styrofoam hats.” They are only rated to prevent head injuries in low-speed impacts (specifically, impacts below 14 MPH). Above 14 MPH, the impact can be expected to cause some degree of injury, and in a high-speed impact, no helmet can be expected to save a life. A cyclist who is hit from behind and suffers a head impact at 65 MPH is not likely to survive, regardless of whether the cyclist was wearing a helmet.
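For readers who want to check that 14 MPH figure against the physics: helmet certification tests drop a helmeted headform from a fixed height onto an anvil, and the impact speed follows from the drop height. The quick calculation below assumes a drop height of about 2 meters, my approximation of the flat-anvil test in common standards (consult the actual standard, such as CPSC 16 CFR Part 1203, for the precise conditions):

```python
import math

# Back-of-the-envelope check of the ~14 MPH rating, assuming a drop
# height of about 2 meters (an approximation of common certification
# drop tests, not a figure from the GHSA study).
g = 9.81             # gravitational acceleration, m/s^2
drop_height = 2.0    # meters (assumed)

impact_speed = math.sqrt(2 * g * drop_height)  # free-fall impact, m/s
mph = impact_speed * 2.23694                   # convert m/s to MPH

print(f"Impact speed: {impact_speed:.1f} m/s, about {mph:.0f} MPH")
# ~6.3 m/s, or roughly 14 MPH, consistent with the rating cited above.
```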
So when discussing helmets in relation to cyclist fatalities, it’s not enough to observe whether the cyclist was helmeted or not. We also need to know whether the cyclist suffered a head injury, and whether the impact speed was low enough for a helmet to make a difference. Again, that’s just basic science.
Bicycling Under the Influence
The study then shifts to a discussion of bicycling under the influence. According to the study, 25% of all bicyclist deaths from 2007 to 2011 involved a cyclist with a BAC of .08 or greater.
And the percentage of fatally injured drivers who had a BAC of .08 or greater? Also 25%. The numbers are exactly the same.
The study’s policy prescription to reduce impaired cyclist fatalities is to enact “measures to reduce alcohol-impaired vehicle operation by bicyclists and motorists.”
Look, reducing cyclist fatalities is a good thing. But if we’re going to talk about impaired bicycling, we need to understand what we are looking at.
First, some percentage of impaired bicyclists were previously impaired drivers, until they lost their licenses. With this demographic, we’re talking about people with a substance-abuse problem, not “cyclists,” and it is the substance abuse that needs to be addressed if we want to reduce “impaired cyclist” fatalities within this group.
Another percentage of impaired cyclists have deliberately chosen to ride a bicycle as their means of transportation after drinking. The reason they choose a bicycle should be obvious—to avoid driving a motor vehicle while impaired. And in fact, some states, including South Dakota and Washington, have changed their DUI laws to accommodate these cyclists, because these states would rather have over-the-limit people riding bikes than driving cars. Implicit in this public policy position is the understanding that impaired cyclists are likely to harm only themselves, while impaired drivers are much more likely to harm others.
Thus, public policy prescriptions that treat BUI (Bicycling Under the Influence) exactly the same as DUI fail to recognize exactly what harm enhanced DUI penalties have been enacted to prevent. So while we might wish to reduce impaired cyclist fatalities, it would be unwise to do so through policy prescriptions that fail to understand why impaired cycling is not the same as impaired driving.
Negligent and Reckless Driving
The leading cause of cyclist fatalities in 2012 was drivers who failed to yield. But you won’t find that fact anywhere in the Bicyclist Safety study, because negligent and reckless drivers are this study’s elephant in the room. Instead, the study keeps its sights strictly focused on blaming cyclists for their own fatalities.
And because cyclists are blamed for their own fatalities, the policy prescriptions tend to focus on cyclist behavior (although, to be fair, the study’s policy prescriptions don’t focus solely on changing cyclist behavior).
A National Problem?
Finally, the study makes the argument that the rise in cyclist fatalities is mostly limited to a few states, and is therefore not really a national problem.
In fact, following the release of the report, the GHSA doubled down on this theme, tweeting that “Bike fatalities are a growing problem for specific groups, but not an issue universally.”
Get it? Your state may not have to do anything to address cyclist safety, because hey, it’s not a problem in your state. And even if it is a problem in your state, it is “not an issue universally.”
Conclusion
The Bicyclist Safety study is a prime example of junk science. But this isn’t just some theoretical academic debate about research methodology and analysis. This study is already having real-world impacts, with the potential to chill bicycling, in four ways. First, by placing false information in the hands of government officials and the media. Second, by potentially scaring off would-be cyclists. Third, by recommending policy prescriptions that fail to address the real issues in bicycle safety. And fourth, by recommending policy prescriptions that place the blame for cyclist fatalities on cyclists, prescriptions that have been demonstrated to have a chilling effect on bicycling. Already, news stories about individual cyclist fatalities are referencing the study, as if to say that the study is right, that cycling is dangerous.
But the study is not right. (One wag who does understand statistics noted that if the fatality rate doesn’t change, you’re likely to be killed by your 10,000th birthday. “Scary!” he wryly observed.) Its methodology and its analysis are both faulty, and its conclusions cannot withstand even the most rudimentary of tests—which means the study fails one of the most fundamental requirements of science: that conclusions must follow from the evidence.
And that needs to be said, in the media, and in the halls of government, every time this study rears its misshapen head.