
“Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: ‘There are three kinds of lies: lies, damned lies, and statistics.'”

– Mark Twain
This week began with a bang on Monday morning, when a new study on Bicyclist Safety, conducted for the Governors Highway Safety Association, was released. A reporter from Iowa called to get Bob Mionske’s take on the study, and we hadn’t even seen it yet. That’s how it goes sometimes.

So we began skimming through the study, and as we did, I was reminded of Mark Twain’s observation about “lies, damned lies, and statistics.” It was immediately apparent that the study was flawed. Deeply flawed. Well, it was at least apparent to us. But would it be apparent to everybody? Would it be apparent to the media? Or to the Governors of the various states?

It doesn’t seem likely. More likely, most people, including the media and influential people on Governors’ staffs across the country, would read the summary and accept the statistical analysis as “fact.” And because the statistical analysis is so thoroughly misleading, so completely wrong, the study has the potential to lead to deeply flawed public policy prescriptions.

So what exactly is wrong with this study? Read on….

“Cyclist Deaths are Rising”

One of the main findings of this study is that “the number of bicyclists killed on U.S. roadways is trending upward.” The study notes that “between 2010 and 2012, U.S. bicyclist deaths increased by 16 percent.”

So is it true?

Well, it’s true that there were more cyclist fatalities in 2012 than in 2010. But cyclist fatalities are actually trending downward. So how can a study get that basic fact so completely backwards?

Through the misleading use of language and statistics.

The study observes that more cyclists died in 2012 than in 2010. So of course, the number of cyclists killed is “trending up.” But those are raw numbers, and raw numbers mean nothing. What we really need to know to make sense of that raw number are the answers to questions like:

• How many cyclists were riding in 2010?
• How many cyclists were riding in 2012?

And if we want to get an even more accurate idea of what is happening, we need to ask:

• How many cyclist fatalities were there per vehicle miles traveled in 2010?
• How many cyclist fatalities were there per vehicle miles traveled in 2012?

And of course, we also need to look at cyclist fatalities over the long term, not just a two-year snapshot.

The answers to these questions would let us know whether the raw numbers of cyclist fatalities indicate that the rate of cyclist fatalities is rising, falling, or remaining stable.
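
To make the distinction between raw counts and rates concrete, here is a minimal sketch in Python. The numbers are purely illustrative (the 16 percent rise matches the study’s figure; the ridership figures are hypothetical), but they show how the count of deaths can go up while the rate of deaths goes down:

    # Purely illustrative numbers -- not from the study or from FARS data --
    # chosen to show how a raw count can rise while the underlying rate falls.
    fatalities_2010 = 100          # hypothetical cyclist deaths in 2010
    fatalities_2012 = 116          # a 16% rise in the raw count (the study's figure)
    riders_2010 = 1_000_000        # hypothetical number of people riding in 2010
    riders_2012 = 1_400_000        # hypothetical 40% growth in ridership

    rate_2010 = fatalities_2010 / riders_2010   # deaths per rider
    rate_2012 = fatalities_2012 / riders_2012

    print(f"2010 rate: {rate_2010:.6f} deaths per rider")
    print(f"2012 rate: {rate_2012:.6f} deaths per rider")
    print(f"Change in rate: {(rate_2012 / rate_2010 - 1) * 100:+.1f}%")
    # The raw count rose 16%, but the per-rider rate fell about 17%,
    # because exposure grew faster than the death count.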

But these questions are not asked. The failure of the study’s author—former Insurance Institute for Highway Safety Chief Scientist Dr. Allan Williams—to do even basic statistical analysis suggests that the author is deliberately misleading readers of his study. But regardless of whether the study is deliberately misleading or just incompetent science, the methodology and analysis are so deeply flawed that the study is, at its most fundamental level, completely and utterly wrong.

What are the facts?

Cycling is increasing yearly. There are more cyclists riding now than in the 1970s. How many more? Forbes reports that “Between 2000 and 2010, the number of bicycle commuters grew 40 percent nationwide, and was even greater — 77 percent — in some cities.” Streetsblog notes that in 1977, at the tail end of the 1970s bicycle boom, there were 1.25 billion bicycle trips per year. By 2009, that number had soared to nearly 4 billion bicycle trips per year—and the number continues to rise.

And there are even more people riding today than in 2009.

When you factor in the massive increase in the number of people riding, an increase in the total number of people killed is not unusual. More to the point, even though there has been an increase in the number killed, the actual rate of cyclist fatalities has been falling dramatically. And this is in line with the “safety in numbers” hypothesis, which argues that as more people take up cycling, the rate of injuries and fatalities in car-on-bike crashes will fall, as drivers become accustomed to seeing cyclists and adjust their behavior accordingly.

The time frame studied—from 2010 to 2012—obscures what is really happening on our roads. Had the study looked at a different time frame—say, the years from 1975 until 2012—we would be looking at a dramatic fall in the annual number of cycling deaths, along with a dramatic rise in the number of people riding bikes. While there would still be a rise in fatalities between the years 2010 and 2012, it would be a small blip in the overall downward trend. And again, in a proper study, that blip would be seen in the context of the massive increase in ridership.

In fact, from 1975 until 2012, cyclist fatalities have remained steady at 2% of all motor-vehicle related deaths, even as cyclist numbers have skyrocketed. But even that 2% number doesn’t tell us the whole story. Thankfully, Michael Andersen over at BikePortland got out his calculator, crunched a few numbers, and came up with the number we all need to know: between 1977 and 2009, the adult bike fatality rate per trip declined by 43%.

In plain English, when you do the science correctly, there has actually been an enormous downward trend in cyclist fatalities.
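
As a rough check on that arithmetic, here is a minimal sketch using only the trip totals cited above. It shows why a per-trip rate can fall sharply even without any drop in the raw death count; the 43 percent figure is Andersen’s own, computed from actual fatality data that this sketch does not reproduce:

    # Trip totals cited above (Streetsblog). No fatality data is used here;
    # this only shows how growth in the denominator drives the per-trip rate down.
    trips_1977 = 1.25e9      # bicycle trips per year, 1977
    trips_2009 = 4.0e9       # bicycle trips per year, 2009

    growth = trips_2009 / trips_1977
    print(f"Trips grew by a factor of {growth:.1f}")
    # Even with a completely flat death count, the per-trip rate (deaths / trips)
    # would fall by 1 - 1/growth:
    print(f"Rate decline with a flat death count: {(1 - 1 / growth) * 100:.0f}%")
    # Andersen's 43% decline for adult riders is smaller than this hypothetical
    # figure, but the direction is the same: more riding, lower risk per trip.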

Factors Contributing to Cyclist Fatalities

After noting that the number of fatalities has risen, the Bicyclist Safety study takes a look at two factors in cyclist fatalities. The discussion of these two factors has one thing in common—they both point the finger of blame for cyclist fatalities squarely at cyclists. The first of these factors is helmets, or more accurately, lack of helmets.

Helmets

The study notes that almost two-thirds of cyclist fatalities involved cyclists who were not helmeted. Obviously, we might draw the conclusion that these deaths were the result of head injuries that would have been prevented had the cyclists been helmeted. And in fact, the study concludes that “The lack of universal helmet use laws for bicyclists is a serious impediment to reducing deaths and injuries, resulting from both collisions with motor vehicles and in falls from bicycles not involving motor vehicles.”

But hold on. There is nothing—not one shred of evidence—in this study to support the conclusion that helmets would have prevented these deaths. Look, without getting into a debate about helmets (full disclosure: I always wear a helmet when I ride), it’s just plain misleading to say that two-thirds of all fatalities involved helmetless cyclists, without also discussing whether these cyclists suffered fatal head injuries. That’s just basic science. Suppose, for example, that most of these fatally injured, helmetless cyclists did not suffer fatal head injuries. Would that fact still support the study’s policy prescription for “mandatory helmet laws”? Of course not. And yet that is exactly what the study is asking us to do—to believe, without any evidence, that these fatally-injured, helmetless cyclists all suffered fatal head injuries, and that mandatory helmet laws would have prevented these deaths.

Now let’s take it a step further. Suppose that the majority of these helmetless cyclists did suffer fatal head injuries. Would a helmet have saved their lives? The answer to that depends upon the specific facts of each individual crash. Helmets are not “magic styrofoam hats.” They are only rated to prevent head injuries in crashes involving low-speed impacts (specifically, impacts below 14 MPH). Above 14 MPH, the impact can be expected to cause some degree of injury, and at high-speed impacts, no helmet can be expected to save a life. A cyclist who is hit from behind and suffers a head impact at 65 MPH is not likely to survive, regardless of whether or not the cyclist was wearing a helmet.
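
To see why that 14 MPH rating matters, here is a back-of-the-envelope comparison (purely illustrative, not from the study): impact energy grows with the square of speed, so the gap between the rated speed and a highway-speed collision is far larger than the raw speed difference suggests.

    # Kinetic energy scales with the square of speed (KE = 1/2 * m * v^2),
    # which is why a helmet rated for ~14 MPH impacts offers little protection
    # at highway speeds. Illustrative comparison only.
    rated_speed = 14.0    # MPH, roughly the impact speed helmet standards test at
    crash_speed = 65.0    # MPH, the hit-from-behind example above

    energy_ratio = (crash_speed / rated_speed) ** 2
    print(f"A {crash_speed:.0f} MPH impact carries roughly {energy_ratio:.0f}x "
          f"the energy of a {rated_speed:.0f} MPH impact")
    # ~22x the energy the helmet is designed to absorb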

So when discussing helmets in relation to cyclist fatalities, it’s not enough to observe whether the cyclist was helmeted or not. We also need to know whether the cyclist suffered a head injury, and whether the impact speed was low enough for a helmet to make a difference. Again, that’s just basic science.

Bicycling Under the Influence

The study then shifts to a discussion of bicycling under the influence. According to the study, 25% of all bicyclist deaths from 2007 to 2011 involved a cyclist with a BAC of .08 or greater.

And the percentage of fatally injured drivers with a BAC of .08 or greater? Also 25%. The numbers are exactly the same.

The study’s policy prescription to reduce impaired cyclist fatalities is to enact “measures to reduce alcohol-impaired vehicle operation by bicyclists and motorists.”

Look, reducing cyclist fatalities is a good thing. But if we’re going to talk about impaired bicycling, we need to understand what we are looking at.

First, some percentage of impaired bicyclists were, previously, impaired drivers, until they lost their licenses. With this demographic, we’re talking about people with a substance-abuse problem, not “cyclists,” and that is what needs to be addressed if we want to reduce “impaired cyclist” fatalities amongst this demographic.

Another percentage of impaired cyclists have deliberately chosen to ride a bicycle as their means of transportation after drinking. The reason they choose a bicycle should be obvious—to avoid driving a motor vehicle while impaired. And in fact, some states, including South Dakota and Washington, have changed their DUI laws to accommodate these cyclists, because these states would rather have over-the-limit people riding bikes, instead of driving cars. Implicit in this public policy position is the understanding that impaired cyclists are likely to only harm themselves, while impaired drivers are much more likely to harm others.

Thus, public policy prescriptions that treat BUI (Bicycling Under the Influence) exactly the same as DUI fail to recognize exactly what harm enhanced DUI penalties have been enacted to prevent. So while we might wish to reduce impaired cyclist fatalities, it would be unwise to do so through policy prescriptions that fail to understand why impaired cycling is not the same as impaired driving.

Negligent and Reckless Driving

The leading cause of cyclist fatalities in 2012 was drivers who failed to yield. But you won’t find that fact anywhere in the Bicyclist Safety study, because negligent and reckless drivers are this study’s elephant in the room. Instead, the study keeps its sights strictly focused on blaming cyclists for their own fatalities.

And because cyclists are blamed for their own fatalities, the policy prescriptions tend to focus on cyclist behavior (although, to be fair, the study’s policy prescriptions don’t focus solely on changing cyclist behavior).

A National Problem?

Finally, the study makes the argument that the rise in cyclist fatalities is mostly limited to a few states, and therefore, is not really a national problem.

In fact, following the release of the report, the GHSA doubled down on this theme, tweeting that “Bike fatalities are a growing problem for specific groups, but not an issue universally.”

Get it? Your state may not have to do anything to address cyclist safety, because hey, it’s not a problem in your state. And even if it is a problem in your state, it is “not an issue universally.”

Conclusion

The Bicyclist Safety study is a prime example of junk science. But this isn’t just some theoretical academic debate about research methodology and analysis. This study is already having real-world impacts, with the potential for a chilling effect on bicycling, in four ways: first, by placing false information in the hands of government officials and the media; second, by potentially scaring off would-be cyclists; third, by recommending policy prescriptions that fail to address the real issues in bicycle safety; and fourth, by recommending policy prescriptions that place the blame for cyclist fatalities on cyclists and have been demonstrated to have a chilling effect on bicycling. Already, news stories about individual cyclist fatalities are referencing the study, as if to say that the study is right, cycling is dangerous.

But the study is not right. (One wag who does understand statistics noted that if the fatality rate doesn’t change, you’re likely to be killed by your 10,000th birthday. “Scary!” he wryly observed.) Its methodology and its analysis are both faulty, and its conclusions cannot withstand even the most rudimentary of tests—which means that the study fails one of the most fundamental requirements of science.

And that needs to be said, in the media, and in the halls of government, every time this study rears its misshapen head.

 


4 Comments

  • Mark D Friis says:

    I feel this is a ploy by the auto insurance industry and car companies to get a mandatory helmet law. The one thing they emphasize is the helmet data, which is misleading. It is a red herring to do nothing but discourage biking. They are looking at what happened to Australia. And if we look at the data they are spouting, it actually doesn’t prove that helmets are saving lives. If 2/3 of victims are not wearing one and we only have roughly 60% of riders not using them, then it’s a wash statistically. It’s total BS!

  • Khal Spencer says:

    I wonder if one could even get a statistically significant finding out of three years of data. If one could, though, it really would be nice to know what would account for a 16% increase in fatals, whether the spike was statistically related to commuting in those high growth cities, for example, or related to deaths from distracted drivers, or something tangible.

  • I read through the report and I don’t feel the report is quite as one-sided as you feel it is. The overall tone of the report appeared to me to be one of dealing with the practical difficulties of reintegrating pedestrians and bicycles onto the roadways and focused on what bicyclists and their advocates can do to help make that happen safely.

    There was quite a bit of discussion about the need for driver education about sharing the road. The report also includes recommendations for on-street changes and advocacy for Complete Streets policies and mixed land use development favoring pedestrian and bicycle use.

    While the report itself did not include a new study about the efficacy of helmet use, Dr. Williams did reference studies of helmet laws, particularly in Australia, that may have the study controls you indicated were needed.

    The emphasis on impaired riding seemed a bit out there to me for all the reasons you mentioned. The discussion did, however, also include recommendations for dealing with impaired driving.

    I do feel Dr. Williams could have been more explicit about the relationship between increased ridership and increased fatalities. The report acknowledges that increases in bicycle use have occurred, but makes a comment that there is not a reliable measure of that increase. There are legitimate problems with measuring the increase, but there have been many efforts to improve those measures in recent years and it may be useful to provide that information to the policy makers who will almost certainly reference this report!

    One final comment on the statistics: it has been my reading of current safety initiatives related to MAP-21 that the actual number of fatalities, and not simply the rate of fatalities, is one of the performance measures being advocated for safety in both vehicular and non-vehicular programs. There has been pushback to include the rate as the primary measure, but my point is that this particular statistical discussion is not simply an issue with non-vehicular data. By the way, I favor rates!

    It would be my recommendation to bicycle advocates to prepare an alternative executive summary report that could be used when discussing the report with policy makers. This alternative report could emphasize some of the report’s positive recommendations, while placing the helmet and impaired rider issues in their appropriate context.

  • Joe says:

    Rick, the Governors report seems like a fairly well-balanced, objective analysis of bike crashes. It acknowledges that increased exposure, i.e. increased ridership, may contribute to the increase in fatalities. It also (correctly) notes that exposure data is not adequate to draw specific conclusions. And no, your source, Forbes magazine, does not constitute a reliable data source for researchers.

    Your choice of fatality rate as a measure of safety instead of the raw number of fatalities is flawed as well. Thank goodness you’re not a lifeguard on the beach. Your solution to a shark attack would be to have everyone run into the water so that the number of attacks/swimmer would go down — safety in numbers right?

    There’s also an inherent assumption when using crash rates that crashes and exposure vary linearly despite absolutely no empirical evidence to support the claim. In fact, crash rates of all sorts tend to decrease as exposure increases. This is not novel and it is not indicative of improved safety. Would it bother you to know that the crash rate also goes down as motor vehicle volumes increase? “We need to get more people driving in order to improve bicyclist safety” said no one EVER. Yet, the math works out the same way. The bicycle crash rate goes down as the number of motor vehicles increase. Do you think putting more drivers on the road is a good thing for cyclist safety? I’d hope not. But that’s the conclusion you can draw using crash rates.

    There is undoubtedly a relationship between the number of fatal crashes and the number of riders (not to mention number of vehicles) on the road but we don’t know what that relationship is or what it should be. Without knowledge of what it should be the actual crash rate is meaningless. Believe it or not, it’s possible for a crash rate to go down as things get less safe.

    Absent any knowledge about what the rate is vs. what we’d expect it to be, the crash frequency is a better measure of safety. After all, in the end, shouldn’t our goal be to reduce the number of fatalities despite increased ridership? That’s what’s happening with motor vehicles: motor vehicle fatalities have been trending down for years despite ever-increasing vehicle-miles traveled. So it can be done.

    You state: “Had the study looked at a different time frame—say, the years from 1975 until 2012—we would be looking at a dramatic fall in the annual number of cycling deaths”. 1) I think that’s an overstatement. I was able to find data back to 1983 (at NHTSA). The overall trend in fatalities from 1983 to 2012 is slightly downward (839 fatalities in 1983 vs. 726 in 2012, about 13% overall, or about 0.5% per year), far from a “dramatic fall”. 2) You have to dig into the data a little bit further. In the same time period, crashes involving children under 16 have gone from 445 in 1983 to 65 in 2012. That really is a dramatic fall! That’s an 85% reduction, or 6.4% per year.

    For riders over 16 the news is not so good. The crash trend for them is upward (394 crashes in 1983 compared to 661 in 2012, a 68% increase). So really, for the group we’re discussing, the long-term trend is no more encouraging than the short-term trend described in the Governors report.

    The bottom line is that we need to improve bike safety and we need to focus on bringing the number of crashes down. It’s been done for young bicyclists; it’s been done for motorists. It can be done for adult riders as well. However, pretending there’s not an issue because it doesn’t fit the meme that bicycling is inherently safe is not helpful to the cause.