In Light of New Evidence

October 2013
Ivan Obolensky

Kevin Stewart commutes between his home and his work by car. The daily trip takes forty minutes on a good traffic day and an hour and fifteen on a bad one. He has been making the commute for a year and a half. He rarely exceeds the speed limit. He considers himself a good driver, but on a particular Tuesday he finds himself in a fender bender on his way to work. He is not injured, and neither car requires a tow truck.

This is his first accident. How should he view his driving skills? Is he a good driver who got unlucky, or a bad driver who has been lucky so far?

Probably he will consider himself a good driver who had an unlucky break. This would be in keeping with a 1981 study of drivers’ perceptions of their own driving ability.

The study asked drivers in the US and Sweden to rank how well they drove compared to other drivers. The result was that 93% of the US drivers ranked themselves in the top 50% for skill and 88% in the top half for safety.

It would appear that many drivers have an exaggerated sense of their driving prowess; after all, there can’t be only 7% in the lower 50%.1

This tendency is called illusory superiority.

Illusory superiority is a cognitive bias in which those who are incompetent hold an inflated opinion of their competence.

This bias does not apply to driving alone. As a rule, the majority considers itself better than average.

On the other hand, those who are competent at a task tend to underestimate their ability. The skilled group perhaps finds the task in question easy and assumes that others will find it so as well, and as a result it underrates its own competence relative to the rest of the group.2

Kevin decides to investigate more closely.

One way to cross-check his skill level is to look at the probability of his getting into an accident in the first place. He discovers that the insurance industry estimates that the average person will file a claim once every 17.9 years. Kevin is 34 and has been driving since he was 16: one accident in eighteen years of driving. By this measure he seems to be about average.3

This view of probability is called the frequentist approach, or frequentist probability. It might also be called the standard interpretation of probability.

One looks at a large number of outcomes, such as the number of deaths from a specific cause, and divides that number by the total population to get an estimate of the probability of that event occurring. Insurance companies do this regularly, and the result is actuarial tables that give the probability that a person of a particular age will live through the year. These tables cannot say what will happen to a specific individual, but they can accurately describe what will happen to a population as a whole. The tables allow insurance companies to calculate premiums and remain profitable.
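
As a rough sketch of the arithmetic, here is the frequentist calculation in a few lines of Python. The figures are made up for illustration and are not real actuarial data:

# Frequentist estimate: observed frequency = outcomes of interest / total population.
# The figures below are purely illustrative, not real actuarial data.
deaths_this_year = 40_000        # hypothetical deaths from a specific cause
total_population = 320_000_000   # hypothetical total population

probability = deaths_this_year / total_population

print(f"Estimated annual probability: {probability:.6f}")   # 0.000125
print(f"That is roughly 1 in {round(1 / probability):,}")   # roughly 1 in 8,000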

In the case of the US population and motor vehicles, the lifetime probability of death in a car accident is 1 in 100, or 1%. This statistic says that given many groups of 100 people in the US chosen at random, on average one person in each of those groups will die in a car accident.

Additionally, this approach assumes that every member of the population has the same chance as everyone else and that, whatever the event being calculated, the numbers involved are large enough for the result to be accurate in aggregate.

The probability estimate is based on observable and recorded information. The frequency arrived at is a reflection of the real world. There is no opinion involved. The probability is inherent in the thing being studied, like the odds of a particular number coming up on a die. It can only be determined by repeated trials and is intrinsic to the nature of the die itself. It cannot be changed, only discovered.
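
That such an intrinsic probability can only be discovered by repeated trials can be shown with a short simulation. The Python sketch below estimates the chance of rolling a six on a fair die; the number of trials is arbitrary:

import random

# Estimate the probability of rolling a six by repeated trials.
# The true value (1/6 for a fair die) is a property of the die itself;
# the trials can only uncover it, not change it.
trials = 100_000
sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)

print(f"Observed frequency of a six: {sixes / trials:.4f}")  # close to 0.1667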

In Kevin’s case specifically, he has had only one accident. He is an individual. The event is not repeatable (at least he hopes not). He also drives in Southern California as opposed to South Dakota, where there are far fewer cars. The one claim every 17.9 years is a population average for the entire US. Perhaps his accident rate should be one every ten years because of the volume of traffic? What should he do?

The frequentist approach has little to say about individual cases and infrequent events. He is on his own.

There is another form of probability that has had a rocky history. It has only gained acceptance over the last thirty years.

In the 1740s, the Reverend Thomas Bayes came up with a means of determining how the probability of an event changes in light of new information.

At the beginning of the 19th century, the French mathematician Pierre-Simon Laplace gave it its modern form. It still bears the name Bayes’ Theorem, and the probability it describes is called Bayesian probability.

Bayesian probability could be considered evidential probability. Bayes started with a thought experiment. He imagined sitting with his back to a square table after an associate had placed a cue ball somewhere on it, so that he did not know where on the table the cue ball lay. He then imagined the associate tossing another ball onto the table and telling him whether it landed to the right or to the left of the cue ball, then throwing several more balls, each time reporting whether the ball landed to the right or the left.

Bayes discovered that as more balls were thrown, each new piece of information confined the cue ball to a smaller and smaller region of the table.

Bayes used his knowledge of the present (the throws) to say something about the past (where the cue ball had been placed). Although he might never know the exact location of the cue ball, given enough information he could become more and more certain of where it was.
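
The narrowing can be simulated in a few lines of Python. The sketch below is a simplification in one dimension: it keeps only the hard bounds implied by each left-or-right report rather than the full distribution Bayes worked out, but it shows the cue ball being confined to a smaller and smaller interval:

import random

# One-dimensional sketch of Bayes' table: the cue ball sits at an unknown
# position on a table of unit width. Each new ball lands at random, and we
# are told only whether it fell to the left or the right of the cue ball.
cue = random.random()          # the unknown position we are trying to pin down
low, high = 0.0, 1.0           # interval still consistent with the reports so far

for throw in range(1, 21):
    ball = random.random()
    if ball < cue:             # report: "the ball landed to the left"
        low = max(low, ball)
    else:                      # report: "the ball landed to the right"
        high = min(high, ball)
    print(f"After throw {throw:2d}: cue ball lies between {low:.3f} and {high:.3f}")

print(f"Actual position: {cue:.3f}")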

In Kevin’s case, he thought he was a pretty good driver, and he has now had an accident. He could simply dismiss it by blaming the other driver, deciding he had had a bad day, or writing it off as a minor incident. He could retain his belief and continue as usual.

This would tend to be the norm.

On the other hand, it might be in Kevin’s best interest to reconsider his driving prowess in light of this new evidence. Before, he thought he was at least in the top 25% of all drivers. Now he considers that this might not be the case; he may be simply average. He has changed his point of view. As a result, he might decide that he needs to do something to improve his skills, perhaps by taking a defensive driving course, or by having his eyesight checked and getting new glasses.
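
The kind of revision Kevin makes can be written out with Bayes’ Theorem. The Python sketch below uses entirely hypothetical numbers: suppose he starts out 75% confident that he is an above-average driver, and suppose above-average drivers have a 5% chance of an accident in a given year against 10% for average drivers:

# Hypothetical prior belief and likelihoods, for illustration only.
prior_good = 0.75              # belief that Kevin is an above-average driver
prior_avg = 0.25               # belief that he is merely average
p_accident_given_good = 0.05   # assumed yearly accident chance if above average
p_accident_given_avg = 0.10    # assumed yearly accident chance if average

# Bayes' Theorem: P(good | accident) = P(accident | good) * P(good) / P(accident)
p_accident = prior_good * p_accident_given_good + prior_avg * p_accident_given_avg
posterior_good = prior_good * p_accident_given_good / p_accident

print(f"Belief he is above average, before the accident: {prior_good:.0%}")
print(f"Belief he is above average, after the accident:  {posterior_good:.0%}")  # 60%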

Bayesian probability has been received with applause or with jeers, depending on which period of history one looks at. In some circles it is still not accepted, primarily because subjectivity is involved.

Under the strict rules of the frequentist, allowing opinion into the mix reduces the results to qualified guesses. The only valid observations are real-world facts. Individual assessments of likelihood are not only personal opinion but bad science.

In practice, however, the Bayesian approach has proved useful, for example in searching for a missing aircraft. How this works is that one starts with a best guess (the aircraft went down somewhere in this area). A search team is sent but finds nothing. In light of this new information, the likelihood of finding the aircraft in that area decreases, while other areas that were initially considered unlikely are recalibrated at a higher level. Resources are reallocated accordingly. While this shortcuts the mathematics of search, which is surprisingly extensive, it gives one a flavor of the fluid nature of the Bayesian approach.
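
A bare-bones version of that search update might look like the following Python sketch. The prior beliefs, the three areas, and the 80% chance of spotting the wreck if it is actually in the area searched are all illustrative assumptions:

# Prior belief about where the aircraft went down (illustrative numbers only).
prior = {"Area A": 0.60, "Area B": 0.30, "Area C": 0.10}
p_detect = 0.80   # assumed chance of finding the wreck if we search the right area

# Area A is searched and nothing is found. The likelihood of that outcome is
# (1 - p_detect) if the wreck is in Area A, and 1 if it is somewhere else.
likelihood = {"Area A": 1 - p_detect, "Area B": 1.0, "Area C": 1.0}

unnormalized = {area: prior[area] * likelihood[area] for area in prior}
total = sum(unnormalized.values())
posterior = {area: value / total for area, value in unnormalized.items()}

for area in prior:
    print(f"{area}: prior {prior[area]:.0%} -> posterior {posterior[area]:.0%}")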

In the early 1970s, Norman Rasmussen of MIT was appointed to assess the safety of the nuclear power industry. The problem was that there had never been a nuclear plant accident. He worked at first with the failure rates of various valves, pumps, and other equipment, but there was not enough information. He turned to Bayesian analysis, which at the time was viewed in a dim light by the scientific community, and he tried to couch the fact that Bayesian techniques had been used in other terms. The report was delivered to the Atomic Energy Commission in 1974.

Prevailing expert opinion was that the probability of a nuclear plant accident that damaged the nuclear core was low, but that if such damage should occur, the consequences would very likely be catastrophic. The Rasmussen report, on the other hand, set the probability of core damage significantly higher, but with a lower probability of catastrophic consequences.

When the report was studied and it became known that Bayesian analysis had been used to model the results, AEC support for the report was withdrawn.

Five years later his report was vindicated: the core of the Three Mile Island plant was severely damaged in a nuclear accident, yet the outcome was not catastrophic.4

Over the years Bayesian probability analysis has become a standard tool to estimate the likelihood of infrequent events.

In Kevin’s case, he finally concluded, in light of his accident, that he had had a somewhat inflated idea of his driving skill. He signed up for a high-speed defensive driving course to hone his abilities.

His decision to upgrade his skill set was based on the idea that the likelihood of a future accident had increased in light of new evidence, his recent fender bender.

Unlike most people, he did not hold onto his previous beliefs in spite of contrary evidence. Rather, he changed his mind.  He took a Bayesian approach.

  1. Svenson, O. (1981). “Are we all less risky and more skillful than our fellow drivers?” Acta Psychologica 47 (2): 143–148. Retrieved October 22, 2013, from http://www.sciencedirect.com/science/article/pii/0001691881900056.
  2. Kruger, J., & Dunning, D. (1999). “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology, Vol. 77, No. 6: 1121–1134. Retrieved October 22, 2013, from http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf.
  3. Toups, D. (2011). “How Many Times Will You Crash Your Car?” Forbes. Retrieved October 22, 2013, from http://www.forbes.com/sites/moneybuilder/2011/07/27/how-many-times-will-you-crash-your-car/.
  4. McGrayne, S. B. (2011). The Theory That Would Not Die. New Haven, CT: Yale University Press.

© 2013 Ivan Obolensky. All rights reserved. No part of this publication can be reproduced without the written permission from the author.