Study: Subconscious human bias exists in tournament selection
In the minutes after the NCAA selection committee announced this year's tournament bracket on Sunday night, the most curious decisions became instant debate fodder for the pundits on CBS and at SI.
Had any of them read a soon-to-be-published study by three economists, they might not have been so surprised.
The study, by economists Coleman, DuMond and Lynch, found that teams with representation on the selection committee received measurably better treatment in at-large selection and seeding than their on-court resumes alone would predict.
According to the researchers, Wake Forest would have beaten out Virginia Tech this year even after removing the controls for selection committee bias. Hokies fans should be angrier about their team's abysmal out-of-conference schedule, but it probably didn't escape their notice that Wake Forest athletic director Ron Wellman sat on this year's selection committee.
The study's authors aren't accusing Wellman or any other selection committee member of deliberately rigging the process. In fact, they realize Wellman wouldn't have even been allowed in the room when the committee voted to grant Wake Forest an at-large bid, nor would he have been allowed to offer an opinion on fellow ACC member Virginia Tech. What the authors are suggesting is that the selection process setup allows subconscious biases to creep into the proceedings.
"We're accusing the committee of being human," said Coleman, a professor at the University of North Florida. "It's human nature. We all are biased at some level."
While the authors' findings probably wouldn't surprise anyone who has ever sat on a barstool and debated the selection committee's seeding and at-large choices, this is the first time anyone has performed a statistical analysis to determine whether the conventional wisdom is true. The appearance of bias, meanwhile, is a sensitive subject for the NCAA, which distributes millions of dollars to conferences and schools based on how many games each school plays in the tournament.
After reviewing the study, the NCAA's principal research scientist, Tom Paskus, took issue with its methodology.
SI.com also sent the study to be reviewed by University of Michigan sports management professor Rodney Fort.
The project began 10 years ago as a particularly nerdy version of a traditional March pastime: forecasting the bracket. Coleman, working with Lynch, built a statistical model to predict which teams the committee would put in the field.
After several years of predicting the tournament field using their model, Coleman and Lynch were left baffled by the 2006 selections.
For most, the defining memory of the 2006 tournament is watching George Mason, an at-large team from the Colonial Athletic Association, crash the Final Four. For Coleman, DuMond and Lynch, it was that selection Sunday. Air Force hadn't even bothered to gather as a team to watch the selection show. The Falcons got an at-large bid. Meanwhile, George Mason's situation looked dire thanks to a pair of losses to Hofstra -- a team with a similar resume and an RPI of 30. George Mason got an at-large bid. Hofstra didn't. "I need someone to explain this process to me," Hofstra coach Tom Pecora said at the time.
Little did Pecora know Coleman and Lynch were about to recruit the man who might provide an explanation. DuMond, steamed that his alma mater, Florida State, had seen its bubble burst that year, happily agreed to join the project. The information Coleman and Lynch sought was in DuMond's professional wheelhouse. A principal at Charles River Associates in Tallahassee, Fla., DuMond specializes in discrimination issues involving labor and employment, so he knows how to mine data for bias. In looking at the at-large bids that year, DuMond noticed a common complaint: teams with ties to the people in the room seemed to get the benefit of the doubt.
So DuMond tweaked the model to test several hypotheses, including whether committee representation aided teams. The trio fed the previous tournaments' data back into the model, and the number of misses dropped. Accuracy rose from 94 percent to 97 percent.
How accurate is the model now? On Sunday, it correctly predicted 33 of the 34 at-large teams. The lone miss fell just on the other side of the trio's cut line. The model predicted Mississippi State (with an 81.4 percent chance of getting a bid) would be the last team in, while Florida (with an 80.2 percent chance) would be the first team out. The Gators got a bid. The Bulldogs didn't.
"Once we started controlling for things like the membership representation on the committee as well as some other factors, our predictions got more accurate," DuMond said. "In some sense, that's the best proof that these things matter."
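The modeling idea DuMond describes is straightforward to sketch. The toy logistic model below is purely illustrative -- the `bid_probability` function, the single `resume_score` input and the `bias_weight` coefficient are our assumptions, not the study's actual specification -- but it shows how adding a committee-representation variable can shift one team's predicted chance of a bid while leaving an otherwise identical team untouched:

```python
import math

def bid_probability(resume_score, committee_rep, bias_weight=2.3):
    """Toy logistic model of an at-large bid.

    resume_score  -- one number standing in for RPI, record, schedule
                     strength, etc. (illustrative only)
    committee_rep -- True if the school has a representative on the
                     selection committee
    bias_weight   -- illustrative coefficient on the representation
                     dummy; NOT an estimate from the study
    """
    z = resume_score + (bias_weight if committee_rep else 0.0)
    return 1.0 / (1.0 + math.exp(-z))

# Two teams with identical resumes; only committee representation differs.
with_rep = bid_probability(1.0, committee_rep=True)
without_rep = bid_probability(1.0, committee_rep=False)
print(round(with_rep, 3), round(without_rep, 3))
```

Fitting the size of that representation effect to a decade of real selections -- and asking whether it is statistically distinguishable from zero -- is, in essence, what the study does.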
It's not proof enough for the NCAA's Paskus, who had major issues with the model that found bias in at-large selections. His chief complaint was that the authors tilted the study toward their findings by using too high of a P-value, the statistical threshold for judging whether a result could have occurred by chance alone. A P-value of .01 means there is only a one percent chance a result that strong would show up at random. In some instances, the authors used a P-value of .1 in the at-large model, accepting a 10 percent chance of a random result. "The standard practice in the social sciences -- and I assume economics is still holding to this -- when you're looking into that many tests of significance, you're supposed to take that into account in the P-value you use," Paskus said. "So rather than stacking the odds in your favor by using that .1 and trying to find a bunch of significant results, you're supposed to go in the other direction and look at a .01 or .001 to be sure the effects you're seeing aren't just due to chance."
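Paskus' point about running many significance tests can be made concrete with a standard back-of-the-envelope calculation (ours, not his or the study's): the more tests you run at a loose threshold, the more likely at least one purely random result sneaks through.

```python
def family_error_rate(alpha, num_tests):
    """Chance of at least one false positive across independent
    significance tests, each run at threshold alpha."""
    return 1 - (1 - alpha) ** num_tests

# One test at .1 risks a 10 percent fluke; twenty such tests
# risk roughly an 88 percent chance of at least one fluke.
print(family_error_rate(0.1, 1))
print(family_error_rate(0.1, 20))
# Tightening to .01 pulls that family-wide risk back down.
print(family_error_rate(0.01, 20))
```

That arithmetic is why Paskus argues for the stricter .01 or .001 thresholds when a model tests many hypotheses at once.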
The authors contend that the most important findings -- the committee representation bias and favoritism toward the Pac-10 and Big East in at-large selections -- were statistically significant to the .01 level. They also pointed out that in the model that found bias in seeding, the results were statistically significant at the .0001 level, meaning there is a one-hundredth of one percent chance of a random result.
Paskus also took issue with some of the odds ratios the authors reported. An odds ratio compares the odds of an outcome under one condition with the odds of that same outcome under another, all other factors being equal. In the case of Pac-10 teams receiving an at-large selection, the authors found an odds ratio of 9,999. "Those numbers, statistically, are nonsensical," Paskus said. "The odds ratios just can't be that large, especially given the data they have where they're only looking at a couple hundred teams over a 10-year period."
The authors argue that while the number seems high, when translated using standard statistical measures, it isn't. DuMond wrote in an e-mail that if, for example, UCLA and Siena each have a 90 percent chance -- based on their on-court resumes -- of receiving an at-large bid, the bias factor would instead give UCLA a 99.99 percent chance of receiving a bid. Siena's chance would remain at 90 percent. Such a difference, DuMond wrote, probably wouldn't be visible to the naked eye.
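DuMond's translation is easy to verify. The helper below (our illustrative sketch; the function name is ours) converts a probability to odds, scales the odds by the ratio, and converts back:

```python
def apply_odds_ratio(p, odds_ratio):
    """Scale a probability's odds by an odds ratio and return
    the resulting probability."""
    odds = p / (1.0 - p)          # 0.90 -> odds of 9-to-1
    new_odds = odds * odds_ratio  # 9 * 9,999 = 89,991-to-1
    return new_odds / (1.0 + new_odds)

# A 90 percent bid chance, bumped by the study's 9,999 odds ratio,
# lands just shy of certainty -- about 99.99 percent.
print(apply_odds_ratio(0.90, 9999))
```

As DuMond notes, the jump from 90 percent to 99.99 percent is real but would be nearly invisible in practice, since both teams were overwhelming favorites to begin with.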
In his independent evaluation, Fort also mentioned the eye-popping odds ratio. "One shouldn't make too much of the idea that, say, a Pac-10 [team] has a 10,000 times higher chance of being selected relative to some other 'minor' team of similar performance-only variables," Fort wrote in an e-mail. He then explained the same translation as DuMond. Fort's main concern was the sample size. He wrote that because the results affected by bias were so few in number, the results may have statistical significance, but they may lack "impact significance."
"They call these observations 'the bias,' but the real question is what makes this bias happen?" Fort wrote. "At the heart of it then is why the NCAA structures this decision process so that it produces these outcomes (the authors must eventually admit, in a very small percentage of the actual cases)? If we observe that committee membership influences outcome, then why allow that committee membership in the first place? ... It is a choice by the NCAA that generates the outcomes that the authors observe. And that is the interesting issue, rather than the fact (that everybody knows anyway) that it occurs."
The NCAA has deterrents in place to discourage bias. For example, a conference commissioner must leave the room whenever a team from the commissioner's conference is discussed. An athletic director must leave the room during discussion of his own team, and he is allowed to offer only facts -- no opinions -- about fellow conference teams. NCAA senior vice president Greg Shaheen said some members go above and beyond those requirements.
DuMond said that while this seems an effective measure on its surface, it doesn't take into account discussions of other teams on the bubble. "They leave the room," DuMond said. "But they also come back in the room. ... A [member] knows [his school is] a bubble team, so he can vote against other bubble teams strategically and save a spot more or less. So while there are some rules in place to at least get rid of the perception of bias, that doesn't necessarily mean that they're working. The statistical evidence suggests that they're not."
Shaheen argues that because a team needs a majority to be seeded or selected as an at-large, one vote isn't likely to swing the decision. He also argued that some at-large decisions involve more than two teams. In some cases, committee members may be discussing the relative merits of five teams from five different conferences. If everyone -- including other schools' athletic directors -- with even tangential involvement left the room, there might not be enough members left to decide. "At some point," Shaheen said, "you have to vote."
The committee also utilizes blind resumes -- team profiles with the team names stripped away -- to help eliminate bias. The problem, according to the researchers, is that committee members are so well prepared going into the selection process that they can't help identifying which team is which using a blind resume. "It's almost impossible," Lynch said, "to make this a blind decision."
So what can the NCAA do to eliminate the perception of bias? DuMond suggests more transparency. "I don't understand why the NCAA doesn't let media people in the room when they're having this debate," DuMond said. "They act like it's some secretive process and it would be somehow unfair for the media to report on it. But if the U.S. Congress lets reporters in as they're making laws that affect millions of people, I don't see why they can't have reporters in there watching 10 guys pick 64 basketball teams."
Every year, the NCAA does hold a mock selection exercise for select media members to help reporters better understand the herculean task of filling the bracket while following the tournament's principles and procedures, but reporters are not allowed to view the actual selection.
DuMond's idea has some merit, but allowing reporters into the room wouldn't eliminate the perception of bias because the accounts of the deliberations would be filtered through the reporters, who also are subject to their own subconscious biases. Here's another idea.
So why not station cameras in the committee room and turn it into a weekend-long reality show? Fans would certainly watch, and the NCAA could generate some more revenue that it could in turn distribute to member schools. Meanwhile, committee members would know that any bias in their decision would be immediately sniffed out by the fans at home, so they might be a little more conscious of their subconscious biases.
That wouldn't work, Shaheen said, because eventually the committee members must leave the room. Once outside, they would get skewered by fans and by colleagues for the opinions they expressed in the selection room. "You have to be able to know that you can say something honest and critical," Shaheen said.
That's a very human concern for a process that will forever be criticized and scrutinized because of its very humanity. "I don't think they're trying to actively screw other schools to benefit themselves," DuMond said. "But people act in certain ways where biases will show up."