
Study: Subconscious human bias exists in tournament selection


In the minutes after the NCAA selection committee announced this year's tournament bracket on Sunday night, the most curious decisions became instant debate fodder. On CBS, SI's Seth Davis wondered aloud about the respect given to the Pac-10 -- a relatively weak conference this season -- in the form of a No. 8 seed for regular-season champ Cal. On ESPN, Dick Vitale railed against the committee for awarding Wake Forest an at-large bid and leaving out Virginia Tech, which had a better overall record, a better ACC record and a head-to-head win against the Demon Deacons.

Had either read a soon-to-be published study by three economists, they might not have been so surprised.

The study, by Jay Coleman, Mike DuMond and Allen Lynch, looked at selection data from 10 tournaments (1999-2008) and found that when seeding the tournament, membership in one of the six BCS conferences is worth an average of an extra 1.75 seeds. The study also found that having a conference representative on the 10-member selection committee resulted not only in a higher seed but also in a better chance of getting an at-large bid. According to the authors, a true bubble team (one with a 50-50 chance of getting in or being left out) would have a 49 percent better chance of getting in if its athletic director is on the committee, a 41 percent better chance if its conference commissioner is on the committee and a 23 percent better chance if a fellow conference AD is a member of the committee.

According to the researchers, Wake Forest would have beaten out Virginia Tech this year even after removing the controls for selection committee bias. Hokies fans should be angrier about their team's abysmal out-of-conference schedule, but it probably didn't escape their notice that Wake Forest athletic director Ron Wellman is a member of the selection committee.

The study's authors aren't accusing Wellman or any other selection committee member of deliberately rigging the process. In fact, they realize Wellman wouldn't have even been allowed in the room when the committee voted to grant Wake Forest an at-large bid, nor would he have been allowed to offer an opinion on fellow ACC member Virginia Tech. What the authors are suggesting is that the selection process setup allows subconscious biases to creep into the proceedings.

"We're accusing the committee of being human," said Coleman, a professor at the University of North Florida. "It's human nature. We all are biased at some level."

While the authors' findings probably wouldn't surprise anyone who has ever sat on a barstool and debated the selection committee's seeding and at-large choices, this is the first time anyone has performed a statistical analysis to determine whether the conventional wisdom is true. The appearance of bias, meanwhile, is a sensitive subject to the NCAA, which distributes millions of dollars to conferences and schools based on how many games each school plays in the tournament.

Greg Shaheen, the NCAA's senior vice president of basketball and business strategies, said the committee goes out of its way to eliminate bias, and he pointed out that -- at least with the finding of bias in at-large selections -- the selections in question accounted for only about three percent of the selections studied, making the sample size relatively small. Shaheen also said the authors did not account for the rules that govern seeding and bracketing the tournament.

After reviewing the study, Tom Paskus, the NCAA's head research scientist, expressed several concerns with the study's methodology. In one case, he called a particular set of data "nonsensical." The authors counter that the study was evaluated independently before it was approved for publication in Managerial and Decision Economics. Emory University professor Paul Rubin, the journal's editor, confirmed in a phone interview that the study was peer reviewed by an economist who specializes in sports. In the world of academic journals, such a reviewer is called, coincidentally enough, a referee. Rubin declined to name the referee in order to protect the double-blind review process, but he said the referee approved the paper for publication after one round of revisions. The study also was sent for review to University of Michigan sports management professor Rod Fort, one of the nation's leading researchers on the business of college sports. Fort echoed Shaheen's sample-size concern, but he said he found no major methodological flaws.

The project began 10 years ago as a particularly nerdy version of a traditional March pastime: forecasting the bracket. Coleman stumbled upon Jerry Palm's site, a trove of roundball data that included almost every statistic an economist could want to build a model. "For folks like the three of us," Coleman said, "something like that is a gold mine." So Coleman brought in Lynch, who is now a professor at Mercer University in Macon, Ga. (Lynch and DuMond also are co-designers of the college football recruit prediction model that helped inspire the 2009 State of Recruiting project.) The pair used the usual factors such as record, conference record, RPI, quality wins and other on-court statistics to build the model.

After several years of predicting the tournament field using their Dance Card formula, Coleman and Lynch noticed they usually missed two or three at-large teams a year. They also noticed the teams they missed on seemed to have similar characteristics. They realized they needed to change the formula after the 2006 tournament.

For most, the defining memory of the 2006 tournament is watching George Mason, an at-large team from the Colonial Athletic Association, crash the Final Four. For Coleman, DuMond and Lynch, it was that selection Sunday. Air Force hadn't even bothered to gather as a team to watch the selection show. The Falcons got an at-large bid. Meanwhile, George Mason's situation looked dire thanks to a pair of losses to Hofstra -- a team with a similar resume and an RPI of 30. George Mason got an at-large bid. Hofstra didn't. "I need someone to explain this process to me," Hofstra coach Tom Pecora told The Buffalo News.

Little did Pecora know Coleman and Lynch were about to recruit the man who might provide an explanation. DuMond, steamed that his alma mater, Florida State, had seen its bubble burst that year, happily agreed to join the project. The information Coleman and Lynch sought was in DuMond's professional wheelhouse. A principal at Charles River Associates in Tallahassee, Fla., DuMond specializes in discrimination issues involving labor and employment, so he knows how to mine data for bias. In looking at the at-large bids that year, DuMond noticed a common complaint that can best be described with a passage from a story John Markon wrote for the March 14, 2006 edition of the Richmond (Va.) Times-Dispatch.

This year, everyone's complaining about the same three or four choices - Air Force, Utah State, California and George Mason. We've heard all the arguments based on RPI, quality losses, record in the most recent 10 games, etc.

Let's make it slightly simpler:

Air Force: A Mountain West Conference AD, Chris Hill of Utah, was on the committee.

California: A Pacific 10 Conference AD, Dan Guerrero of UCLA, was on the committee.

Utah State: Western Athletic Conference Commissioner Karl Benson was on the committee.

George Mason: Mason AD Tom O'Connor was on the committee.

So DuMond tweaked the model to test several hypotheses, including whether committee representation aided teams. The trio fed the previous tournaments' data back into the model, and the number of misses dropped. Accuracy rose from 94 percent to 97 percent.

How accurate is the model now? On Sunday, it correctly predicted 33 of the 34 at-large teams. The lone miss fell just on the other side of the trio's cut line. The model predicted Mississippi State (with an 81.4 percent chance of getting a bid) would be the last team in, while Florida (with an 80.2 percent chance) would be the first team out. The Gators got a bid. The Bulldogs didn't.

"Once we started controlling for things like the membership representation on the committee as well as some other factors, our predictions got more accurate," DuMond said. "In some sense, that's the best proof that these things matter."

It's not proof enough for the NCAA's Paskus, who had major issues with the model that found bias in at-large selections. His chief complaint was that the authors tilted the study toward their findings by using too high a P-value, a statistical term describing the probability that an observed result is due to chance alone. A P-value of .01 means there is only a one percent chance the result is random noise. In some instances, the authors used a P-value of .1 in the at-large model, meaning there is a 10 percent chance of a random result. "The standard practice in the social sciences -- and I assume economics is still holding to this -- when you're looking into that many tests of significance, you're supposed to take that into account in the P-value you use," Paskus said. "So rather than stacking the odds in your favor by using that .1 and trying to find a bunch of significant results, you're supposed to go in the other direction and look at a .01 or .001 to be sure the effects you're seeing aren't just due to chance."
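Paskus' multiple-testing point can be illustrated with a few lines of arithmetic (a sketch, not part of the study; it assumes the tests are independent): the more significance tests you run at a loose threshold, the more likely at least one "significant" result is pure chance.

```python
# Illustrative sketch of the multiple-comparisons concern (not from
# the study; assumes independent tests). With many tests at a loose
# threshold, a purely random "hit" becomes very likely.

def chance_of_false_positive(num_tests, alpha):
    """Probability of at least one false positive across independent tests."""
    return 1 - (1 - alpha) ** num_tests

for alpha in (0.1, 0.01, 0.001):
    print(f"alpha={alpha}: 20 tests -> {chance_of_false_positive(20, alpha):.1%}")
```

At a .1 threshold, 20 independent tests give roughly an 88 percent chance of at least one random "significant" result; at .001, the chance falls to about 2 percent, which is why Paskus argues for the stricter cutoff.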

The authors contend that the most important findings -- the committee representation bias and favoritism toward the Pac-10 and Big East in at-large selections -- were statistically significant to the .01 level. They also pointed out that in the model that found bias in seeding, the results were statistically significant at the .0001 level, meaning there is a one-hundredth of one percent chance of a random result.

Paskus also took issue with some of the odds ratios used by the authors. An odds ratio compares the odds of an event happening in one group with the odds of it happening in another, with all other factors equal. In the case of Pac-10 teams receiving an at-large selection, the authors found an odds ratio of 9,999. "Those numbers, statistically, are nonsensical," Paskus said. "The odds ratios just can't be that large, especially given the data they have where they're only looking at a couple hundred teams over a 10-year period."

The authors argue that while the number seems high, when translated using standard statistical measures, it isn't. DuMond wrote in an e-mail that if, for example, UCLA and Siena each have a 90 percent chance -- based on their on-court resumes -- of receiving an at-large bid, the bias factor would instead give UCLA a 99.99 percent chance of receiving a bid. Siena's chance would remain at 90 percent. Such a difference, DuMond wrote, probably wouldn't be visible to the naked eye.
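DuMond's translation can be checked with a little arithmetic (a sketch using his hypothetical UCLA/Siena numbers; the 9,999 figure is the study's at-large odds ratio): an odds ratio multiplies a team's odds, not its probability, so even a huge ratio moves an already-high probability only slightly.

```python
# Sketch of DuMond's odds-ratio translation (illustrative only,
# using his hypothetical 90 percent baseline from the article).

def apply_odds_ratio(p, odds_ratio):
    """Multiply a probability's odds by an odds ratio; return the new probability."""
    odds = p / (1 - p)            # 90 percent chance -> odds of 9-to-1
    new_odds = odds * odds_ratio  # the bias factor scales the odds
    return new_odds / (1 + new_odds)

baseline = 0.90                   # resume alone: 90 percent for either team
biased = apply_odds_ratio(baseline, 9999)
print(f"{biased:.2%}")            # roughly 99.99 percent
```

Run on DuMond's numbers, a 90 percent chance becomes about 99.99 percent, which is the naked-eye-invisible difference he describes.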

In his independent evaluation, Fort also mentioned the eye-popping odds ratio. "One shouldn't make too much of the idea that, say, a Pac-10 [team] has a 10,000 times higher chance of being selected relative to some other 'minor' team of similar performance-only variables," Fort wrote in an e-mail. He then explained the same translation as DuMond. Fort's main concern was the sample size. He wrote that because the results affected by bias were so few in number, the results may have statistical significance, but they may lack "impact significance."

"They call these observations 'the bias,' but the real question is what makes this bias happen?" Fort wrote. "At the heart of it then is why the NCAA structures this decision process so that it produces these outcomes (the authors must eventually admit, in a very small percentage of the actual cases)? If we observe that committee membership influences outcome, then why allow that committee membership in the first place? ... It is a choice by the NCAA that generates the outcomes that the authors observe. And that is the interesting issue, rather than the fact (that everybody knows anyway) that it occurs."

The NCAA has deterrents in place to discourage bias. For example, a conference commissioner must leave the room whenever a team from the commissioner's conference is discussed. An athletic director must leave the room during discussion of his own team, and he is allowed to offer only facts -- no opinions -- about fellow conference teams. Shaheen said some members go above and beyond. When Bob Bowlsby, then athletic director at Iowa, was the committee chair in 2004 and 2005, he also recused himself from the room when his former employer, Northern Iowa, was discussed. Shaheen also said committee members don't engage in backroom dealings. During their brief respites from the committee room, they may exercise or sleep, but the last thing they want to talk about is seeding or at-large choices. "In 10 years, I have never witnessed an exchange that goes beyond the boundaries," Shaheen said.

DuMond said that while this seems an effective measure on its surface, it doesn't take into account discussions of other teams on the bubble. "They leave the room," DuMond said. "But they also come back in the room. ... A [member] knows [his school is] a bubble team, so he can vote against other bubble teams strategically and save a spot more or less. So while there are some rules in place to at least get rid of the perception of bias, that doesn't necessarily mean that they're working. The statistical evidence suggests that they're not."

Shaheen argues that because a team needs a majority to be seeded or selected as an at-large, one vote isn't likely to swing the decision. He also argued that some at-large decisions involve more than two teams. In some cases, committee members may be discussing the relative merits of five teams from five different conferences. If everyone -- including other schools' athletic directors -- with even tangential involvement left the room, there might not be enough members left to decide. "At some point," Shaheen said, "you have to vote."

The committee also utilizes blind resumes -- team profiles with the team names stripped away -- to help eliminate bias. The problem, according to the researchers, is that committee members are so well prepared going into the selection process that they can't help identifying which team is which using a blind resume. "It's almost impossible," Lynch said, "to make this a blind decision."

So what can the NCAA do to eliminate the perception of bias? DuMond suggests more transparency. "I don't understand why the NCAA doesn't let media people in the room when they're having this debate," DuMond said. "They act like it's some secretive process and it would be somehow unfair for the media to report on it. But if the U.S. Congress lets reporters in as they're making laws that affect millions of people, I don't see why they can't have reporters in there watching 10 guys pick 64 basketball teams."

Every year, the NCAA does hold a mock selection committee for selected media members to help reporters better understand the herculean task of filling the bracket while still following the tournament's principles and procedures, but reporters are not allowed to view the actual selection.

DuMond's idea has some merit, but allowing reporters into the room wouldn't eliminate the perception of bias because the accounts of the deliberations would be filtered through the reporters, who also are subject to their own subconscious biases. Here's another idea. John List, an economics professor at the University of Chicago, has studied altruism for years. Through his research --some done on the cutthroat trading floor of baseball card shows -- he discovered that people tend to be altruistic when they know they're being watched.

So why not station cameras in the committee room and turn it into a weekend-long reality show? Fans would certainly watch, and the NCAA could generate some more revenue that it could in turn distribute to member schools. Meanwhile, committee members would know that any bias in their decision would be immediately sniffed out by the fans at home, so they might be a little more conscious of their subconscious biases.

That wouldn't work, Shaheen said, because eventually the committee members must leave the room. Once outside, they would get skewered by fans and by colleagues for the opinions they expressed in the selection room. "You have to be able to know that you can say something honest and critical," Shaheen said.

That's a very human concern for a process that will forever be criticized and scrutinized because of its very humanity. "I don't think they're trying to actively screw other schools to benefit themselves," DuMond said. "But people act in certain ways where biases will show up."