Certain players perform far better in the October spotlight than others, but can that performance be predicted? A new study at NYU attempts to answer that question.
Imagine, all you would-be MLB front office types, if you could predict how players would perform in the postseason. I’m not suggesting crystal-ball stuff, like foreseeing that someone will thwack home runs in six straight playoff games (as the Mets’ Daniel Murphy did in 2015) or that a guy might commit two errors in one inning of a Division Series game (the Rangers’ Elvis Andrus in Game 5 against Toronto). Imagine, though, if you could predict postseason performance in a general, bottom-line sort of way, similar to the way we predict performance in the regular season.
Regular season predictions may be miles from perfect, and mistakes can be costly (as the free agent contracts for the likes of Carl Crawford and Jason Bay can attest), but overall there’s a pretty high level of correlation between projected and actual production. That correlation drives player value. Projected regular-season performance—based largely on the sprawl of statistics that every club has on every player—explains why Chris Davis commanded $161 million from the Orioles this offseason, Jeff Samardzija got $90 million from the Giants and Justin Maxwell settled for a minor league deal from the Marlins.
Yet there’s no metric, or suite of metrics, for assessing how guys might do in future postseasons. That’s mainly because the sample sizes of previous playoff performances are small. But wouldn’t it be swell, as you sought to fill the roster of your pennant-contending team, to have an idea, any idea, of how certain players might perform in the most important games of the season?
More precisely, what if you could identify characteristics in players that told you who among them was predisposed to thrive, and who was likely to wilt?
That’s what four graduate students at NYU’s School of Professional Studies Tisch Sports Institute recently set out to do. And the early research by Juan Abreu, Ian Chuang, Nicholas Durst and Daniel Fox has yielded some intriguing—if inconclusive—results.
To start, let’s accept the premise that postseason baseball is something other than regular-season baseball. “It is. The two are very, very different,” says Hall of Fame pitcher John Smoltz, who was a raging stud (15-4, 2.67 ERA) over 209 postseason innings between 1991 and 2009. The difference comes not only from the increased pressure and higher stakes of the postseason (though those are factors), but also from environmental conditions. The playoffs follow a long season’s grind. Games are typically played in cooler weather and before larger, noisier crowds. Not least, the players’ entrenched routines get upset by unusual travel schedules, atypical media needs, higher public relations demands, sudden managerial decisions and so on. Regular season and postseason baseball are very similar, but they aren’t equivalent. And in a sport with such thin margins for error, it’s reasonable to believe that the differences between the two could have an impact.
The first task is to determine which players, by statistical measures, have truly risen or fallen in the playoffs. The NYUSPS foursome (whom I advised along the way) looked across the 21 seasons of the wild-card era and considered every player who has appeared in the playoffs during that time. They built a database of players with at least 50 career postseason at-bats. It’s a low at-bat minimum on which to base valuations, no doubt, but the idea, at the outset at least, was to capture a large group of players.
The group found 310 players who met that 50 at-bat threshold, from 1995 through the end of the 2015 World Series. More than half those players (160) had upward of 100 at-bats.
As a measurement of batter proficiency, the researchers logged the career regular season on-base percentage plus slugging percentage (OPS) for each player. Then they calculated the standard deviation across those 310 OPS numbers, which came to .146. Next, the researchers compared each player’s postseason OPS to his regular season OPS and categorized those whose postseason OPS fell within one standard deviation of their regular season OPS as “relatively consistent.” The players who improved or declined by more than .146 in the postseason were classified as either “relatively better” or “relatively worse.”
Okay, there’s clearly a flaw here: the standard deviation was calculated over the whole population, but it is being applied to individuals. That’s not ideal; among other things, the model might overlook a particular player’s streakiness. But calculating each player’s individual standard deviation presents its own problems, and in this case we’re simply using the population standard deviation to help suss out the true over- and underperformers. Remember that a standard deviation of .146 means that a player with a career regular season OPS of .700 could have a postseason OPS anywhere between .554 and .846 and land in the researchers’ “relatively consistent” category. That’s a wide band. So players described as either “relatively better” or “relatively worse” really have been much better or much worse than in their regular seasons.
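For concreteness, here’s a minimal Python sketch of the classification rule described above. The OPS figures are the career numbers quoted in this article, and the .146 threshold is the study’s published population standard deviation; the full 310-player dataset isn’t reproduced here, so this is an illustration, not the researchers’ actual code.

```python
# Sketch of the NYUSPS classification rule: a player is "relatively
# consistent" if his postseason OPS lands within one population standard
# deviation (.146) of his regular season OPS, and "relatively better"
# or "relatively worse" if it falls outside that band.
SD = 0.146  # population standard deviation reported by the study

def classify(regular_ops: float, postseason_ops: float, sd: float = SD) -> str:
    """Label a player by how far his postseason OPS strays from his norm."""
    diff = postseason_ops - regular_ops
    if diff > sd:
        return "relatively better"
    if diff < -sd:
        return "relatively worse"
    return "relatively consistent"

# Career (regular season OPS, postseason OPS) pairs quoted in this article.
players = {
    "Derek Jeter":     (0.817, 0.839),
    "David Ortiz":     (0.925, 0.962),
    "Alex Rodriguez":  (0.936, 0.822),
    "Carlos Beltran":  (0.845, 1.115),
    "Nelson Cruz":     (0.844, 1.016),
    "Alcides Escobar": (0.642, 0.793),
}

for name, (reg, post) in players.items():
    print(f"{name}: {classify(reg, post)}")
```

Run as written, the sketch sorts Jeter, Ortiz and even A-Rod into the “relatively consistent” band, while Beltran, Cruz and Escobar clear the +.146 bar.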
Of the 310 players, 208 (about 67%, roughly the share a normal distribution predicts within one standard deviation) fell inside that range. Postseason star Derek Jeter, for example, has a career postseason OPS of .839 compared to his regular season .817. That’s +.022, which means Jeter didn’t so much raise his offensive game in the playoffs as maintain it.
Another postseason heartstopper, David Ortiz, also lands among the “relatively consistent” with a .962 postseason OPS that represents a jump of +.037 from his .925 regular season mark. And the group also includes the inimitable Alex Rodriguez, who despite a drop from a regular season OPS of .936 to a postseason .822 (-.114) still qualifies as relatively consistent.
Of the remaining 102 players, 21 exceeded their regular season OPS to the point of being “relatively better” in the postseason. That only 7% of the 310 fell into this top group isn’t entirely a surprise. Hitting should be harder in the playoffs, when batters tend to face stronger pitching staffs and when opposing managers tend to make more targeted pitching changes that result in tougher matchups. That elite 21 includes active stars such as the Yankees’ Carlos Beltran (an impressive leap of +.270 in OPS, from .845 to 1.115), the Orioles’ Nelson Cruz (.844 to 1.016) and the Royals’ Alcides Escobar (.642 to .793). Each of the three has 135 or more postseason at-bats.
As the table below shows, you could almost build an entire active roster of players who have been "relatively better" in the postseason, with only a third baseman missing from the lineup.
| Position | Player | Regular season OPS | Postseason OPS | Difference |
| --- | --- | --- | --- | --- |
| First base | James Loney | .749 | .959 | +.210 |
| Second base | Daniel Murphy | .755 | 1.150 | +.360 |
Then there are the 81 players (26%) who fell into the “relatively worse” category. They include Giants catcher Buster Posey (a drop of -.216 over 188 at-bats, his grand slam in Game 5 of the 2012 NLDS against the Reds notwithstanding); Rangers slugger Prince Fielder (a fall of -.316 over 164 at-bats) and his teammate Josh Hamilton (-.232); Mariners second baseman Robinson Cano (-.164); and the Cubs’ newly signed $184 Million Man, Jason Heyward (-.176). Cooperstown outcast Mark McGwire (-.313) and 2016 Hall of Fame electee Mike Piazza (-.163) also live among the relatively worse.
So now we know how the players stack up, but does anything unify those in a particular group? Are there any shared traits among players who overperform or underperform? Along with OPS, the NYUSPS Tisch Sports Institute graduate researchers charted the players across several other statistics, including batting average, batting average on balls in play and RBIs per game. Nothing emerged that conclusively foretold who would succeed or fail. But two things to think about: First, it’s possible that a player’s tendency toward hot or cold spells influences which category he falls into. And second, the players in the “relatively better” group made their playoff debuts at the somewhat advanced average age of 28.2; 15 of the 21 were older than 27 at the time of their first playoff at-bat. That may indeed mean something.
Whether this is a meaningful trend, and what implications it may have, is not yet clear. What does seem clear is that this is an area in which analytics and shoe-leather scouting could work nicely in tandem. Smoltz believes you can “see it in a guy’s eyes if he has what it takes” to excel in the postseason. Maybe a scout should find as many of those 21 relatively better players as he can and look really, really deeply into their eyes.
There’s also no question that the area of predicting postseason performance bears further inquiry, especially considering how huge the payoff of any edge in predictive ability might be.
Kostya Kennedy is a clinical assistant professor at the NYU School of Professional Studies' Tisch Institute of Sports Management, Media and Business. He is a Sports Illustrated special contributor.