The Indian Wells Quarter of Death

Italian translation at settesei.it

The Indian Wells men’s draw looks a bit lopsided this year. The bottom quarter, anchored by No. 2 seed Novak Djokovic, also features Roger Federer, Rafael Nadal, Juan Martin del Potro, and Nick Kyrgios. It doesn’t take much analysis to see that the bracket makes life more difficult for Djokovic and, by extension, clears the way for Andy Murray. Alas, Murray lost his opening match against Vasek Pospisil on Saturday, making No. 3 seed Stan Wawrinka the luckiest man in the desert.

The draw sets up some very noteworthy potential matches: Federer and Nadal haven’t played before the quarterfinal since their first encounter back in 2004, and Fed hasn’t played Djokovic before the semis in more than 40 meetings, since 2007. Kyrgios, who has now beaten all three of the elites in his quarter, is likely to get another chance to prove his mettle against the best.

I haven’t done a piece on draw luck for a while, and this seemed like a great time to revisit the subject. The principle is straightforward: By taking the tournament field and generating random draws, we can do a sort of “retro-forecast” of what each player’s chances looked like before the draw was conducted–back when Djokovic’s road wouldn’t necessarily be so rocky. By comparing the retro-forecast to a projection based on the actual draw, we can see how much the luck of the draw impacted each player’s odds of piling up ranking points or winning the title.
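The retro-forecast idea can be sketched in a few lines of Python. The ratings below are hypothetical stand-ins on an Elo-like scale (not actual jrank values), the field is trimmed to eight players, and real draws respect seeding rules that this simplified version ignores:

```python
import random

# Hypothetical ratings; illustrative stand-ins, not actual jrank values.
ratings = {"Djokovic": 2200, "Murray": 2150, "Federer": 2050,
           "Nadal": 1990, "Nishikori": 1985, "Wawrinka": 1980,
           "Kyrgios": 1960, "del Potro": 1955}

def win_prob(ra, rb):
    """Chance the first player wins, via the standard logistic curve."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400))

def simulate_bracket(players):
    """Play out one single-elimination bracket and return the champion."""
    while len(players) > 1:
        players = [a if random.random() < win_prob(ratings[a], ratings[b]) else b
                   for a, b in zip(players[::2], players[1::2])]
    return players[0]

def title_odds(trials=20000):
    """Pre-draw title chances: average over many random draws.
    (Real draws respect seeding rules, which this sketch ignores.)"""
    wins = dict.fromkeys(ratings, 0)
    field = list(ratings)
    for _ in range(trials):
        random.shuffle(field)            # a fresh random draw each trial
        wins[simulate_bracket(field)] += 1
    return {p: w / trials for p, w in wins.items()}
```

Running the same simulation on the actual bracket (a fixed ordering instead of a shuffle) gives the post-draw numbers, and the difference between the two is the draw-luck effect.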

Here are the eight players most heavily favored by the pre-draw forecast, along with their chances of winning the title, both before and after the draw was conducted:

Player                 Pre-Draw  Post-Draw  
Novak Djokovic           26.08%     19.05%  
Andy Murray              19.30%     26.03%  
Roger Federer            10.24%      8.71%  
Rafael Nadal              5.46%      4.80%  
Stan Wawrinka             5.08%      7.14%  
Kei Nishikori             5.01%      5.67%  
Nick Kyrgios              4.05%      2.62%  
Juan Martin del Potro     4.00%      2.34%

These odds are based on my jrank rating system, which correlates closely with Elo. I use jrank here instead of Elo because it’s surface-specific. I’m also ignoring the first round of the main draw, which–since all 32 seeds get a first-round bye–is just a glorified qualifying round and has very little effect on the title chances of seeded players.

As you can see, the bottom quarter–the “group of death”–is in fact where title hopes go to die. Djokovic, who is still considered to be the best player in the game by both jrank and Elo, had a 26% pre-draw chance of defending his title, but it dropped to 19% once the names were placed in the bracket. Not coincidentally, Murray’s odds went in the opposite direction. Federer’s and Nadal’s title chances weren’t hit quite as hard, largely because they weren’t expected to get past Djokovic, no matter when they faced him.

The issue here isn’t just luck, it’s the limitation of the ATP ranking system. No one really thinks that del Potro entered the tournament as the 31st favorite, or that Kyrgios came in as the 15th. No set of rankings is perfect, but at the moment, the official rankings do a particularly poor job of reflecting the players with the best chances of winning hard court matches.  The less reliable the rankings, the better chance of a lopsided draw like the one in Indian Wells.

For a more in-depth look at the effect of the draw on players with lesser chances of winning the title, we need to look at “expected ranking points.” Using the odds that a player reaches each round, we can calculate his expected points for the entire event. For someone like Kyle Edmund, who would have almost no chance of winning the title regardless of the draw, expected points tells a more detailed story of the power of draw luck. Here are the ten players who were punished most severely by the bracket:

Player                 Pre-Draw Pts Post-Draw Pts  Effect  
Kyle Edmund                    28.8          14.3  -50.2%  
Steve Johnson                  65.7          36.5  -44.3%  
Vasek Pospisil                 29.1          19.4  -33.2%  
Juan Martin del Potro         154.0         104.2  -32.3%  
Stephane Robert                20.3          14.2  -30.1%  
Federico Delbonis              20.0          14.5  -27.9%  
Novak Djokovic                429.3         325.4  -24.2%  
Nick Kyrgios                  163.5         124.6  -23.8%  
Horacio Zeballos               17.6          14.1  -20.0%  
Alexander Zverev              113.6          91.5  -19.4%

At most tournaments, this list is dominated by players like Edmund and Pospisil: unseeded men with the misfortune of drawing an elite opponent in the first round. It is much less common to see so many seeds–particularly a top-two player–rank among the unluckiest. While Federer and Nadal don’t quite make the cut here, the numbers bear out our intuition: Fed’s draw knocked his expected points from 257 down to 227, and Nadal’s reduced his projected tally from 195 to 178.
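The expected-points arithmetic behind these tables is simple to sketch. The per-round points below approximate a Masters-level event, and the survival probabilities are made up for illustration, not taken from the model:

```python
# Approximate Masters-1000 points for the furthest round reached
# (R32 is effectively the first round, since the top 32 seeds get byes).
POINTS = {"R32": 45, "R16": 90, "QF": 180, "SF": 360, "F": 600, "W": 1000}
ORDER = ["R32", "R16", "QF", "SF", "F", "W"]

def expected_points(reach_probs):
    """Expected ranking points, given P(reaching at least each round).
    Each increment of points is weighted by the chance of earning it."""
    total, prev = 0.0, 0
    for r in ORDER:
        total += reach_probs.get(r, 0.0) * (POINTS[r] - prev)
        prev = POINTS[r]
    return total

# A made-up profile for an unseeded player facing a rough draw:
probs = {"R32": 1.0, "R16": 0.35, "QF": 0.12, "SF": 0.04, "F": 0.01, "W": 0.003}
```

Computing this once with pre-draw round probabilities and once with post-draw probabilities yields the two columns above.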

The opposite list–those who enjoyed the best draw luck–features a lot of names from the top half, including both Murray and Wawrinka. Murray squandered his good fortune, putting Wawrinka in an even better position to take advantage of his own:

Player              Pre-Draw Pts  Post-Draw Pts  Effect  
Malek Jaziri                21.9           31.6   44.4%  
Damir Dzumhur               29.1           39.0   33.9%  
Martin Klizan               27.6           36.4   32.1%  
Joao Sousa                  24.7           31.1   25.9%  
Peter Gojowczyk             20.4           25.5   24.9%  
Tomas Berdych               93.6          116.6   24.6%  
Mischa Zverev               58.5           72.5   23.8%  
Yoshihito Nishioka          26.9           32.6   21.1%  
John Isner                  80.2           97.0   21.0%  
Andy Murray                369.1          444.2   20.3%  
Stan Wawrinka              197.8          237.7   20.1%

Over the course of the season, quirks like these tend to even out. For now, though, Djokovic must be wondering how he angered the draw gods: Just to earn a quarterfinal place against Roger or Rafa, he’ll need to face Kyrgios and Delpo for the second consecutive tournament.

If Federer, Kyrgios, and del Potro can bring their ATP rankings closer in line with their true talent, they are less likely to find themselves in such dangerous draw sections. For Djokovic, that would be excellent news.

Measuring the Performance of Tennis Prediction Models

With the recent buzz about Elo rankings in tennis, both at FiveThirtyEight and here at Tennis Abstract, comes the ability to forecast the results of tennis matches. It’s natural to ask which of these models performs better and, even more interestingly, how they fare against other ‘models’, such as the ATP ranking system or betting markets.

For this admittedly limited investigation, we collected the (implied) forecasts of five models–FiveThirtyEight, Tennis Abstract, Riles, the official ATP rankings, and the Pinnacle betting market–for the 2016 US Open. The first three models are based on Elo. For inferring forecasts from the ATP rankings, we use a specific formula1, and for Pinnacle, which is one of the biggest tennis bookmakers, we calculate the implied probabilities based on the offered odds (minus the overround)2.
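Both conversions can be sketched in a few lines. The odds normalization shown is one common way of removing the overround; the bookmaker’s exact margin structure may differ:

```python
def atp_prob(pts_a, pts_b, e=0.85):
    """Win probability for player A implied by ATP ranking points,
    using the formula from footnote 1 with e = 0.85."""
    return pts_a ** e / (pts_a ** e + pts_b ** e)

def implied_probs(odds_a, odds_b):
    """Implied probabilities from two-way decimal odds, with the
    overround removed by proportional normalization."""
    raw_a, raw_b = 1 / odds_a, 1 / odds_b
    book = raw_a + raw_b                 # > 1: the bookmaker's margin
    return raw_a / book, raw_b / book
```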

Next, we simply compare forecasts with reality for each model, asking: if player A was predicted to be the winner ($latex P(a) > 0.5$), did he really win the match? Doing this for each match and each model (ignoring retirements and walkovers), we come up with the following results.

Model		% correct
Pinnacle	76.92%
538		75.21%
TA		74.36%
ATP		72.65%
Riles		70.09%

The table shows what percentage of each model’s predictions turned out to be right. The betting model (based on Pinnacle’s odds) comes out on top, followed by the Elo models of FiveThirtyEight and Tennis Abstract. Interestingly, the Elo model of Riles is outperformed by the predictions inferred from the ATP rankings. Since there are several parameters that can be used to tweak an Elo model, Riles may still have some room left for improvement.

However, just looking at the percentage of correctly called matches does not tell the whole story. There are more granular metrics for investigating the performance of a prediction model. Calibration captures the ability of a model to provide forecast probabilities that are close to the true probabilities: in an ideal model, 70% forecasts come true in exactly 70% of cases. Resolution measures how much the forecasts differ from the overall average. The rationale is that simply issuing the overall average outcome rate as every forecast yields a reasonably well-calibrated set of predictions, but it is far less useful than a method that achieves the same calibration while taking the circumstances of each match into account. In other words, the more extreme (and still correct) the forecasts are, the better.
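These quantities are tied together by the Murphy decomposition of the Brier score. Here is a minimal sketch; the binning is illustrative, and the decomposition is exact only when forecasts within a bin coincide:

```python
def brier_decomposition(forecasts, outcomes, n_bins=5):
    """Brier score with its Murphy decomposition:
    Brier = reliability (calibration error; lower is better)
          - resolution (higher is better)
          + uncertainty (a property of the outcomes alone)."""
    n = len(forecasts)
    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n
    base = sum(outcomes) / n             # overall rate of positive outcomes
    bins = {}
    for f, o in zip(forecasts, outcomes):
        b = min(int(f * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((f, o))
    rel = res = 0.0
    for pairs in bins.values():
        w = len(pairs) / n               # fraction of forecasts in this bin
        f_mean = sum(f for f, _ in pairs) / len(pairs)
        o_mean = sum(o for _, o in pairs) / len(pairs)
        rel += w * (f_mean - o_mean) ** 2
        res += w * (o_mean - base) ** 2
    unc = base * (1 - base)
    return brier, rel, res, unc
```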

In the following table we categorize the set of predictions into bins of different probabilities and show how many percent of the predictions were correct per bin. This also enables us to calculate Calibration and Resolution measures for each model.

Model    50-59%  60-69%  70-79%  80-89%  90-100% Cal  Res   Brier
538      53%     61%     85%     80%     91%     .003 .082  .171
TA       56%     75%     78%     74%     90%     .003 .072  .182
Riles    56%     86%     81%     63%     67%     .017 .056  .211
ATP      50%     73%     77%     84%     100%    .003 .068  .185
Pinnacle 52%     91%     71%     77%     95%     .015 .093  .172

As we can see, the predictions are not always perfectly in line with what the corresponding bin would suggest. Some of these deviations–for instance, the fact that only 67% of the Riles model’s 90-100% forecasts were correct–can be explained by small sample sizes (only three forecasts in that case). However, there are two interesting cases where the sample size is larger: both the Riles and Pinnacle models appear to be strongly underconfident (statistically significantly so) with their 60-69% predictions. In other words, these probabilities should have been higher, because, in reality, these forecasts came true 86% and 91% of the time.3 For the betting aficionados, the fact that Pinnacle underestimates the favorites here may be really interesting, because it could reveal some value, as punters would say. For the Riles model, this could be a starting point for tweaking.

The last three columns show Calibration (the lower the better), Resolution (the higher the better), and the Brier score (the lower the better). The Brier score combines Calibration and Resolution (and the uncertainty of the outcomes) into a single score measuring the accuracy of predictions. The models of FiveThirtyEight and Pinnacle perform essentially equally well on this subset of the data. Then there is a slight gap until the Tennis Abstract model and the ATP ranking model, which come in third and fourth, respectively. The Riles model performs worst in terms of both Calibration and Resolution, and hence ranks fifth in this analysis.

To conclude, I would like to show a common visual representation of a set of predictions. The reliability diagram compares the observed frequency with which forecasts came true against the forecast probability (similar to the above table).

The closer one of the colored lines is to the black line, the more reliable the forecasts are. If a forecast line is above the black line, the forecasts are underconfident; if it is below, they are overconfident. Given that we only investigated one tournament and therefore had to work with a small sample (117 predictions), the big swings in the graph are to be expected. Still, we can see that the model based on ATP rankings does a good job of avoiding overconfidence, even though it is known to be outperformed by Elo in terms of prediction accuracy.

To sum up, this analysis shows how different predictive models for tennis can be compared with one another in a meaningful way. I hope it also highlights some of the areas where each model does well and where it falls short. Obviously, this investigation could go into much more detail, for example by comparing how well the models do for different kinds of players (e.g., by ranking), on different surfaces, and so on. I will save that for later. For now, I’ll try to get my sleeping patterns accustomed to the schedule of play for the Australian Open, and I hope you can do the same.

Peter Wetz is a computer scientist interested in racket sports and data analytics based in Vienna, Austria.

Footnotes

1. $latex P(a) = a^e / (a^e + b^e) $ where $latex a $ are player A’s ranking points, $latex b $ are player B’s ranking points, and $latex e $ is a constant. We use $latex e = 0.85 $ for ATP men’s singles.

2. The betting market in itself is not really a model, that is, the goal of the bookmakers is simply to balance their book. This means that the odds, more or less, reflect the wisdom of the crowd, making it a very good predictor.

3. As an example, one instance, where Pinnacle was underconfident and all other models were more confident is the R32 encounter between Ivo Karlovic and Jared Donaldson. Pinnacle’s implied probability for Karlovic to win was 64%. The other models (except the also underconfident Riles model) gave 72% (ATP ranking), 75% (FiveThirtyEight), and 82% (Tennis Abstract). Turns out, Karlovic won in straight sets. One factor at play here might be that these were the US Open where more US citizens are likely to be confident about the US player Jared Donaldson and hence place a bet on him. As a consequence, to balance the book, Pinnacle will lower the odds on Donaldson, which results in higher odds (and a lower implied probability) for Karlovic.

The Unexpectedly Predictable IPTL

December is here, and with the tennis offseason almost five days old, it’s time to resume the annual ritual of pretending we care about exhibitions. The hit-and-giggle circuit gets underway in earnest tomorrow with the kickoff, in Japan, of the 2016 IPTL slate.

The star-studded IPTL, or International Premier Tennis League, is two years old, and uses a format similar to that of the USA’s World Team Tennis. Each match consists of five separate sets: one each of men’s singles, women’s singles, (men’s) champions’ singles, men’s doubles, and mixed doubles. Games are no-ad, each set is played to six games, and a tiebreak is played at 5-5. At the end of all those sets, if both teams have the same number of games, representatives of each side’s sponsors thumb-wrestle to determine the winner. Or something like that. It doesn’t really matter.

As with any exhibition, players don’t take the competition too seriously. Elites who sit out November tournaments due to injury find themselves able to compete in December, given a sufficient appearance fee. It’s entertaining, but compared to the first eleven months of the year, it isn’t “real” tennis.

That triggers an unusual research question: How predictable are IPTL sets? If players have nothing at stake, are outcomes simply random? Or do all the participants ease off to an equivalent degree, resulting in the usual proportion of sets going the way of the favorite?

Last season, there were 29 IPTL “matches,” meaning that we have a dataset consisting of 29 sets each of men’s singles, women’s singles, and men’s doubles. (For lack of data, I won’t look at mixed doubles, and for lack of interest, forget about champion’s singles.) Except for a handful of singles specialists who played doubles, we have plenty of data on every player. Using Elo ratings, we can generate forecasts for every set based on each competitor’s level at the time.

Elo-based predictions spit out forecasts for standard best-of-three contests, so we’ll need to adjust those a bit. Single-set results are more random, so we would expect a few more upsets. For instance, when Roger Federer faced Ivo Karlovic last December, Elo gave him an 89.9% chance of winning a traditional match, and the relevant IPTL forecast is a more modest 80.3%. With these estimates, we can see how many sets went the way of the favorite and how many upsets we should have expected given the short format.
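Under the simple assumption of independent sets, a best-of-three forecast p relates to the single-set probability s by p = s²(3 − 2s). Inverting that numerically is one way to do the adjustment (a sketch, not necessarily the exact method used here), and the Federer-Karlovic figures above check out:

```python
def set_prob(p_match, tol=1e-9):
    """Invert p = s^2 * (3 - 2s), the chance of winning a best-of-three
    match made of independent sets, via bisection (p is increasing in s)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        s = (lo + hi) / 2
        if s * s * (3 - 2 * s) < p_match:
            lo = s
        else:
            hi = s
    return (lo + hi) / 2

# Federer's 89.9% best-of-three forecast vs Karlovic implies roughly
# an 80% chance of winning any single set, matching the 80.3% above.
s = set_prob(0.899)
```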

Let’s start with men’s singles. Karlovic beat Federer, and Nick Kyrgios lost a set to Ivan Dodig, but in general, decisions went the direction we would expect. Of the 29 sets, favorites won 18, or 62.1%. The Elo single-set forecasts imply that the favorites should have won 64.2%, or 18.6 sets. So far, so predictable: If IPTL were a regular-season event, its results wouldn’t be statistically out of place.

The results are similar for women’s singles. The forecasts show the women’s field to be more lopsided, due mostly to the presence of Serena Williams and Maria Sharapova. Elo expected that the favorites would win 20.4, or 70.4% of the 29 sets. In fact, the favorites won 21 of 29.

The men’s doubles results are more complex, but they nonetheless provide further evidence that IPTL results are predictable. Elo implied that most of the men’s doubles matches were close: Only one match (Kei Nishikori and Pierre-Hugues Herbert against Gael Monfils and Rohan Bopanna) had a forecast above 62%, and overall, the system expected only 16.4 victories for the favorites, or 56.4%. In fact, the Elo-favored teams won 19, or 65.5% of the 29 sets, beating their forecast by a wider margin than the singles favorites did.

The difference of less than three wins in a small sample could easily just be noise, but even so, a couple of explanations spring to mind. First, almost every team had at least one doubles specialist, and those guys are accustomed to the rapid-fire no-ad format. Second, the higher-than-usual number of non-specialists–such as Federer, Nishikori, and Monfils–means that the player ratings may not be as reliable as they are for specialists, or for singles. It might be the case that Nishikori is a better doubles player than Monfils, but because both usually stick to singles, no rating system can capture the difference in abilities very accurately.

Here is a summary of all these results:

Competition      Sets  Fave W  Fave W%  Elo Forecast%  
Men's Singles      29      18    62.1%          64.2%  
Women's Singles    29      21    72.4%          70.4%  
ALL SINGLES        58      39    67.3%          67.3%  
                                                       
Men's Doubles      29      19    65.5%          56.4%  
ALL SETS           87      58    66.7%          63.7%

Taken together, last season’s evidence shows that IPTL contests tend to go the way of the favorites. In fact, when we account for the differences in format, favorites win more often than we’d expect. That’s the surprising bit. The conventional wisdom suggests that the elites became champions thanks to their prowess at high-pressure moments; many dozens of pros could reach the top if they were only stronger mentally. In exhos, the mental game is largely taken out of the picture, yet in this case, the elites are still winning.

No matter how often the favorites win, these matches are still meaningless, and I’m not about to include them in the next round of player ratings. However, it’s a mistake to disregard exhibitions entirely. By offering a contrast to the high-pressure tournaments of the regular season, they may offer us perspectives we can’t get anywhere else.

Forecasting Davis Cup Doubles

One of the most enjoyable aspects of Davis Cup is the spotlight it shines on doubles. At ATP events, doubles matches are typically relegated to poorly-attended side courts. In Davis Cup, doubles gets a day of its own, and crowds turn out in force. Even better, the importance of Davis Cup inspires many players who normally skip doubles to participate.

Because singles specialists are more likely to play doubles, and because most Davis Cup doubles teams are not regular pairings, forecasting these matches is particularly difficult. In the past, I haven’t even tried. But now that we have D-Lo–Elo ratings for doubles–it’s a more manageable task.

To my surprise, D-Lo is even more effective for Davis Cup than it is for regular-season tour-level matches. Since 2003, D-Lo has correctly predicted the outcome of about 65% of tour-level doubles matches. For Davis Cup World Group and World Group Play-Off matches in that time frame, D-Lo is right 70% of the time. To put it another way, this is more evidence that Davis Cup is about the chalk.

What’s particularly odd about that result is that D-Lo itself isn’t that confident in its Davis Cup forecasts. For ATP events, D-Lo forecasts are well-calibrated, meaning that if you look at 100 matches where the favorite is given a 60% chance of winning, the favorite will win about 60 times. For the Davis Cup forecasts, D-Lo thinks the favorite should win about 60% of the time, but the higher-rated team ends up winning 70 matches out of 100.

Davis Cup’s best-of-five format is responsible for part of that discrepancy. In a typical ATP doubles match, the no-ad scoring and third-set tiebreak introduce more luck into the mix, making upsets more likely. A matchup that would result in a 60% forecast in the no-ad, super-tiebreak format translates to a 64.5% forecast in the best-of-five format. That accounts for about half the difference: Davis Cup results are less likely to be influenced by luck.
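As a rough illustration of the format conversion (this sketch uses a plain independent-sets model; D-Lo’s actual adjustment also accounts for the extra randomness of no-ad, super-tiebreak scoring, which is why the figure quoted above is 64.5% rather than the ~62% this simpler version gives):

```python
def bo3_to_bo5(p3, tol=1e-9):
    """Translate a best-of-three match forecast into a best-of-five one,
    via the implied set-win probability (independent-sets assumption)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                 # invert p3 = s^2 * (3 - 2s)
        s = (lo + hi) / 2
        if s * s * (3 - 2 * s) < p3:
            lo = s
        else:
            hi = s
    q = 1 - s
    return s ** 3 * (1 + 3 * q + 6 * q * q)   # win 3-0, 3-1, or 3-2
```

The longer the format, the more a fixed skill edge compounds, which is why best-of-five forecasts are always more extreme than best-of-three ones.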

The other half may be due to the importance of the event. For many players, regular-season doubles matches are a distant second priority to singles, so they may not play at a consistent level from one match to the next. In Davis Cup, however, it’s a rare competitor who doesn’t give the doubles rubber 100% of their effort. Thus, we appear to have quite a few matches in which D-Lo picks the winner, but since it uses primarily tour-level results, it doesn’t realize how heavily the winner should have been favored.

Incidentally, home-court advantage doesn’t seem to play a big role in doubles outcomes. The hosting side has won 52.6% of doubles matches, an edge which could have as much to do with hosts’ ability to choose the surface as it does with screaming crowds and home cooking. This isn’t a factor that affects D-Lo forecasts, as the system’s predictions are as accurate when it picks the away side as when it picks the home side.

Forecasting Argentina-Croatia doubles

Here are the D-Lo ratings for the eight nominated players this weekend. The asterisks indicate those players who are currently slated to contest tomorrow’s doubles rubber:

Player                 Side  D-Lo     
Juan Martin del Potro  ARG   1759     
Leonardo Mayer         ARG   1593  *  
Federico Delbonis      ARG   1540     
Guido Pella            ARG   1454  *  
                                      
Ivan Dodig             CRO   1856  *  
Marin Cilic            CRO   1677     
Ivo Karlovic           CRO   1580     
Franco Skugor          CRO   1569  *

As it stands now, Croatia has a sizable advantage. Based on the D-Lo ratings of the currently scheduled doubles teams, the home side has a 189-point edge, which converts to a 74.8% probability of winning. But remember, that’s the chance of winning a no-ad, super-tiebreak match, with all the luck that entails. In best-of-five, that translates to a whopping 83.7% chance of winning.
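The rating-gap-to-probability step is the standard logistic Elo curve, and the margins in this tie check out: 189 points converts to about 74.8%, and a 243-point gap to about 80.2%, before any best-of-five adjustment:

```python
def elo_win_prob(diff):
    """Win probability for the higher-rated side, given the rating gap,
    under the standard logistic Elo curve."""
    return 1 / (1 + 10 ** (-diff / 400))
```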

Making matters worse for Argentina, it’s likely that Croatia could improve their side. Argentina could increase their odds of winning the doubles rubber by playing Juan Martin del Potro, but given Delpo’s shaky physical health, it’s unlikely he’ll play all three days. Marin Cilic, on the other hand, could very well play as much as possible. A Cilic-Ivan Dodig pairing would have a 243-point advantage over Leonardo Mayer and Guido Pella, which translates to an 89% chance of winning a best-of-five match. Even Mayer’s Davis Cup heroics are unlikely to overcome a challenge of that magnitude.

Given the likelihood that Pella will sit on the bench for every meaningful singles match, it’s easy to wonder if there is a better option. Sure enough, in Horacio Zeballos, Argentina has a quality doubles player sitting at home. The two-time Grand Slam doubles semifinalist has a current D-Lo rating of 1758, almost identical to del Potro’s. Paired with Mayer, Zeballos would bring Argentina’s chances of upsetting a Dodig-Franco Skugor team to 43%. Zeballos-Mayer would also have a 32% chance of defeating Dodig-Cilic.

A full Argentina-Croatia forecast

With the doubles rubber sorted, let’s see who is likely to win the 2016 Davis Cup. Here are the Elo- and D-Lo-based forecasts for each currently scheduled match, shown from the perspective of Croatia:

Rubber                      Forecast (CRO)  
Cilic v Delbonis                     90.8%  
Karlovic v del Potro                 15.8%  
Dodig/Skugor v Mayer/Pella           83.7%  
Cilic v del Potro                    36.3%  
Karlovic v Delbonis                  75.8%

Elo still believes Delpo is an elite-level player, which is why it makes him the favorite in the pivotal fourth rubber against Cilic. The system is less positive about Federico Delbonis, whom it ranks 68th in the world, against his #41 spot on the ATP computer.

These match-by-match forecasts imply a 74.2% probability that Croatia will win the tie. That’s more optimistic than the betting market, which, a few hours before play begins, gives Croatia about a 65% chance.
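The tie-level number can be reproduced from the five rubber forecasts above, assuming the rubbers are independent and the lineups stay fixed (counting “at least three of five” is unaffected by dead rubbers):

```python
def tie_prob(rubber_probs):
    """P(winning at least 3 of 5 rubbers): dynamic programming over
    the number of rubbers won so far."""
    dist = [1.0]                          # dist[k] = P(k wins so far)
    for p in rubber_probs:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k + 1] += prob * p        # win this rubber
            new[k] += prob * (1 - p)      # lose this rubber
        dist = new
    return sum(dist[3:])

# The five Croatia forecasts from the table above give roughly 0.744:
croatia = tie_prob([0.908, 0.158, 0.837, 0.363, 0.758])
```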

However, most of the tweaks we could make would move the needle further toward a Croatia victory. Delpo’s body may not allow him to play two singles matches at full strength, and the gap in singles skill between him and Mayer is huge. Croatia could improve their doubles chances if Cilic plays. And if there is a home-court or surface advantage, it would probably work against the South Americans.

Even more likely than a Croatian victory is a 1-1 split of the first two matches. If that happens, everything will hang in the balance tomorrow, when the world tunes in to watch a doubles match.

Why Novak Djokovic is Still Number One

Italian translation at settesei.it

Two weeks ago, Andy Murray took over the ATP #1 ranking from Novak Djokovic. Yesterday, he defeated Djokovic in their first meeting since June, securing his place at the top of the year-end ranking table. Murray has been outstanding in the second half of this season, winning all but three of his matches since the Roland Garros final, and he capped the year in style, beating four top-five players to claim the title at the World Tour Finals.

Despite all that, Murray is not the best player in the world. That title still belongs to Djokovic. Since June, Murray has closed the gap, establishing himself as part of what we might call the “Big Two,” but he hasn’t quite ousted his rival. There’s no question that over this period, Murray has played better–that sort of thing is occasionally debatable, but this season it’s just historical fact–but identifying the best player implies something more predictive, and it’s much more difficult to determine by simply looking over a list of recent results.

The ATP rankings generally do a good job of telling us which players are better than others. But the official system has two major problems: It ignores opponent quality, and it artificially limits its scope to the last 52 weeks. Pundits and fans tend to have different problems: They often give too much credit to opponent quality (“He beat Djokovic, so now he’s number one!”) and exhibit an even more extreme recency bias (“He’s looked unbeatable this week!”).

Two systems that avoid these issues–Elo and Jrank–both place Djokovic comfortably ahead of Murray. These algorithms handle the details of recent matches and opponent quality differently from each other, but what they share in common is more important: They consider opponent quality and they don’t use an arbitrary time cutoff like the ATP ranking system does.

Here’s how the three methods would forecast a Djokovic-Murray match, were it held today:

  • ATP: Murray favored, 51.6% chance of winning
  • Elo: Djokovic favored, 61.6% chance of winning
  • Jrank: Djokovic favored, 57.0% chance of winning

Betting markets favored Djokovic by a margin of slightly more than 60/40 yesterday, though bettors probably gave him some of that edge because they thought Murray would be fatigued after his marathon match on Saturday.

As I wrote last week, Elo doesn’t deny that Murray has had a tremendous half-season. Instead, it gives him less credit than the official algorithm does for victories over lesser opponents (such as John Isner in the Paris Masters final), and it recognizes that he started his current run of form at an enormous disadvantage. With his title in London, Murray reached a new peak Elo rating, but it still isn’t enough to overtake Djokovic.

Even though Elo still prefers Novak by a healthy margin, it reflects how much the situation at the top of the ranking list has changed. At the beginning of 2016, Elo gave Djokovic a 76.5% chance of winning a head-to-head against Murray, and that probability rose as high as 81% in April. It fell below 70% after the Olympics, and the gap is now the smallest it has been since February 2011.

Last week illustrates how difficult it will be for Murray to take over the #1 Elo spot. The pre-tournament Elo difference of 91 points between the two players has shrunk by only 8%, to 84 points. Murray’s win yesterday was worth a bit more than a measly seven points, but Djokovic had several opportunities to nudge his rating upwards in his first four matches as well. Despite some of Novak’s head-scratching losses this fall, he still wins most of his matches–some of them against very good players–slowing the decline of his Elo rating.
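For reference, a single result moves a standard Elo rating by a constant k times the forecast miss. The k below is an illustrative assumption (not the actual constant behind these ratings), chosen so that an underdog trailing by 91 points gains a bit more than seven points for a win, in line with the figure above:

```python
def elo_gain(r_winner, r_loser, k=12):
    """Rating points the winner gains: k * (1 - expected score).
    k = 12 is an illustrative assumption, not the system's actual constant."""
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    return k * (1 - expected)

gain = elo_gain(1900, 1991)   # underdog by 91 points beats the favorite
```

Because the favorite’s expected score is higher, routine wins move a rating only slightly, which is why large gaps take many months of results to close.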

Of course, Elo is just a measuring stick–like any ranking system, it doesn’t tell us what’s really happening on court. It’s possible that Murray has made a significant (and semi-permanent) leap forward or that Djokovic has taken a major step back. On the other hand, streaks happen even without such leaps, and they always end. The smart money is usually on small, gradual changes to the status quo, and Elo gives us a way to measure those changes.

For Elo to rate Murray ahead of Djokovic, it will probably require several more months of these gradual changes. The only faster alternative is for Djokovic to start losing more matches to the likes of Jiri Vesely and Sam Querrey. When faced with dramatic evidence, Elo makes more dramatic changes. While Djokovic has occasionally provided that evidence this season, he has usually offered enough proof–like four wins at the World Tour Finals–to comfortably maintain his position at the top.

Factchecking the History of the ATP Number One With Elo

Italian translation at settesei.it

As I wrote at The Economist this week, Andy Murray might sit atop the ATP rankings, but he probably isn’t the best player in tennis right now. That honor still belongs to Novak Djokovic, who comes in higher on the Elo ranking list, which uses an algorithm that is more predictive of match outcomes than the ATP table.

This isn’t the first time Elo has disagreed with the official rankings over the name at the top. Of the 26 men to have reached the ATP number one ranking, only 18 also became number one on the Elo list. A 19th player, Guillermo Coria, was briefly Elo #1 despite never achieving the same feat on the ATP rankings.

Four of the remaining eight players–Murray, Patrick Rafter, Marcelo Rios, and John Newcombe–climbed as high as #2 in the Elo rankings, while the last four–Thomas Muster, Carlos Moya, Marat Safin, and Yevgeny Kafelnikov–only got as high as #3. Moya and Kafelnikov are extreme cases of the rankings mismatch, as neither player spent even a single full season inside the Elo top five.

By any measure, though, Murray has spent a lot of time close to the top spot. What makes his current ascent to #1 so odd is that, in the past, Elo rated him much closer to the top than it does now. Despite his outstanding play over the last several months, there is still a 100-point Elo gap between him and Djokovic. That’s a lot of space: Most of the field at the WTA Finals in Singapore this year was within a little more than a 100-point range.

January 2010 was the Brit’s best shot. At the end of 2009, Murray, Djokovic, and Roger Federer were tightly packed at the top of the Elo leaderboard. In December, Murray was #3, but he trailed Fed–and the #1 position–by only 25 points. In January, Novak took over the top spot, and Murray closed to within 16 points–a small enough margin that one big upset could make the difference. Altogether, Murray has spent 63 weeks within 100 points of the Elo top spot, none of those since August 2013.

For most of the intervening three-plus years, Djokovic has been steadily setting himself apart from the pack. He reached his career Elo peak in April of this season, opening up a lead of almost 200 points over Federer, who was then #2, and 250 points over Murray. Since Roland Garros, Murray has closed the gap somewhat, but his lack of opportunities against highly-rated players has slowed his climb.

If Murray defeats Djokovic in the final this week in London, it will make the debate more interesting, not to mention secure the year-end ATP #1 ranking for the Brit. But it won’t affect the Elo standings. When two players have such lengthy track records, one match doesn’t come close to eliminating a 100-point gap. Novak will end the season as Elo #1, and he is well-positioned to maintain that position well into 2017.

Forecasting the 2016 ATP World Tour Finals

Italian translation at settesei.it

Andy Murray is the #1 seed this week in London, but as I wrote for The Economist, Novak Djokovic likely remains the best player in the world. According to my Elo ratings, he would have a 63% chance of winning a head-to-head match between the two. And with the added benefit of an easier round-robin draw, the math heavily favors Djokovic to win the tournament.
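That 63% figure falls directly out of the standard logistic Elo curve, under which a gap of roughly 90 points translates to about a 63% win probability. A minimal sketch (the ratings below are illustrative placeholders, not the actual Elo values):

```python
def elo_win_prob(rating_a, rating_b):
    """Standard logistic Elo expectation: probability that A beats B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 92-point gap gives roughly a 63% win probability, consistent
# with the Djokovic-Murray figure above. (Illustrative ratings.)
p = elo_win_prob(2350, 2258)  # ~0.63
```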

Here are the results of a Monte Carlo simulation of the draw:

Player        SF      F      W  
Djokovic   95.3%  73.9%  54.6%  
Murray     86.3%  58.3%  29.7%  
Nishikori  60.4%  24.9%   7.8%  
Raonic     50.9%  16.3%   3.3%  
Wawrinka   29.4%   7.8%   1.6%  
Monfils    33.2%   8.7%   1.4%  
Cilic      23.9%   5.8%   1.1%  
Thiem      20.7%   4.1%   0.5%
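Forecasts like these come from simulating the event many times over. Here is a stripped-down Monte Carlo sketch of the round-robin stage, using the same logistic win-probability formula; the ratings are placeholders rather than the real 2016 Elo values, and ties are broken at random, a simplification of the actual ATP tie-break rules:

```python
import random
from itertools import combinations

def win_prob(ra, rb):
    """Logistic Elo expectation: probability the first player wins."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def simulate_group(ratings, trials=20000, seed=42):
    """Estimate each player's chance of finishing top two in a
    four-player round robin. Ties broken randomly (simplified)."""
    players = list(ratings)
    advance = {p: 0 for p in players}
    rng = random.Random(seed)
    for _ in range(trials):
        wins = {p: 0 for p in players}
        for a, b in combinations(players, 2):
            if rng.random() < win_prob(ratings[a], ratings[b]):
                wins[a] += 1
            else:
                wins[b] += 1
        order = sorted(players, key=lambda p: (wins[p], rng.random()),
                       reverse=True)
        for p in order[:2]:
            advance[p] += 1
    return {p: advance[p] / trials for p in players}

# Placeholder ratings for Djokovic's group, not the actual values:
group = {"Djokovic": 2400, "Raonic": 2230, "Monfils": 2180, "Thiem": 2150}
probs = simulate_group(group)
```

The full tournament forecast just extends this by drawing the semifinal and final matchups from each simulated group and tallying title counts the same way.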

I don’t think I’ve ever seen a player favored so heavily to progress out of the group stage. Murray’s 86% chance of doing so is quite high in itself; Novak’s 95% is otherworldly. His head-to-heads against the other players in his group are backed up by major differences in Elo points–Dominic Thiem is a lowly 15th on the Elo list, given only a 7.4% chance of beating the Serb.

If Milos Raonic is unable to compete, Djokovic’s chances climb even higher. Here are the probabilities if David Goffin takes Raonic’s place in the bracket:

Player        SF      F      W  
Djokovic   96.8%  75.2%  55.4%  
Murray     86.2%  60.7%  30.6%  
Nishikori  60.7%  26.3%   8.1%  
Monfils    47.7%  12.4%   1.8%  
Wawrinka   29.3%   8.5%   1.7%  
Cilic      23.8%   6.2%   1.1%  
Thiem      29.5%   5.8%   0.7%  
Goffin     26.0%   4.9%   0.5%

The luck of the draw was on Novak’s side. I ran another simulation with Djokovic and Murray swapping groups. Here, Djokovic is still heavily favored to win the tournament, but Murray’s semifinal chances get a sizable boost:

Player        SF      F      W  
Djokovic   92.8%  75.1%  54.9%  
Murray     90.9%  58.1%  29.8%  
Nishikori  58.4%  26.9%   7.5%  
Raonic     52.3%  14.3%   3.3%  
Wawrinka   26.9%   8.4%   1.6%  
Monfils    35.3%   7.5%   1.4%  
Cilic      21.9%   6.2%   1.0%  
Thiem      21.6%   3.4%   0.5%

Elo rates Djokovic so highly that he is favored no matter what the draw. But the draw certainly helped.

Doubles!

I’ve finally put together a sufficient doubles dataset to generate Elo ratings and tournament forecasts for ATP doubles. While I’m not quite ready to go into detail, I can say that, by using the Elo algorithm and rating players individually, the resulting forecasts outperform the ATP rankings about as much as singles Elo ratings do.
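I'm not ready to publish the details, but the general shape of an individual-rating approach to doubles can be sketched as follows. The team-rating rule (a simple average) and the K-factor are assumptions for illustration only, not necessarily what my system uses:

```python
def team_elo(player_a, player_b):
    """One simple way to rate a doubles team: average the two
    individual ratings. (An assumption for illustration.)"""
    return (player_a + player_b) / 2.0

def win_prob(team1, team2):
    return 1.0 / (1.0 + 10 ** ((team2 - team1) / 400.0))

def update(ratings, winners, losers, k=32):
    """After a match, apply the same Elo adjustment to both
    members of each team."""
    w = team_elo(ratings[winners[0]], ratings[winners[1]])
    l = team_elo(ratings[losers[0]], ratings[losers[1]])
    delta = k * (1.0 - win_prob(w, l))
    for p in winners:
        ratings[p] += delta
    for p in losers:
        ratings[p] -= delta
```

Rating players individually, rather than teams, is what makes the system usable when partnerships split up and re-form, which happens constantly on the doubles tour.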

Here is the forecast for the doubles event at the World Tour Finals:

Team               SF      F      W  
Herbert/Mahut   76.4%  49.5%  32.1%  
Bryan/Bryan     68.7%  36.8%  19.9%  
Kontinen/Peers  55.7%  29.1%  13.8%  
Dodig/Melo      58.4%  28.1%  13.2%  
Murray/Soares   48.3%  20.8%   8.6%  
Lopez/Lopez     37.7%  16.4%   6.2%  
Klaasen/Ram     30.2%  11.9%   4.0%  
Huey/Mirnyi     24.6%   7.3%   2.2%

This distribution is more like what round-robin forecasts usually look like, without a massive gap between the top of the field and the rest. Pierre-Hugues Herbert and Nicolas Mahut are the top rated team, followed closely by Bob Bryan and Mike Bryan. Max Mirnyi was, at his peak, one of the highest Elo-rated doubles players, but his pairing with Treat Huey is the weakest of the bunch.

The men’s doubles bracket has some legendary names, along with some players–like Herbert and Henri Kontinen–who may develop into all-time greats, but it has no competitors who loom over the rest of the field like Murray and Djokovic do in singles.

How Elo Solves the Olympics Ranking Points Conundrum

Italian translation at settesei.it

Last week’s Olympic tennis tournament had superstars, it had drama, and it had tears, but it didn’t have ranking points. Surprise medalists Monica Puig and Juan Martin del Potro scored huge triumphs for themselves and their countries, yet they still languish at 35th and 141st in their respective tours’ rankings.

The official ATP and WTA rankings have always represented a collection of compromises, as they try to accomplish dual goals of rewarding certain behaviors (like showing up for high-profile events) and identifying the best players for entry in upcoming tournaments. Stripping the Olympics of ranking points altogether was an even weirder compromise than usual. Four years ago in London, some points were awarded and almost all the top players on both tours showed up, even though many of them could’ve won more points playing elsewhere.

For most players, the chance at Olympic gold was enough. The level of competition was quite high, so while the ATP and WTA tours treat the tournament in Rio as a mere exhibition, those of us who want to measure player ability and make forecasts must factor Olympics results into our calculations.

Elo, a rating system originally designed for chess that I’ve been using for tennis for the past year, is an excellent tool to use to integrate Rio results with the rest of this season’s wins and losses. Broadly speaking, it awards points to match winners and subtracts points from losers. Beating a top player is worth many more points than beating a lower-rated one. There is no penalty for not playing–for example, Stan Wawrinka‘s and Simona Halep‘s ratings are unchanged from a week ago.
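The mechanics described above fit in a few lines of code. This is a generic sketch of the Elo update; the K-factor of 32 is a common chess default, not necessarily the value I use for tennis:

```python
def expected(ra, rb):
    """Probability that the first player wins, per logistic Elo."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def update(winner, loser, k=32):
    """Return new (winner, loser) ratings after one match.
    Upsets yield big swings; expected results barely move."""
    gain = k * (1.0 - expected(winner, loser))
    return winner + gain, loser - gain

# A 400-point underdog who wins gains far more (~29 points)
# than a 400-point favorite who wins (~3 points):
new_w, new_l = update(1900, 2300)
```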

Unlike the ATP and WTA ranking systems, which award points based on the level of tournament and round, Elo is context-neutral. Del Potro’s Elo rating improved quite a bit thanks to his first-round upset of Novak Djokovic–the same amount it would have increased if he had beaten Djokovic in, say, the Toronto final.

Many fans object to this, on the reasonable assumption that context matters. It certainly seems like the Wimbledon final should count for more than, say, a Monte Carlo quarterfinal, even if the same player defeats the same opponent in both matches.

However, results matter for ranking systems, too. A good rating system will do two things: predict winners correctly more often than other systems, and give more accurate degrees of confidence for those predictions. (For example, in a sample of 100 matches in which the system gives one player a 70% chance of winning, the favorite should win 70 times.) Elo, with its ignorance of context, predicts more winners and gives more accurate forecast certainties than any other system I’m aware of.
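The second property, calibration, is easy to check given a set of forecasts and outcomes: bin matches by forecast probability and compare the average forecast in each bin to how often the favorite actually won. A sketch, run here on synthetic data from a perfectly calibrated forecaster:

```python
import random
from collections import defaultdict

def calibration(forecasts, outcomes, bins=10):
    """For each forecast-probability bin, return (mean forecast,
    observed win rate). Well-calibrated systems match closely."""
    tally = defaultdict(lambda: [0.0, 0, 0])  # prob sum, wins, count
    for p, won in zip(forecasts, outcomes):
        b = min(int(p * bins), bins - 1)
        tally[b][0] += p
        tally[b][1] += won
        tally[b][2] += 1
    return {b: (s / n, w / n) for b, (s, w, n) in sorted(tally.items())}

# Synthetic example: outcomes drawn from the forecast probabilities
# themselves, so the table should show near-perfect agreement.
rng = random.Random(0)
probs = [rng.uniform(0.5, 0.95) for _ in range(50000)]
results = [1 if rng.random() < p else 0 for p in probs]
table = calibration(probs, results)
```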

For one thing, it wipes the floor with the official rankings. While it’s possible that tweaking Elo with context-aware details would improve its predictions even more, the gain would likely be minor compared to the massive difference between Elo’s accuracy and that of the ATP and WTA algorithms.

Relying on a context-neutral system is perfect for tennis. Instead of altering the ranking system with every change in tournament format, we can always rate players the same way, using only their wins, losses, and opponents. In the case of the Olympics, it doesn’t matter which players participate, or what anyone thinks about the overall level of play. If you defeat a trio of top players, as Puig did, your rating skyrockets. Simple as that.

Two weeks ago, Puig was ranked 49th among WTA players by Elo–several places lower than her WTA ranking of 37. After beating Garbine Muguruza, Petra Kvitova, and Angelique Kerber, her Elo ranking jumped to 22nd. While it’s tough, intuitively, to know just how much weight to assign to such an outlier of a result, an Elo rating just outside the top 20 seems much more plausible than her effectively unchanged WTA ranking in the mid-30s.

Del Potro is another interesting test case, as his injury-riddled career presents difficulties for any rating system. According to the ATP algorithm, he is still outside the top 100 in the world–a common predicament for once-elite players who don’t immediately return to winning ways.

Elo has the opposite problem with players who miss a lot of time due to injury. When a player doesn’t compete, Elo assumes his level doesn’t change. That’s clearly wrong, and it has cast a lot of doubt over del Potro’s place in the Elo rankings this season. The more matches he plays, the more his rating will reflect his current ability, but his #10 position in the pre-Olympics Elo rankings seemed overly influenced by his former greatness.

(A more sophisticated Elo-based system, Glicko, was created in part to improve ratings for competitors with few recent results. I’ve tinkered with Glicko quite a bit in hopes of more accurately measuring the current levels of players like Delpo, but so far, the system as a whole hasn’t come close to matching Elo’s accuracy while also addressing the problem of long layoffs. For what it’s worth, Glicko ranked del Potro around #16 before the Olympics.)
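Glicko's key addition is a rating deviation (RD) that grows while a player is inactive, so a returning player's rating is treated as uncertain rather than frozen. A sketch of just that piece; the constant c is a tunable parameter, set here to the illustrative value from Glickman's description of the system, under which a settled player's RD returns to the new-player maximum after about 100 rating periods away:

```python
import math

def inflate_rd(rd, periods_inactive, c=34.6, rd_max=350.0):
    """Glicko step 1: rating deviation grows with time away,
    capped at the value assigned to a brand-new player."""
    return min(math.sqrt(rd ** 2 + c ** 2 * periods_inactive), rd_max)

# A player with a settled rating (RD = 50) who sits out 20 rating
# periods comes back with much higher uncertainty:
rd_after = inflate_rd(50.0, 20)
```

That inflated RD then shrinks the weight of the old rating and lets new results move it faster, which is exactly the behavior a long-layoff case like del Potro's calls for.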

Del Potro’s success in Rio boosted him three places in the Elo rankings, up to #7. While that still owes something to the lingering influence of his pre-injury results, it’s the first time his post-injury Elo rating comes close to passing the smell test.

You can see the full current lists elsewhere on the site: here are ATP Elo ratings and WTA Elo ratings.

Any rating system is only as good as the assumptions and data that go into it. The official ATP and WTA ranking systems have long suffered from improvised assumptions and conflicting goals. When an important event like the Olympics is excluded altogether, the data is incomplete as well. Now as much as ever, Elo shines as an alternative method. In addition to a more predictive algorithm, Elo can give Rio results the weight they deserve.

New at TennisAbstract: Weekly Elo Reports

Starting today, you can find weekly Elo ranking reports on the home page of Tennis Abstract. Here are the men’s ratings, and here are the women’s ratings.

Elo is a rating system originally designed for chess, and now used across a wide range of sports. It awards points based on who you beat, not when you beat them. That’s in direct contrast to the official ATP and WTA ranking systems, which award points based on tournament and round, regardless of whether you play a qualifier or the number one player in the world.

As such, there are some notable differences between Elo-based rankings and the official lists. In addition to some rearrangement in the top ten, ATP Elo ratings place last week’s champion Roberto Bautista Agut up at #12 (compared to #17 in the official ranking) and Jack Sock at #13 (instead of #23).

The shuffling is even more dramatic on the women’s side. Belinda Bencic, still outside the top ten in the official WTA ranking, is up to #5 by Elo. After her Fed Cup heroics last weekend, Bencic is a single Elo point away from drawing equal with #4 Angelique Kerber.

These new Elo reports also show peaks for every player. That way, you can see how close each player is to his or her career best. You can also spot which players–like Bencic and Bautista Agut–are currently at their peak.

Like any rating system, Elo isn’t perfect. In this simple form, it doesn’t consider surface at all. I haven’t factored Challenger, ITF, or qualifying results into these calculations, either. Elo also doesn’t make any adjustments when a player misses considerable time due to injury; a returning player simply re-assumes his or her old rating.

That said, Elo is a more reliable way of comparing players and predicting match outcomes than the official ranking system. And now, you can check in on each player’s rating every week.

Elo-Forecasting the WTA Tour Finals in Singapore

With the field of eight divided into two round-robin groups for the WTA Tour Finals in Singapore, we can play around with some forecasts for this event. I’ve updated my Elo ratings through last week’s tournaments, and the first thing that jumps out is how different they are from the official rankings.

Here’s the Singapore field:

EloRank  Player                Elo  Group  
2        Maria Sharapova      2296    RED  
4        Simona Halep         2181    RED  
6        Garbine Muguruza     2147  WHITE  
8        Petra Kvitova        2136  WHITE  
9        Angelique Kerber     2129  WHITE  
11       Agnieszka Radwanska  2100    RED  
15       Lucie Safarova       2051  WHITE  
21       Flavia Pennetta      2004    RED

Serena Williams (#1 in just about every imaginable ranking system) chose not to play, but if Elo ruled the day, Belinda Bencic, Venus Williams, and Victoria Azarenka would be playing this week in place of Agnieszka Radwanska, Lucie Safarova, and Flavia Pennetta.

Anyway, we’ll work with what we’ve got. Maria Sharapova is, according to Elo, a huge favorite here. The ratings translate into a forecast that looks like this:

Player                  SF  Final  Title  
Maria Sharapova      83.7%  61.1%  43.6%  
Simona Halep         60.8%  35.4%  15.9%  
Garbine Muguruza     59.4%  25.7%  11.3%  
Petra Kvitova        55.2%  23.0%   9.8%  
Angelique Kerber     53.1%  21.7%   8.8%  
Agnieszka Radwanska  37.4%  17.4%   6.1%  
Lucie Safarova       32.3%   9.7%   3.1%  
Flavia Pennetta      18.1%   6.0%   1.4%

If Sharapova is really that good, the loser in today’s draw was Simona Halep. The top seed would typically benefit from having the second seed in the other group, but because Garbine Muguruza recently took over the third spot in the rankings, Pova entered the draw as a dangerous floater.

However, these ratings don’t reflect the fact that Sharapova hasn’t completed a match since Wimbledon. They don’t decline with inactivity, so Pova’s rating is the same as it was the day after she lost to Serena back in July. (My algorithm also excludes retirements, so her attempted return in Wuhan isn’t considered.)

With as little as we know about Sharapova’s health, it’s tough to know how to tweak her rating. For lack of any better ideas, I revised her Elo rating to 2132, right between Petra Kvitova and Angelique Kerber. At her best, Sharapova is better than that, but consider this a way of factoring in the substantial possibility that she’ll play much, much worse–or that she’ll get injured and her matches will be played by Carla Suarez Navarro instead. The revised forecast:

Player                  SF  Final  Title  
Simona Halep         69.9%  40.9%  24.0%  
Garbine Muguruza     59.4%  31.5%  16.5%  
Maria Sharapova      57.6%  29.5%  14.5%  
Petra Kvitova        55.6%  28.4%  14.4%  
Angelique Kerber     52.5%  26.3%  13.2%  
Agnieszka Radwanska  47.9%  22.3%   9.9%  
Lucie Safarova       32.6%  12.9%   4.9%  
Flavia Pennetta      24.7%   8.3%   2.7%

If this is a reasonably accurate estimate of Sharapova’s current ability, the Red group suddenly looks like the right place to be. Because Elo doesn’t give any particular weight to Grand Slams, it suggests that the official rankings far overestimate the current level of Safarova and Pennetta. The weakness of those two makes Halep a very likely semifinalist and also means that, in this forecast, the winner of the tournament is more likely (54% to 46%) to come from the White group.

Without Serena, and with Sharapova’s health in question, there are simply no dominant players in the field this week. If nothing else, these forecasts illustrate that we’d be foolish to take any Singapore predictions too seriously.