Italian translation at settesei.it
When is an error unforced? Try to envision an algorithm that answers that question, and the task quickly becomes unmanageable: you’d need to take into account player position; shot velocity, angle, and spin; surface speed; and perhaps more. Many errors are obviously forced or unforced, but plenty fall into an ambiguous middle ground.
Most of the unforced error counts we see these days–via broadcasts or in post-match recaps–are counted by hand. A scorer is given some guidance, and he or she tallies each kind of error. If the human-scoring algorithm is boiled down to a single rule, it’s something like: “Would a typical pro be expected to make that shot?” Some scorers limit the number of unforced errors by always counting serve returns, or net shots, or attempted passing shots, as forced.
Of course, any attempt to sort missed shots into only two buckets is a gross oversimplification. I don’t think this is a radical viewpoint. Many tennis commentators acknowledge this when they explain that a player’s unforced error count “doesn’t tell the whole story,” or something to that effect. In the past, I’ve written about the limitations of the frequently cited winner-to-unforced-error ratio, and the similarity between unforced errors and the rightly maligned fielding errors stat in baseball.
Imagine for a moment that we have better data to work with–say, Hawkeye data that isn’t locked in silos–and we can sketch out an improved way of looking at errors.
First, instead of classifying only errors, it’s more instructive to sort potential shots into three categories: shots returned in play, errors (which we can further distinguish later on), and opponent winners. In other words: Did you make it, did you miss it, or did you fail to even get a racket on it? One man’s forced error is another man’s ball put back in play*, so we need to consider the full range of possible outcomes from each potential shot.
*especially if the first man is Bernard Tomic and the other man is Andy Murray.
The key to gaining insight from tennis statistics is increasing the amount of context available–for instance, taking a player’s stats from today and comparing them to the typical performance of a tour player, or contrasting them with how he or she played in the last similar matchup. Errors are no different.
Here’s a basic example. In the sixth game of Angelique Kerber’s match in Sydney this week against Darya Kasatkina, she hit a down-the-line forehand:
Thanks to the Match Charting Project, we have data for about 350 of Kerber’s down-the-line forehands, so we know the shot goes for a winner 25% of the time, and her opponent hits a forced error another 9% of the time. Say that a further 11% turn into unforced errors, and we have a profile for what usually happens when Kerber goes down the line: 25% winners, 20% errors, 55% put back in play. We might dig even deeper and establish that the 55% put back in play splits into 30% of points that Kerber ultimately won and 25% that she eventually lost.
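Tallying that kind of profile from charted shot data is straightforward. Here’s a minimal sketch in Python; the record layout and the “outcome” field are invented for illustration, not the Match Charting Project’s actual format:

```python
from collections import Counter

# Hypothetical shot records for one shot type, e.g. Kerber down-the-line
# forehands. The "outcome" field is an invented stand-in, not the Match
# Charting Project's actual schema.
shots = [
    {"outcome": "winner"},
    {"outcome": "forced_error"},
    {"outcome": "unforced_error"},
    {"outcome": "in_play"},
    {"outcome": "in_play"},
    # ...roughly 350 rows in the real sample
]

def outcome_profile(shots):
    """Share of each outcome when a player hits this shot type."""
    counts = Counter(s["outcome"] for s in shots)
    total = len(shots)
    return {outcome: count / total for outcome, count in counts.items()}

# With the full sample, this would yield something like:
# {"winner": 0.25, "forced_error": 0.09, "unforced_error": 0.11, "in_play": 0.55}
print(outcome_profile(shots))
```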
In this case, Kasatkina was able to get a racket on the ball, but missed the shot, resulting in what most scorers would agree was a forced error:
This single instance–Kasatkina hitting a forced error against a very effective type of offensive shot–doesn’t tell us anything on its own. Imagine, though, that we tracked several players in 100 attempts each to reply to a Kerber down-the-line forehand. We might discover that Kasatkina lets 35 of 100 go for winners, or that Simona Halep lets only 15 go for winners and gets 70 back in play, or that Anastasia Pavlyuchenkova hits an error on 30 of the 100 attempts.
My point is this: With more granular data, we can put errors in a real-life context. Instead of making a judgment about the difficulty of a certain shot (or relying on a scorer to do so), it’s feasible to let an algorithm do the work on 100 shots, telling us whether a player is getting to more balls than the average player, or making more errors than she usually does.
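To make that concrete, here’s a small sketch of the comparison, using the invented 100-reply counts from the previous paragraph plus a made-up tour-average profile:

```python
# Invented tallies of 100 replies each to the same shot type, mirroring the
# hypothetical numbers above; the tour-average profile is invented too.
replies = {
    "Kasatkina":      {"winner_against": 35, "error": 25, "in_play": 40},
    "Halep":          {"winner_against": 15, "error": 15, "in_play": 70},
    "Pavlyuchenkova": {"winner_against": 25, "error": 30, "in_play": 45},
}
tour_average = {"winner_against": 25, "error": 20, "in_play": 55}

def rates(tally):
    total = sum(tally.values())
    return {outcome: count / total for outcome, count in tally.items()}

baseline = rates(tour_average)
for player, tally in replies.items():
    # Positive numbers mean more of that outcome than the average player.
    diff = {o: r - baseline[o] for o, r in rates(tally).items()}
    print(player, {o: f"{d:+.0%}" for o, d in diff.items()})
```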
The continuum, and the future
In the example outlined above, there are several important details that I didn’t mention. In comparing Kasatkina’s error to a few hundred other down-the-line Kerber forehands, we don’t know whether the shot was harder than usual, whether it was placed more accurately in the corner, whether Kasatkina was in better position than Kerber’s typical opponent on that type of shot, or whether the surface was faster or slower. Over the course of 100 down-the-line forehands, those factors would probably even out. But in Tuesday’s match, Kerber hit only 18 of them. While a typical best-of-three match will give us a few hundred shots to work with, this level of analysis can only tell us so much about specific shots.
The ideal error-classifying algorithm of the future would do much better. It would take all of the variables I’ve mentioned (and more, undoubtedly) and, for any shot, calculate the likelihood of different outcomes. At the moment of the first image above, when the ball has just come off of Kerber’s racket, with Kasatkina on the wrong half of the baseline, we might estimate that there is a 35% chance of a winner, a 25% chance of an error, and a 40% chance that the ball is returned in play. Depending on the type of analysis we’re doing, we could calculate those numbers for the average WTA player, or for Kasatkina herself.
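As a rough illustration of the shape such an algorithm might take, here’s a sketch that fits a multinomial logistic regression to synthetic tracking features and produces outcome probabilities for a single shot. The features, thresholds, and data are all invented; a real system would train on actual tracking measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for tracking data: each row describes the moment the
# ball leaves the attacker's racket. Real Hawkeye features would be richer.
n = 5000
speed = rng.normal(110, 15, n)        # shot speed, kph
corner_dist = rng.uniform(0, 3, n)    # bounce distance from the corner, m
defender_gap = rng.uniform(0, 6, n)   # defender's distance from the ball's path, m
X = np.column_stack([speed, corner_dist, defender_gap])

# Fake labels with a plausible shape: faster, better-placed shots hit with
# the defender further out of position end the point more often.
pressure = 0.02 * speed - 0.5 * corner_dist + 0.4 * defender_gap
noisy = pressure + rng.normal(0, 1, n)
y = np.where(noisy > 3.5, 0, np.where(noisy > 2.5, 1, 2))  # 0=winner, 1=error, 2=in play

model = LogisticRegression(max_iter=1000).fit(X, y)

# The moment described above, expressed in the same invented features:
shot = np.array([[120.0, 0.5, 4.0]])
winner_p, error_p, in_play_p = model.predict_proba(shot)[0]
print(f"winner {winner_p:.0%}, error {error_p:.0%}, in play {in_play_p:.0%}")
```

Swapping in a different player’s training data is what turns the league-average estimate into a Kasatkina-specific one.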
Those estimates would allow us, in effect, to “rate” errors. In this example, the algorithm gives Kasatkina only a 40% chance of getting the ball back in play. By contrast, an average rallying shot probably has a 90% chance of ending up back in play. Instead of placing errors in buckets of “forced” and “unforced,” we could draw lines wherever we wish, perhaps separating potential shots into quintiles. We would be able to quantify whether, for instance, Andy Murray gets more of the most unreturnable shots back in play than Novak Djokovic does. Even if we have an intuition about that already, we can’t even begin to prove it until we’ve established precisely what that “unreturnable” quintile (or quartile, or whatever) consists of.
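Drawing those lines is mechanical once every shot carries a predicted probability of being returned. A minimal sketch, again on invented data:

```python
import numpy as np

# Invented per-shot estimates: the model's predicted chance that the reply
# comes back in play, plus what actually happened on each attempt.
rng = np.random.default_rng(1)
predicted_in_play = rng.uniform(0.02, 0.98, 2000)
got_it_back = rng.random(2000) < predicted_in_play

# Draw the lines wherever we wish; here, quintiles of shot difficulty.
edges = np.quantile(predicted_in_play, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(predicted_in_play, edges)  # 0 = most unreturnable

hardest = quintile == 0
print(f"'unreturnable' quintile: predicted {predicted_in_play[hardest].mean():.0%} "
      f"in play, actually returned {got_it_back[hardest].mean():.0%}")
```

Run per player, the gap between the actual and predicted return rates in that hardest quintile is the Murray-versus-Djokovic comparison in quantified form.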
This sort of analysis would be engaging even for those fans who never look at aggregate stats. Imagine if a broadcaster could point to a specific shot and say that Murray had only a 2% chance of putting it back in play. In topsy-turvy rallies, this approach could generate a win probability graph for a single point, an image that could encapsulate just how hard a player worked to come back from the brink.
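That single-point graph is simple to generate once per-shot estimates exist. A toy version, with invented numbers for a nine-shot rally:

```python
# Toy within-point win-probability track: after each shot, re-estimate the
# probability that the defending player wins the point. These numbers are
# invented; a real version would come from a shot-outcome model like the
# one sketched above.
rally = [0.50, 0.35, 0.20, 0.08, 0.02, 0.15, 0.45, 0.70, 1.00]
for shot_num, p in enumerate(rally, 1):
    print(f"shot {shot_num:2d} {p:6.1%} {'#' * round(p * 40)}")
```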
Fortunately, the technology to accomplish this is already here. Researchers with access to subsets of Hawkeye data have begun drilling down to the factors that influence things like shot selection. Playsight’s “SmartCourts” classify errors into forced and unforced in close to real time, suggesting that there is something much more sophisticated running in the background, even if its AI occasionally makes clunky mistakes. Another possible route is applying existing machine learning algorithms to large quantities of match video, letting the algorithms work out for themselves which factors best predict winners, errors, and other shot outcomes.
Someday, tennis fans will look back on the early 21st century and marvel at just how little we knew about the sport back then.