Quantifying individual performance in team events

Jordi Duch, Joshua S. Waitzman, Luís A. Nunes Amaral, "Quantifying the Performance of Individual Players in a Team Activity", PLoS ONE, 5(6): e10937, 2010. [Citation]

The question of how to assign individual credit within team activities is an important one that spans many endeavors, from business projects to scientific research collaborations to team sports.  In this publication the researchers present a statistical methodology, based on social network analysis, to develop metrics that capture the influence of individual players on a soccer match.  Applying the model to the 2008 European Championship produced results that corresponded well with the subjective judgments of commentators and pundits: eight of the top 20 players in the tournament according to the rating index also appeared in UEFA's all-tournament team.

——–

Luis Amaral is a native of Portugal and a professor in the Department of Chemical and Biological Engineering at Northwestern University.  He is one of the many academics who have sought to merge their love of soccer with their technical expertise, and the Footballer Rating that he has developed is the result. The rating is based on a research paper that he wrote with one of his graduate students and a research colleague at Northwestern (Jordi Duch splits his time between NW and Universitat Rovira i Virgili in Spain).  The paper was written late last year, but was accepted in time for publication during the World Cup.  You can find a collection of press clippings here.

The analysis melds concepts from social network analysis with normalized statistical scoring to develop a measure of a player's impact on the game.  The raw data come from play-by-play information that includes ball passes between players of the same team and shots toward goal.  These data are represented in flow networks (two per match, one for each competing team) in which the nodes are the players and the arcs are weighted by pass completion percentage.  There are two extra nodes that represent shots off goal and shots on goal, but I think of them as "sinks" in the flow network.
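To make that structure concrete, here is a minimal sketch (in Python, using networkx) of how such a flow network could be assembled from pass and shot counts.  The function name, argument names, and data shapes are my own illustrative assumptions, not the authors' implementation.

```python
import networkx as nx

def build_flow_network(pass_attempts, pass_completions, shots):
    """
    pass_attempts[(a, b)]    -> passes attempted from player a to player b
    pass_completions[(a, b)] -> passes completed from player a to player b
    shots[player]            -> {'on_goal': n, 'off_goal': m} shot counts
    (Argument names and data shapes are illustrative assumptions.)
    """
    G = nx.DiGraph()
    # The two non-player "sink" nodes mentioned above.
    G.add_nodes_from(["SHOTS_ON_GOAL", "SHOTS_OFF_GOAL"])

    # Arcs between players, weighted by pass completion fraction.
    for (a, b), attempted in pass_attempts.items():
        if attempted > 0:
            completed = pass_completions.get((a, b), 0)
            G.add_edge(a, b, weight=completed / attempted)

    # Arcs from shooters into the sink nodes, split by shot accuracy.
    for player, counts in shots.items():
        total = counts.get("on_goal", 0) + counts.get("off_goal", 0)
        if total > 0:
            G.add_edge(player, "SHOTS_ON_GOAL",
                       weight=counts.get("on_goal", 0) / total)
            G.add_edge(player, "SHOTS_OFF_GOAL",
                       weight=counts.get("off_goal", 0) / total)
    return G
```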

The end result of all this is a network that describes the probability that a particular path through the flow network results in a shot.  A couple of concepts from social network analysis are used: betweenness centrality, which measures how often play runs through a given node (player) on its way to a shot, and flow centrality, which measures the proportion of a team's shot-ending plays in which that node takes part.  Given a player's proportion of influence relative to the total plays in a match, and assuming a normal statistical distribution, we can relate the player's impact on the game to the norm.  This is the z-score, the Footballer Rating, that Amaral and his team have developed.
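A rough sketch of those two steps, under my reading of the paper: flow centrality as the fraction of shot-ending passing sequences a player takes part in, followed by a per-match z-score.  The sequence format and function names are assumptions made for illustration, not the authors' code.

```python
from statistics import mean, stdev

def flow_centrality(shot_paths, players):
    """shot_paths: list of passing sequences (lists of player names), each ending in a shot."""
    counts = {p: 0 for p in players}
    for path in shot_paths:
        for p in set(path):          # a player is credited once per sequence
            if p in counts:
                counts[p] += 1
    n = len(shot_paths)
    return {p: c / n for p, c in counts.items()} if n else counts

def footballer_rating(values):
    """Standardize per-player values to z-scores for a single match."""
    mu, sigma = mean(values.values()), stdev(values.values())
    if sigma == 0:
        return {p: 0.0 for p in values}  # everyone average, no spread
    return {p: (v - mu) / sigma for p, v in values.items()}
```

Averaging these per-match z-scores across a tournament is what produces the aggregate ratings discussed below.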

A rating of 0.0 represents an average performance; +1.0 or -1.0 represents a performance one standard deviation above or below the average match performance.  A performance of +/-2.0 is exceptionally high (or low, depending on the sign), and 3.0 is a performance that is almost unheard of; it might happen only a few times ever (your '10' or '0' ratings).  I've seen the rating described in some places as relating to goals scored in a match or as some sort of over/under rating, and that is NOT what it is.  It is strictly a measure of player performance relative to the norm for the match, and if you compare the average performance of each side's top two players you might get an indication of who is likely to win a match (predictions of drawn results get iffy).   But the rating as it exists right now is not a goal measure.
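If you take the normal-distribution assumption at face value, a quick back-of-the-envelope check (mine, not the paper's) shows why the scale reads the way it does:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Fraction of a standard normal distribution below z."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for z in (0.0, 1.0, 2.0, 3.0):
    print(f"rating {z:+.1f} -> better than about {normal_cdf(z):.1%} of match performances")
# rating +0.0 -> better than about 50.0% of match performances
# rating +1.0 -> better than about 84.1% of match performances
# rating +2.0 -> better than about 97.7% of match performances
# rating +3.0 -> better than about 99.9% of match performances
```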

The rating has been tested with data from the 2008 European Championship and applied to the ongoing World Cup.  Amaral and his team compiled a list of the top 20 players from Euro 2008 based on average ratings, and eight of those players also appear on UEFA's official all-tournament list.  I was curious to find out how many defenders were on both lists, and was surprised that there were two: Carlos Marchena (Spain) and José Bosingwa (Portugal).  The rating methodology appeared to be skewed toward offensive-minded players, and therefore biased against goalkeepers and strictly defensive players.  Marchena is a defensive midfielder who makes forays down the pitch, but I'm not sure about Bosingwa.  For the World Cup, the rating does seem to capture defensive players who make offensive contributions by either starting play or taking a shot toward goal.  It does not appear to capture opposition shots saved by a player, which is just as important a metric as the offensive goals influenced.

Looking at the ratings, there appears to be a minimum level of participation required to produce a rating that makes sense.  There are several players with very negative ratings (-2.5 and below really doesn't make a lot of sense) who have not played much, and top-ranked players with fewer than 200 passes over the course of the tournament.  I do find it interesting that the players at the top of the Footballer Rating are the players you would expect: Messi, Xavi, Xabi.  I'm not sure Brazilians would agree with Melo's inclusion at the top of the list, and English fans would be very surprised to see Frank Lampard ranked so highly.  The rating appears to value players who are good passers and are central to their team's play, yet Cristiano Ronaldo's statistics are the poorest of any player in the top 20, so I'm not sure how he was ranked so high.
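On the participation point, a simple fix would be to require a minimum amount of play before averaging ratings across a tournament.  Something like the sketch below; the 200-pass threshold just echoes the figure mentioned above, and the data shapes are my own assumptions.

```python
def tournament_ratings(match_ratings, pass_counts, min_passes=200):
    """
    match_ratings[player] -> list of per-match z-scores
    pass_counts[player]   -> total passes attempted over the tournament
    Returns average ratings only for players above the participation threshold.
    """
    return {
        p: sum(ratings) / len(ratings)
        for p, ratings in match_ratings.items()
        if ratings and pass_counts.get(p, 0) >= min_passes
    }
```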

As with all of these statistical measurements, the critical element is data.  The data required for the flow analysis and subsequent processing come from play-by-play information, which can be derived from textual or video sources.  Joshua Waitzman wrote software to extract the play-by-play information from the UEFA website, which raises some interesting issues around database rights (the legal protection of database design and content is much more stringent in Europe than in the USA, and I know that UEFA enforces those rights).  Perhaps they are working with UEFA and FIFA to obtain their data, which would sidestep the legal issues.  If not, both organizations know about the rating now and would be very interested in it.

The Amaral paper presents a rating system for players that is a first step toward quantitative analysis of player performance from play-by-play match data.  It's not quite an "expected goal value" measurement, but the work represents a significant step in that direction.
