Current Player Usage Charts
Player usage charts are one of the seminal works of data visualization in hockey. Created by Rob Vollman and automated by Robb Tufts here, these charts provide a simple look at the role played by each skater on a team. For the uninitiated, the axes of the standard player usage charts show offensive zone start % and quality of competition (relative corsi). The former is designed to give a sense of how often the player is relied on for offense or defense, and the latter shows the skill level of the player’s competition.
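To make the two standard axes concrete, here is a minimal sketch of how they are typically computed. The function names and data layout are my own illustrations, not from any particular data source; QoC here is a TOI-weighted average of opponents' relative Corsi, which is one common construction.

```python
# Hedged sketch: computing the two standard player-usage-chart axes.
# Field names and numbers are illustrative, not from a real data feed.

def oz_start_pct(oz_faceoff_starts, dz_faceoff_starts):
    """Offensive zone start %: OZ faceoff shift starts as a share of
    OZ + DZ faceoff shift starts (neutral-zone starts excluded)."""
    total = oz_faceoff_starts + dz_faceoff_starts
    return 100.0 * oz_faceoff_starts / total if total else 0.0

def qoc_rel_corsi(matchups):
    """Quality of competition as a TOI-weighted average of opponents'
    relative Corsi. `matchups` is a list of
    (head_to_head_toi, opponent_rel_corsi) pairs."""
    total_toi = sum(toi for toi, _ in matchups)
    if not total_toi:
        return 0.0
    return sum(toi * rc for toi, rc in matchups) / total_toi

# Example: a player given sheltered offensive starts
print(oz_start_pct(180, 60))                      # 75.0
print(qoc_rel_corsi([(120, 1.5), (80, -0.5)]))    # 0.7
```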
I suspect people look at player usage charts for one of two reasons. First, they want to see how a coach is using all of the players he has available, e.g., “Vancouver tries to give the Sedins the best offensive opportunities and has Manny Malhotra take on the tough defensive assignments.” Second, they want to provide additional context for a player’s performance in other measures, e.g., “Manny Malhotra’s Corsi For % is only low because he starts every shift in the defensive zone against the toughest competition.”
However, recent work has led me to think that player usage charts do a limited job at addressing either task:
- Micah Blake McCurdy presented work at the RIT Hockey Analytics Conference showing that zone starts don’t really matter. We would only need to make very small adjustments to account for the role of zone starts in explaining any variations in a player’s performance. In addition, faceoffs account for less than half of all shift starts and not all faceoffs are also shift starts.
- Conor Tompkins showed on hockey-graphs.com that quality of competition does not significantly vary between players in a large sample. Over the course of a season, coaches do not have enough control to regularly shelter some players while assigning the toughest minutes to others.
Essentially, there are issues with both measures regardless of what you’re using them for:
| Measure | Team-Level View | Individual Player Evaluation |
|---|---|---|
| Zone Starts | Lots of shift starts are not faceoffs, and the coach’s choice at a faceoff is frequently restricted by who recently played | Not all faceoffs are shift starts; OZ faceoffs can mean the player is driving play and then taking the OZ faceoff on the same shift |
| QoC | Very little variation between players over the course of a season | Very little variation between players, so competition faced explains little of a player’s results |
An Alternative Look
These two items have left me reluctant to use player usage charts for evaluation. Instead, I wanted to look into an alternative view that might effectively provide context for skater usage.
In place of zone starts on the x-axis, I’m using a measure based on player allocation in different score states: TOI % leading – TOI % trailing. This is used by Dom Galamini on his popular HERO charts, and I think it adequately does the job here. It shows who a coach turns to when the team needs to come from behind or protect a lead. I also think it works nicely for player evaluation because differences in deployment by score state are easily corrected for by looking at score-adjusted figures.
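The x-axis metric can be sketched in a few lines. This is my own illustration of the computation described above: a player's TOI % in a score state is his ice time in that state as a share of the team's ice time in that state, and the metric is the leading share minus the trailing share. The numbers are invented.

```python
# Hedged sketch: x-axis metric, TOI% leading minus TOI% trailing.
# All inputs are hypothetical minute totals.

def toi_pct(player_toi, team_toi):
    """Player's share of team ice time in a given score state."""
    return 100.0 * player_toi / team_toi if team_toi else 0.0

def score_state_deployment(player_lead_toi, team_lead_toi,
                           player_trail_toi, team_trail_toi):
    """Positive: trusted to protect leads. Negative: sent out to chase games."""
    return (toi_pct(player_lead_toi, team_lead_toi)
            - toi_pct(player_trail_toi, team_trail_toi))

# A defensive specialist: heavy minutes with the lead, fewer when trailing
print(score_state_deployment(300, 1000, 150, 1000))  # 15.0
```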
In place of the y-axis, I used Quality of Teammates. Obviously, this does not even try to measure the same thing as Quality of Competition. However, it does provide useful context for how a player is being used, and it does not have the same distribution problems as QoC: the effects of teammates are observable over the course of a season. As Garret Hohl points out here, there are still times when looking at QoC is valuable, but if I had to pick one I’d take QoT.
In addition to the axes described above, the size of the bubbles represents average TOI per game and the color shows relative Corsi For %. An asterisk on a player’s name means that player was on multiple teams this season and should be compared to his teammates with caution. All of these features were developed in the original player usage charts, and I’m adapting them here. All data shown is score-adjusted, from this season, and taken from War-On-Ice as of January 13th.
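Putting the pieces together, a chart along these lines can be drawn with matplotlib: x is the score-state deployment metric, y is QoT, bubble area is scaled by TOI per game, and color encodes relative CF%. The player names, numbers, and scaling constant are all invented for illustration; this is a sketch of the encoding, not the charts shown in the post.

```python
# Hedged sketch of the chart layout. All data is invented.
import io
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

players = [
    # (name, lead-trail TOI% diff, QoT CF%, TOI/gm, rel CF%)
    ("Player A*", 8.0, 51.5, 19.2, 2.1),   # asterisk: multiple teams
    ("Player B", -6.5, 49.0, 14.8, -1.4),
    ("Player C", 1.2, 50.3, 17.5, 0.3),
]

def bubble_sizes(toi_per_game, scale=25.0):
    """Scale bubble area by average TOI per game (scale is arbitrary)."""
    return [scale * t for t in toi_per_game]

names, xs, ys, tois, rel_cf = zip(*players)
sizes = bubble_sizes(tois)

fig, ax = plt.subplots()
sc = ax.scatter(xs, ys, s=sizes, c=rel_cf, cmap="coolwarm")
for name, x, y in zip(names, xs, ys):
    ax.annotate(name, (x, y))
ax.set_xlabel("TOI% leading - TOI% trailing")
ax.set_ylabel("Quality of Teammates (CF%)")
fig.colorbar(sc, label="Relative CF%")

buf = io.BytesIO()
fig.savefig(buf, format="png")  # chart rendered to an in-memory buffer
```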
To be clear, these charts are intended as an additional tool for understanding player usage, not a complete replacement for the original charts. They are decidedly a work in progress rather than an unambiguous case for one method. The measures I use here are far from flawless, and the y-axis measure does not even capture the same attribute as the original. Which measures are worth including deserves a larger conversation; ideally, any candidate would be shown both to vary across players in a large sample and to relate meaningfully to performance.