Hockey Outsider
Over the weekend I spent some time tackling a project that I've wanted to do for years - applying Iain Fyffe's "Point Allocation" methodology to the NHL playoffs (from 1980 to 2017).
The purpose of the method is to allocate points among each team's players, in proportion to how much each one contributed to generating them. (Since this is the playoffs, I've decided to allocate wins; points in the standings are obviously a regular-season concept.) Iain's method (which is in turn borrowed from baseball writer Bill James's "Win Shares" system) can be read here - https://web.archive.org/web/20030705023433/http://www.puckerings.com:80/research/ptalloc.html.
The most significant departure from his approach is that I allocate defensive points based on estimated even-strength ice time. Ultimately this method ranks all players, regardless of their position or what season they played in, using a standard currency - wins.
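To make the defensive allocation concrete, here's a minimal sketch of the proportional split I'm describing. The function name, win totals, and ice-time estimates are all made up for illustration; they're not from my actual spreadsheet.

```python
# Hypothetical sketch: split a team's defensive win credit among its
# skaters in proportion to estimated even-strength ice time (minutes).
# All names and numbers below are illustrative, not real data.

def allocate_defensive_wins(defensive_wins, es_icetime_by_player):
    """Distribute defensive win credit proportionally to each skater's
    estimated even-strength ice time."""
    total_icetime = sum(es_icetime_by_player.values())
    return {
        player: defensive_wins * minutes / total_icetime
        for player, minutes in es_icetime_by_player.items()
    }

# Toy example: suppose 6.4 of a champion's 16 wins are credited to
# team defense, split across three skaters (1,200 total ES minutes).
shares = allocate_defensive_wins(6.4, {
    "Defenseman A": 450,  # 6.4 * 450/1200 = 2.4 wins
    "Defenseman B": 400,
    "Forward C": 350,
})
```

By construction the shares always sum back to the defensive win total, which is what keeps the whole-team bookkeeping exact.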
I spot-checked some of the results to make sure the system "works". For example, if you add up all of the player scores on the 2002 Detroit Red Wings, you'd get exactly 16 wins. If you add up all of the scores for every single player who's played for the Washington Capitals, you'd get exactly 116 (the number of playoff games the franchise has won - prior to 2018).
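The spot-check above can be automated. This is a hypothetical sketch of that reconciliation step, assuming the allocations are stored per team-season; the data structures and the toy Red Wings numbers are invented for the example.

```python
# Hypothetical sanity check: every team-season's player allocations
# should sum exactly to that team's actual playoff win total.
# Data structures and values are illustrative.

def team_totals_match(allocations, actual_wins, tol=1e-9):
    """allocations: {(team, year): {player: win_share}}
    actual_wins:   {(team, year): playoff wins}
    Returns the team-seasons whose totals fail to reconcile."""
    mismatches = []
    for key, players in allocations.items():
        if abs(sum(players.values()) - actual_wins[key]) > tol:
            mismatches.append(key)
    return mismatches

# Toy example: a 16-win champion whose shares reconcile exactly.
allocations = {("DET", 2002): {"Player A": 3.1,
                               "Player B": 2.9,
                               "Rest of roster": 10.0}}
actual = {("DET", 2002): 16}
print(team_totals_match(allocations, actual))  # → []
```

An empty mismatch list means the bookkeeping is airtight, which is exactly the property the Capitals check (116 allocated = 116 franchise playoff wins through 2017) is verifying.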
Some might ask - isn't this the same thing as hockey-reference.com's widely-criticized "Point Shares"? Conceptually there are similarities, but the biggest issue with that system is that it fundamentally mishandles the evaluation of defensive play. This system is far from perfect (see my list of self-criticisms), but it has a much better starting point. (I also have consistent data going back to 1980 - that website's results are sometimes wonky because they have to make so many simplifying assumptions for the years prior to expansion that the distribution gets flattened.)
For the most part, the results make sense. It's always tough to create a purely statistical system (since obviously the computer doesn't care about a player's name or reputation). Bill James once said something to the effect of "If a model is never surprising, it's probably not useful. If a model is consistently surprising, it's probably wrong".