Grandpabuzz
Registered User
As some of you know, I’ve put out some posts the past few years providing a list of ratings for players in the league as well as mock drafts based on a similar methodology. Some examples of my posts include:
Mock Draft
Player Ratings
I've continued to refine the process over the years, but what I do is essentially gather data for each player based on the statistics I think are important, on an even-strength basis only: numbers that directly affect the outcome of the game (which should be the most important for any player). I then put the numbers through a formula that spits out a value attributed to the player's performance. It is similar to PER for those familiar with basketball, and different from Corsi, as again I focus more on actual contribution to the game's end result than on shots. A more detailed explanation can be found here:
More Information
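To give a rough idea of the shape of the calculation, here's a simplified sketch in Python. The stat list, weights and scaling below are placeholders for illustration, not my actual formula:

```python
# Simplified sketch of a PER-style rating: a weighted sum of even-strength
# counting stats, normalized per 60 ES minutes and centered near 50.
# The stats and weights here are placeholders, not the real formula.

ES_WEIGHTS = {
    "goals": 8.0,       # placeholder weight
    "assists": 5.0,     # placeholder weight
    "takeaways": 1.5,   # placeholder weight
    "giveaways": -1.5,  # placeholder weight
}

def raw_rating(es_stats: dict, es_minutes: float) -> float:
    """Weighted sum of even-strength stats, normalized per 60 ES minutes."""
    per_60 = {k: v * 60.0 / es_minutes for k, v in es_stats.items()}
    contribution = sum(ES_WEIGHTS[k] * per_60.get(k, 0.0) for k in ES_WEIGHTS)
    return 50.0 + contribution  # centered so an average player lands near 50

print(raw_rating({"goals": 20, "assists": 35, "takeaways": 40, "giveaways": 30}, 900.0))
```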
Since defensemen play a different game than forwards, and goaltenders a different game than both, the number that comes out of the original formula is then adjusted so every player can be compared on a similar rating scale. This stage is not yet perfect, but it still provides a good idea of what the ultimate player ratings are. For example, this season the average forward rating is 54.2, the average defenseman rating is 56.7 and the average goalie rating is 56.0, based on even-strength minutes played.
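The simplest version of an adjustment like this shifts each position group toward a common anchor; here's a sketch with placeholder numbers (my actual adjustment is more involved):

```python
# Sketch of a position adjustment: shift each position group so its mean
# sits at a common league-wide anchor. The anchor and group means below
# are placeholders; the real adjustment is more involved than this.

TARGET_MEAN = 55.0  # placeholder common anchor

def adjust_by_position(raw: float, position_mean: float) -> float:
    """Shift a raw rating so the player's position group averages TARGET_MEAN."""
    return raw + (TARGET_MEAN - position_mean)

print(adjust_by_position(71.2, 53.0))  # forward from a lower-scoring group, nudged up
print(adjust_by_position(60.0, 57.5))  # defenseman from a higher-scoring group, nudged down
```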
I've collected statistics for each player over the past seven seasons, and for the most part, players tend to score fairly consistently each year. As an example, here are Crosby's ratings:
2006 - 2007: 71.2
2007 - 2008: 76.8
2008 - 2009: 72.5
2009 - 2010: 77.9
2010 - 2011: 90.6
2011 - 2012: 79.9
2012 - 2013: 80.5
Because Crosby has been fairly consistent over these years, his expected, or "pro forma," rating is a time-weighted average of 77.1. There are some players who, because of injury or age, see sharp changes in their score over time, and for them I've instead had to use a different formula based on most recent performance to get their ultimate rating. An example would be Dany Heatley:
2006 - 2007: 73.4
2007 - 2008: 71.4
2008 - 2009: 65.2
2009 - 2010: 62.2
2010 - 2011: 53.7
2011 - 2012: 56.0
2012 - 2013: 55.3
As you can see, there's a huge drop-off starting in 2010-11, and as a result his pro forma rating is now 55.0. Generally, players that score above 60 are "top" players, players that score between 50 and 60 are "average," or "solid" as you get closer to 60, and players below 50 are "poor." But those are just rough guidelines.
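For those curious about the mechanics, here's a rough Python sketch of a pro forma rating: a recency-weighted average, falling back to recent seasons only when a player's level has clearly shifted. The linear weights and the shift test are placeholders, so the outputs only approximate my actual numbers:

```python
# Sketch of a "pro forma" rating: a recency-weighted average of seasonal
# ratings, using only the recent seasons when a player's level has shifted
# sharply (injury, age). Weights and the shift test are placeholders.

def pro_forma(ratings: list[float], shift_threshold: float = 10.0) -> float:
    """Recency-weighted average; trust only recent seasons after a sharp shift."""
    recent, earlier = ratings[-3:], ratings[:-3]
    if earlier and abs(sum(earlier) / len(earlier) - sum(recent) / len(recent)) > shift_threshold:
        ratings = recent  # stand-in for the "different formula" for shifted players
    weights = range(1, len(ratings) + 1)  # placeholder linear recency weights
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

crosby = [71.2, 76.8, 72.5, 77.9, 90.6, 79.9, 80.5]
heatley = [73.4, 71.4, 65.2, 62.2, 53.7, 56.0, 55.3]
print(round(pro_forma(crosby), 1))   # ~80.3 with these weights; my weights give 77.1
print(round(pro_forma(heatley), 1))  # ~55.3 here; I quote 55.0 above
```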
For those interested, here are my current best active forwards/defensemen/goalies since the 2006-2007 season based on this method. Note that the score shown does not include results from this season.
So based on this type of analysis, here are the top players and their respective ratings midway through this season versus their expected rating coming into the season. Players must have played an adequate number of minutes to be considered.
One thing to note is that prospects are an interesting case, because rookies are initially rated at 50. They are then "bumped" up to a rating based on their performance in junior leagues (via another set of formulas based on regression analysis of current players). No prospect can score above 60 coming into the league. This is obviously not always going to be accurate (Hertl's current rating is 71), but from the data I've seen, the vast majority of prospects do enter the league with a rating between 50 and 60.
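A bare-bones version of that prospect projection might look like the following; the regression coefficients and the single junior-scoring input are placeholders, but the 50/60 floor and cap match what I described above:

```python
# Sketch of the prospect projection: map junior production to an entry
# rating with a simple linear fit, then clamp to the 50-60 entry band.
# The coefficients and the single input feature are placeholders.

def project_prospect(junior_points_per_game: float) -> float:
    """Project an entry rating from junior scoring, clamped to [50, 60]."""
    projected = 42.0 + 11.0 * junior_points_per_game  # placeholder fit
    return max(50.0, min(60.0, projected))

print(project_prospect(0.6))  # modest junior scorer -> near the 50 floor
print(project_prospect(1.8))  # elite junior scorer -> capped at 60
```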
Now, what I've also been able to do is take these ratings and consolidate them with each team's roster and each player's total ice time per game to come up with a value for the entire team. Using those values I can then rank the teams against one another, as shown on the left side of the picture below. While not perfect, the current ranking very closely resembles the actual standings.
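The natural way to roll player ratings up into a team value is an ice-time-weighted average; here's a toy example (the exact aggregation I use is a bit more involved):

```python
# Sketch of a team value: an ice-time-weighted average of the roster's
# player ratings. The exact aggregation used is more involved than this.

def team_value(roster: list[tuple[float, float]]) -> float:
    """roster is a list of (player_rating, avg_ice_time_minutes) pairs."""
    total_toi = sum(toi for _, toi in roster)
    return sum(rating * toi for rating, toi in roster) / total_toi

# Toy three-player roster: higher-minute players count for more.
print(round(team_value([(77.1, 20.0), (55.0, 15.0), (50.0, 10.0)]), 1))  # -> 63.7
```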
The right side of the rankings predicts how the season will end for each team based on how the team has performed so far, each player's expected performance, and other factors such as average ice time, injuries, transactions, etc.
Using the same methodology, if I were to go back and predict the standings for this year using each player's "expected rating," and assuming that each team plays each of its players the same number of games/minutes, the standings would look like the table below. From there you can see that teams like Edmonton, Buffalo and New York are severely underperforming, while teams like Colorado, Tampa and Anaheim are overperforming.
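Conceptually, the over/underperformance call is just the gap between a team's current value and its expected value; here's the comparison with made-up numbers:

```python
# Sketch of the over/underperformance check: compare each team's current
# value (actual ratings and minutes) with its expected value (pro forma
# ratings, even minutes). All numbers below are made up for illustration.

expected = {"EDM": 55.8, "COL": 53.2}  # made-up expected team values
current = {"EDM": 52.1, "COL": 56.4}   # made-up midseason team values

for team in expected:
    delta = current[team] - expected[team]
    label = "overperforming" if delta > 0 else "underperforming"
    print(f"{team}: {delta:+.1f} ({label})")
```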
Again, this is just an analysis based on data I find important and is by no means meant to be more than a supplement to other advanced metrics out there. Let me know if you have any questions or want to see other players' current/historical ratings, etc.