My 2014-2015 NHL Standings based on Advanced Stats

supsens

Registered User
Oct 6, 2013
6,577
2,000
Do you treat all ice time the same? Like, PK and PP the same as even strength?
I don't see how Chris Phillips playing 14 minutes a night at even strength is such a big deal that it's the biggest hit in the NHL.
 

Grandpabuzz

Registered User
Oct 13, 2003
910
0
Dallas, Texas
Updated standings below for 10.6.14 (includes Islander trades and line-up constructs based on what's provided in the main forum).

[Image: 10614.png - updated standings as of 10.6.14]


The impact of adding Boychuk and Leddy was a positive, but the even bigger influence was not having to play Donovan, Strait or Carkner anymore. The replacement value was huge for a position that carries a more significant per-player factor than a standard forward spot.

Also found a mistake with Florida, where I allocated too many minutes to the defense (and zero to Gudbranson) in the earlier version. That explains the drop here.
 

Grandpabuzz

Registered User
Oct 13, 2003
910
0
Dallas, Texas
Do you treat all ice time the same? Like, PK and PP the same as even strength?
I don't see how Chris Phillips playing 14 minutes a night at even strength is such a big deal that it's the biggest hit in the NHL.

I don't include any impact of PP or PK. The influence from those factors is too volatile. Yes, some teams generally perform very well on the PP or PK versus others, but typically those differences are captured in the even-strength performance as well.

The ice time used for the final calculation is "grossed-up" to a number that assumes no penalties are called in the game.

Phillips plays in a role where he accounts for nearly 5% of the team's value. A 15-point drop in expectations from a player with that much influence has a huge impact on the team.
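To make that concrete, here is a minimal sketch of the gross-up and the Phillips arithmetic, assuming the gross-up simply rescales even-strength minutes to a penalty-free 60-minute game (the exact formula isn't given above, and the function names and the 52-minute figure are hypothetical):

```python
# Minimal sketch, assuming the gross-up rescales even-strength TOI to a
# hypothetical penalty-free 60-minute game. Names and inputs are illustrative.

def grossed_up_toi(es_toi: float, team_es_minutes: float) -> float:
    """Scale a player's even-strength minutes as if the whole game were 5-on-5."""
    return es_toi * (60.0 / team_es_minutes)

# e.g. 14 ES minutes in a game where the team skated 52 ES minutes:
print(f"{grossed_up_toi(14.0, 52.0):.1f} min")  # ~16.2 grossed-up minutes

# The Phillips point in rough numbers: a player carrying ~5% of team value
# who drops 15 rating points moves the team's raw value by 0.05 * 15 = 0.75.
print(f"team hit: {0.05 * 15:.2f} rating points")
```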
 

Rogie

ALIVE
May 17, 2013
1,742
235
Kyoungsan
EDIT: ***NOTE: Post 28 has updated standings as of October 6, 2014***

To follow up on my ongoing analysis of advanced stats in the NHL, below I attempt to predict the standings for next season. The way I do this is to take my predicted value for each player expected to play in the NHL during the season and estimate approximately how much ice time he will get over the course of 82 games. The predicted ice time is a bit arbitrary, but it is based on historical figures and current roster constructs. Ultimately, a shift in per-game ice time of a minute (or even two) does not really affect the team's total performance.

The total ice time for each player is then normalized for each team, so that each team’s forwards play a total of 180 minutes per game, defenders 120 minutes and goalies 60 minutes per game.
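Here is a minimal sketch of that normalization, assuming a simple proportional rescale (the post doesn't spell out the method, and the names are illustrative):

```python
# Minimal sketch: proportionally rescale projected minutes so each position
# group sums to its fixed per-game total (forwards 180, defense 120, goalies 60).

GROUP_MINUTES = {"F": 180.0, "D": 120.0, "G": 60.0}

def normalize_toi(projected: dict[str, float], position: str) -> dict[str, float]:
    """Return projected minutes scaled so the group total matches its target."""
    scale = GROUP_MINUTES[position] / sum(projected.values())
    return {player: minutes * scale for player, minutes in projected.items()}

# e.g. six defensemen whose raw projections sum to 126 minutes get trimmed
# proportionally back to 120 per game.
defense = {"D1": 25, "D2": 23, "D3": 21, "D4": 21, "D5": 19, "D6": 17}
print(normalize_toi(defense, "D"))
```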

I then assign a value to each position based on the offensive and defensive zones. When defending, you have the goaltender, three forwards and two defenders. As a result, the goalie carries a 16.7% value, the defenders a 33.3% value and the forwards a 50% value.

In the offensive zone, you have three forwards and two defenders. The forwards would have a 60% value and the defenders a 40% value. When you combine the zones, an individual forward on the team has a 4.6% impact on the game, an individual defender a 6.1% impact and the goaltender an 8.3% impact. These percentages assume normal ice-time splits and obviously change depending on that forward's or defender's ice time relative to his position peers.

As an example, if Chara on the Bruins plays 30 minutes per game and the rest of the defenders play 18 minutes per game, Chara's positional value would be 9.2% whereas the rest of the defenders on the team would have a 5.5% positional value. For the upcoming season, the players I currently have at the maximum positional values, given their expected ice time, are:
Forward: Steven Stamkos (6.6% impact)
Defense: Ryan Suter (6.9% impact)
Goalie: Sergei Bobrovsky (7.0% impact)
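To make the zone-weight math above concrete, here is a minimal sketch that reproduces the quoted figures (the averaging of the two zones is my assumption; the group weights come straight from the description):

```python
# Minimal sketch of the positional-impact math described above. Assumes the
# two zone weights are simply averaged; variable names are illustrative.

ZONE_WEIGHTS = {
    # (defensive-zone group share, offensive-zone group share)
    "F": (3 / 6, 3 / 5),  # three forwards among six defensive-zone participants,
                          # three among five offensive-zone skaters
    "D": (2 / 6, 2 / 5),
    "G": (1 / 6, 0.0),    # goalie only factors into the defensive zone
}

GROUP_MINUTES = {"F": 180.0, "D": 120.0, "G": 60.0}  # per-game group totals

def player_impact(position: str, minutes_per_game: float) -> float:
    """Average zone weight for the group, scaled by the player's share of
    his group's minutes."""
    d_w, o_w = ZONE_WEIGHTS[position]
    return ((d_w + o_w) / 2.0) * (minutes_per_game / GROUP_MINUTES[position])

print(f"{player_impact('D', 30):.3f}")  # ~0.092, the Chara example
print(f"{player_impact('D', 18):.3f}")  # ~0.055, the other Bruins defenders
print(f"{player_impact('F', 15):.3f}")  # ~0.046, a "normal basis" forward
print(f"{player_impact('G', 60):.3f}")  # ~0.083, a full-time goalie
```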

Please note that these impact figures are used as a run-rate for the entire season. So if a player (like Mike Fisher, for example) is expected to be injured for part of the season, his actual per-game impact will be much higher than the season impact used to calculate the standings below.

Once I have the individual impact for each player, I can then multiply that by their projected score, which I discuss in great detail here:

http://www.proformahockey.com/#!about/c20r9
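A minimal sketch of that step, with placeholder scores rather than the actual ratings from the link:

```python
# Minimal sketch: raw team value as the impact-weighted sum of projected
# scores. The (impact, score) pairs below are placeholders, not real ratings.

def team_raw_value(players: list[tuple[float, float]]) -> float:
    """Sum of impact * projected_score over the roster."""
    return sum(impact * score for impact, score in players)

players = [(0.066, 85.0), (0.046, 70.0), (0.061, 75.0), (0.083, 80.0)]
print(f"{team_raw_value(players):.2f}")  # weighted sum of the four placeholders
```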

The result is a raw value for that team. Before ranking the teams on that raw value, I make two other adjustments.
1. I adjust for schedule differences for the season. This impact is minimal and only produces a significant change if one team is leaps and bounds better (or worse) than the rest of its division

2. I adjust for "new players" that come into the NHL. I have a three-way method of rating prospects.
a. First is by their potential. Their score here resembles what I think the player’s best season will look like as an NHLer
b. Second is by the expected contribution on Day 1 in the NHL. I cap this number at between 50 and 60 percent of the potential score. Prospects, and especially new defenders, typically perform poorly in their first season (see Rielly or Jones last year), so this adjustment tempers the potential rating expected a few years later
c. Third is the expected contribution on Day 1 in the NHL adjusted for ice time expectations. In a vacuum, all new players would have their "B" score above.

However, some new players will be given significant ice time. Historically, new players who are given significant ice time tend to perform a bit better than new players who get typical rookie ice time. This adjustment accounts for that. A good example would be what happened in Tampa last year with Tyler Johnson and Ondrej Palat, as you'll see below.
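A rough sketch of how those three pieces could fit together, assuming "between 50 and 60" means 50-60% of the potential score and that the ice-time adjustment is a small multiplier (both are my readings; all parameters are illustrative):

```python
# Minimal sketch of the three-way prospect rating. The 50-60% cap and the
# 1.05 ice-time bump are assumptions, not the author's actual parameters.

def rookie_day1_score(potential: float, day1_fraction: float,
                      significant_ice_time: bool = False) -> float:
    """(a) potential -> (b) Day-1 score clamped to 50-60% of potential ->
    (c) optional bump for rookies handed significant ice time."""
    fraction = min(max(day1_fraction, 0.50), 0.60)  # step (b): clamp
    score = potential * fraction
    if significant_ice_time:                         # step (c): Johnson/Palat case
        score *= 1.05
    return score

print(rookie_day1_score(80.0, 0.70))        # capped at 60%: 48.0
print(rookie_day1_score(80.0, 0.55, True))  # 80 * 0.55 * 1.05 ≈ 46.2
```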

Last Season’s predictions

I went back and used my model to predict how last season would turn out based on the rosters the teams had at the end of September. I then looked at old stories about expected lineups/injuries to figure out the expected ice times. The result was as follows:

[Image: oldstandings.png - the model's predicted standings for last season]


As you can see, the biggest surprises in the West were Vancouver, Edmonton, Anaheim and Colorado. The biggest surprises in the East were Ottawa, Tampa Bay, Buffalo and Philadelphia. Below I'll attempt to reconcile those differences.

Vancouver: Biggest impact was the Sedins' and Burrows' play. The Sedins played ~20% worse than expected and Burrows ~30% worse. The rest of the team generally played as expected, with some offsets (Kassian a bit better versus Kesler a bit worse). If I had known the Sedins'/Burrows' final scores, the Canucks would have shown up 11th in the West.

Edmonton: Biggest impacts were Dubnyk, Nick Schultz and Belov. Schultz and Dubnyk both played roughly 25% worse than their expectations, and if I had known their scores, Edmonton would've placed 11th in the West. Smyth and Yakupov had poor seasons too, but the impact was not as severe.

Anaheim: Big positive surprise with Getzlaf. He performed 30% better than expectations (he had been trending downward until a revival in the 2013 season), and had his score been known prior, the Ducks would have vaulted to 4th in the West – just from Getzlaf alone! The rest of the team played close to expectations, with Lovejoy and Bonino having very strong, but not as impactful, seasons.

Colorado: This is the team that didn't have one player screw up the rankings. Erik Johnson probably comes closest, as his performance was 28% better than expectations (his best season by far), but incorporating that score only moves the team up to 10th place. The rest of the boost came from incremental improvements across the team. Parenteau, Duchene, O'Reilly, Landeskog, Stastny, even John Mitchell all had small improvements in rating. However, unlike other teams, there weren't any players to offset those incremental gains. In fact, only Ryan Wilson had a poor year compared to expectations, and he only played a handful of games.

Ottawa: Biggest problems were Phillips (36% underperformance), Michalek (18% underperformance) and, to an extent, Wiercioch (27% underperformance). Adjusting Phillips alone brings Ottawa down to 12th in the East. He is probably the most important player in any team's deviation from expectations last season. Ottawa also didn't have any players significantly outperform; Cowen and Turris were the only ones that came close.

Tampa Bay: Tampa Bay should look worse than it shows. The expectations shown in the list above assumed Stamkos would be healthy all season and St. Louis wouldn't be traded. Tampa Bay's listing is also inched higher because of the "new player adjustment" factor I described above regarding players like Johnson and Palat. However, the biggest reason for the mismatch is Bishop. He performed 20% better than expectations and is the reason why Tampa did so well (which is also illustrated by how poorly they did when he was injured). He was essentially the anti-Chris Phillips for the Lightning.

Buffalo: They were the worst team in the league, so at number 10 they still look high to most. The expectation includes a full season of Vanek, but even if you replace him with Moulson, the impact doesn't change dramatically. Buffalo was essentially the anti-Colorado. No players drastically underperformed; it was more that all the players performed incrementally worse than expectations and there were no "positive-playing" players to offset that. McBain was really the only player who played significantly better than expectations.

Philadelphia: The Flyers are by far the biggest mystery to me. There was no standout player to whom I can attribute their success. In fact, most of the players played a bit below expectations. Mason, Streit, Raffl and Brayden Schenn were the only players to perform more than 10% better than projections. And even if I use the exact rating for each player from last season, Philly still ends up 14th in the East, whereas when I make the same adjustment for every other team, the standings are reflected fairly accurately. It is true that at one point the Flyers had a terrible record, but how they turned it around without many significant changes to the roster (besides MacDonald) still baffles me. As you'll see in my projections for next season, I have Philly low again, because I cannot reconcile, using the numbers, why they would be in the playoffs.

This season's projections (Note: updated for current injuries like Staal and Stepan)

[Image: newstandings.png - projected standings for this season]


As you can see, the big surprises in this season's projections largely overlap with the teams I went over for last season. In the West, they are Vancouver and Colorado. And in the East, they are Washington, Carolina and Philadelphia.

Vancouver: If you read my description above on the rating system, you will see that I am a believer in an adjusted form of reversion to the norm. I've adjusted the Sedins' and Burrows' scores to reflect their poor performance last season, but until they post consistent figures at that rate, their ratings will still weight more heavily toward their previous seasons' success. As a result, Vancouver should still be a playoff team. Kesler has never scored well in my system, so replacing him with Bonino is not impactful. Replacing Garrison with Sbisa did have some impact, as Garrison is expected to perform at a much higher level.

Colorado: As mentioned with Vancouver, this is another case of reversion to the mean. In general, the players' expectation ratings increased relative to last season, but not by enough to outweigh the ratings from the seasons before. Erik Johnson is the big player impacted. Also, although Iginla over Parenteau is a slight upgrade, Briere over Stastny is a big downgrade. Colorado also has the third-worst schedule impact (after Nashville and Winnipeg).

Washington: Despite the criticism regarding how much money was paid for Niskanen and Orpik, both add significantly to Washington's defense. The numbers still show that Holtby is one of the most underrated goalies in the league, and while the offense is weak after Ovechkin, the defense, especially after the addition of Trotz (qualitatively speaking), will make up for that fact. The team reminds me of Nashville in 2008.

Carolina: A lot of people seem to think that Carolina will be one of the worst teams this upcoming season. They didn't make a lot of changes during the offseason, and their performance last year was below average. Like Holtby's, Khudobin's numbers point to him as one of the most underrated goaltenders. If Cam Ward gets most of the starts, I think Carolina won't perform as well. The defense is fairly average – there isn't a star, but there isn't a guy who will drag the pack down either. The offense is the most interesting story. The numbers show that Carolina's top five (the Staals, Tlusty, Semin and Skinner) are actually one of the most potent groups of forwards in the NHL. The problem lies with the depth after that. Carolina has enough to be in the playoffs compared to some of the other teams in the East, but would probably be the first team eliminated.

Philadelphia: As in last year's projections, Philadelphia shows up near the bottom of the conference. They didn't make a ton of changes in the offseason (Umberger for Hartnell, added Del Zotto), so maybe this year we will see the real Flyers according to the numbers? Steve Mason's play will be the biggest factor.

Um, great work - really interesting!!!

I read a report, but can't remember where, that said forwards have more value than defensemen in hockey - and I think the report was based on statistical data.

You've likely read that study/report. Any thought about changing the weighting and giving more weight to forwards than defensemen?

I can't remember how the study valued goaltenders - I'd imagine they probably get more weight - but maybe that is counterintuitive.
 

Grandpabuzz

Registered User
Oct 13, 2003
910
0
Dallas, Texas
Um, great work - really interesting!!!

I read a report, but can't remember where, that said forwards have more value than defensemen in hockey - and I think the report was based on statistical data.

You've likely read that study/report. Any thought about changing the weighting and giving more weight to forwards than defensemen?

I can't remember how the study valued goaltenders - I'd imagine they probably get more weight - but maybe that is counterintuitive.

Don't remember that report, but forwards as a group definitely have the most value. However, since there are twice as many forwards in a game as defensemen, an individual defenseman holds a bit more value than a comparable individual forward.
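To put rough numbers on that, borrowing the zone weights from the method above: forwards as a group carry about (50% + 60%) / 2 = 55% of the game split across 12 skaters, or roughly 4.6% each, while defensemen carry about (33% + 40%) / 2 ≈ 37% split across 6 skaters, or roughly 6.1% each.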
 

Rogie

ALIVE
May 17, 2013
1,742
235
Kyoungsan
Don't remember that report, but forwards as a group definitely have the most value. However, since there are twice as many forwards in a game as defensemen, an individual defenseman holds a bit more value than a comparable individual forward.

Good point. I never considered that.
 

Rogie

ALIVE
May 17, 2013
1,742
235
Kyoungsan
Don't remember that report, but forwards as a group definitely have the most value. However, since there are twice as many forwards in a game as defensemen, an individual defenseman holds a bit more value than a comparable individual forward.

Sorry, I"m still not clear on this.

You mention there are double the number of forwards in a game. I follow that.

My thinking is that, at any one time (ES for example), there are 3 forwards and 2 defensemen on the ice, and each of the forwards is contributing more to the outcome of the game than each of the defensemen, hence they have more weight. Of course, defensemen can log more ice time, and you already account for time on ice in your formulas. I'm thinking along the lines of "strokes gained per shot" in golf (if you happen to have read the book Every Shot Counts). Sometimes chips gain more and sometimes putts gain more (in terms of contributing to the score), but it was found that shots from 100 to 250 yards contribute more to score than putts. In this way, I'm thinking a forward has more impact on the outcome of the game and thus could be weighted more than a defenseman. I guess I don't understand how "double the number of forwards in the game" affects the weighting. Sorry, I'm not following.
 
