2019/2020 Advanced Stats / Analytics Tracker

SnuggaRUDE

Registered User
Apr 5, 2013
8,947
6,480
After reading recent posts.. I'm interested now with the above line of thinking.. Sucks (to be interested).. Anyway.. I bet each team has secret, complicated metrics that pertain to each team.. Each line they face.. All the variables, like switching one player in and the metric difference based on TOI against certain lines.. Any idea if this is true and the public only gets certain results of older algorithms?

I'd like to believe this is true. I think the data exists today. We know opposition by minutes for a player, right? So we should be able to make an xGA blend by multiplying Opp_xGF/60 by the minutes matched. Using an xGoalDiff blend would be even better.

The data will be noisy, but over a season you should get an idea of who's being leveraged correctly. Maybe plotting these by year will give us some interesting insights into how players age, or how they are deployed as they age.
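Roughly what I have in mind, as a back-of-the-envelope sketch (the column names and numbers below are made up; you'd pull the real head-to-head minutes from somewhere like Natural Stat Trick):

# Matchup-weighted xGA blend: weight each opponent line's xGF/60 by how many
# minutes the player actually faced them. Purely illustrative data.
import pandas as pd

matchups = pd.DataFrame({
    "player":         ["Eichel", "Eichel", "Eichel"],
    "opp_xgf_per_60": [3.1, 2.4, 1.9],    # opponent line's season xGF/60
    "minutes":        [42.0, 55.0, 30.0], # head-to-head 5v5 minutes
})

# expected goals against charged to the player, based purely on who he faced
matchups["expected_xga"] = matchups["opp_xgf_per_60"] * matchups["minutes"] / 60.0

blend = (matchups.groupby("player")["expected_xga"].sum()
         / matchups.groupby("player")["minutes"].sum() * 60.0)
print(blend)  # opposition-weighted xGA/60 per player

Swap the opponents' xG differential per 60 in for xGF/60 and the same few lines give you the xGoalDiff version.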

If you secretly value a player and no one else understands the nuance.. You win.. and it's cheap.. How is this not picked up by the eye test.. Like hitting, forecheck hesitation factor, etc., against/for..

The eye test is flawed for two reasons:

1. Observational biases - too many events are happening, and subconsciously we filter out or highlight things which may not be relevant. And even the things we don't filter out, we may still be assigning incorrect weights.

2. The eye test is based upon events triggered in no small part by skill prominence. It can be hard to tell the difference between two super lines cancelling each other out and two mediocre lines doing the same. Against similarly skilled opposition you still need to use the same set of safe tactics; you don't mega-dangle Barkov.
 

joshjull

Registered User
Aug 2, 2005
78,463
39,916
Hamburg,NY
Almost, I'm wondering if there is a stat for xGF of a line modified by the strength of their opposition. Such a stat would give a line much credit for scoring against Barkov, or blanking McDavid.

I better understand what you're looking for now, but I don't understand why.

If you want to know how a line did against another, look at the numbers for each line for the season and compare them to the numbers for the game. Or look at the game breakdowns themselves on a place like Natural Stat Trick.
 

joshjull

Registered User
Aug 2, 2005
78,463
39,916
Hamburg,NY
I'd like to believe this is true. I think the data exists today. We know opposition by minutes for a player, right? So we should be able to make an xGA blend by multiplying Opp_xGF/60 by the minutes matched. Using an xGoalDiff blend would be even better.

The data will be noisy, but over a season you should get an idea of who's being leveraged correctly. Maybe plotting these by year will give us some interesting insights into how players age, or how they are deployed as they age.



The eye test is flawed for two reasons:

1. Observational biases - too many events are happening, and subconsciously we filter out or highlight things which may not be relevant. And even the things we don't filter out, we may still be assigning incorrect weights.

2. The eye test is based upon events triggered in no small part by skill prominence. It can be hard to tell the difference between two super lines cancelling each other out and two mediocre lines doing the same. Against similarly skilled opposition you still need to use the same set of safe tactics; you don't mega-dangle Barkov.

Teams can already figure that out with existing data. Leveraging lines is about the big picture as well. It's not just about the performance of any one line or d-pair. As an example, if a team's lines 1/2 can break even/hold the line in their matchups while lines 3/4 are clearly winning their deployments, that's a recipe for success even though the top two lines aren't "winning" their matchups. Another example: how certain lines do with certain d-pairs vs. others in certain deployments. There are a few different permutations of this.

I think you're also too focused on individual line matchups by trying to give extra value to certain ones. Using Jack vs. Barkov, as you are, to make my point: the only time how Jack's line did vs. Barkov is going to matter is when we play Florida. It's not going to matter for a complete evaluation of the season. It's a speck of data compared to a season's worth.
 

itwasaforwardpass

I'll be the hyena
Mar 4, 2017
5,317
5,121
Almost, I'm wondering if there is a stat for xGF of a line modified by the strength of their opposition. Such a stat would give a line much credit for scoring against Barkov, or blanking McDavid.

I believe that is what jc17 replied with. For whatever reason, quality of competition numbers vary very little over a large sample size. So does QoC not really matter, or do we need better ways to measure it? Maybe both? It seems not to have much of an effect, barring further evidence.

 

K8fool

Registered User
Sep 30, 2018
3,085
893
stomach of giant parasitic worm
I'd like to believe this is true. I think the data exists today. We know opposition by minutes for a player, right? So we should be able to make an xGA blend by multiplying Opp_xGF/60 by the minutes matched. Using an xGoalDiff blend would be even better.

The data will be noisy, but over a season you should get an idea of who's being leveraged correctly. Maybe plotting these by year will give us some interesting insights into how players age, or how they are deployed as they age.

The eye test is flawed for two reasons:

1. Observational biases - too many events are happening, and subconsciously we filter out or highlight things which may not be relevant. And even the things we don't filter out, we may still be assigning incorrect weights.

2. The eye test is based upon events triggered in no small part by skill prominence. It can be hard to tell the difference between two super lines cancelling each other out and two mediocre lines doing the same. Against similarly skilled opposition you still need to use the same set of safe tactics; you don't mega-dangle Barkov.

I am a believer that numbers may give us permission to believe things we deep down already know.. They are also a great tool to convince others.. Hopefully, like much of repeatable science, they reveal hidden aspects we overlook due to your points one and two. I tend to believe the "Dune" concept: old hockey guys trained themselves through film to notice nuance, and the more we use algorithms the more we lose the need for the savants, and there probably aren't 32 of them around anyway... Great points.. Would like to think Botts is the guy with his own algorithms, and that makes me look for things that back it up.. We'll see..
 

K8fool

Registered User
Sep 30, 2018
3,085
893
stomach of giant parasitic worm

Buffaloed

webmaster
Feb 27, 2002
43,324
23,584
Niagara Falls
Micah McCurdy has been working on that. I don't know much about it though, and I don't have an Athletic subscription to read this.

Ranking the impact of every NHL coach on the team's success

I read it when it first came out. It screams "trust your eyeballs".

This summer, Micah McCurdy developed a model to try to isolate how a coach impacts on-ice performance both offensively and defensively, and he applied this model to every NHL coach who has been in the league going back to the 2007-08 season.
So, using this data, we added each coach’s offensive and defensive impact together to quantify performance last season and see if we could have some fun with the numbers.

[Image: Overall-Rankings.png]


Notes: McCurdy’s model uses positive values to indicate offensive performance and negative values to indicate defensive performance. That means, in viewing his charts, positive values on offensive impact are “good” while negative numbers are “bad.” The reverse is true on defensive charts: positive numbers indicate more offense by the opponent is happening which is “bad.” Negative numbers indicate the absence of opponent’s offense which is “good.”
For the purposes of our exercise, we converted negative to positive for the defensive numbers to allow for a true additive process.
All data from hockeyviz.com. You can view all coaching impact charts here.

The Housley ranking is an embarrassment.
 

Fezzy126

Rebuilding...
May 10, 2017
8,569
11,286
I believe that is what jc17 replied with. For whatever reason, quality of competition numbers vary very little over a large sample size. So does QoC not really matter, or do we need better ways to measure it? Maybe both? It seems not to have much of an effect, barring further evidence.



I detest the current QoC metrics with the power of a thousand suns; however, I personally really enjoy the work done by PuckIQ.

It's a very long and interesting read about their methodology if you're interested: Woodmoney: A new way to figure out quality of competition in order to analyze NHL data.

Other than the data updates, I haven't seen many changes to their model in a few years, so I don't know if they're still actively working on it, but I found it quite useful nonetheless.
 

itwasaforwardpass

I'll be the hyena
Mar 4, 2017
5,317
5,121
I read it when it first came out. It screams "trust your eyeballs".

This summer, Micah McCurdy developed a model to try to isolate how a coach impacts on-ice performance both offensively and defensively, and he applied this model to every NHL coach who has been in the league going back to the 2007-08 season.
So, using this data, we added each coach’s offensive and defensive impact together to quantify performance last season and see if we could have some fun with the numbers.

[Image: Overall-Rankings.png]


Notes: McCurdy’s model uses positive values to indicate offensive performance and negative values to indicate defensive performance. That means, in viewing his charts, positive values on offensive impact are “good” while negative numbers are “bad.” The reverse is true on defensive charts: positive numbers indicate more offense by the opponent is happening which is “bad.” Negative numbers indicate the absence of opponent’s offense which is “good.”
For the purposes of our exercise, we converted negative to positive for the defensive numbers to allow for a true additive process.
All data from hockeyviz.com. You can view all coaching impact charts here.

The Housley ranking is an embarrassment.

I remember that graph now. The Housley assessment does ruin its credibility.
 

SnuggaRUDE

Registered User
Apr 5, 2013
8,947
6,480
Teams can already figure that out with existing data. Leveraging lines is about the big picture as well. It's not just about the performance of any one line or d-pair. As an example, if a team's lines 1/2 can break even/hold the line in their matchups while lines 3/4 are clearly winning their deployments, that's a recipe for success even though the top two lines aren't "winning" their matchups. Another example: how certain lines do with certain d-pairs vs. others in certain deployments. There are a few different permutations of this.

I think you're also too focused on individual line matchups by trying to give extra value to certain ones. Using Jack vs. Barkov, as you are, to make my point: the only time how Jack's line did vs. Barkov is going to matter is when we play Florida. It's not going to matter for a complete evaluation of the season. It's a speck of data compared to a season's worth.

Is there a stat which captures this?
 

SnuggaRUDE

Registered User
Apr 5, 2013
8,947
6,480
I believe that is what jc17 replied with. For whatever reason, quality of competition numbers vary very little over a large sample size. So does QoC not really matter, or do we need better ways to measure it? Maybe both? It seems not to have much of an effect, barring further evidence.



If QoC doesn't matter, then either everything comes down to the goalie or luck, or we're using the wrong data.
 

joshjull

Registered User
Aug 2, 2005
78,463
39,916
Hamburg,NY
Is there a stat which captures this?
If you're talking about Jack's line vs. Barkov's, Natural Stat Trick has a line tool. I punched in both lines. Roughly 8 minutes of VO/Jack/Sam went head to head with Huberdeau/Barkov/Hoffman or Dadonov.

Here are the 6 minutes they went head to head with Huberdeau/Barkov/Hoffman: Line Stats - Natural Stat Trick

Here are the 2 minutes when Dadonov was on the wing instead of Hoffman: Line Stats - Natural Stat Trick

You can trim it down to the 2 players most often together on those lines. Here are the 10 minutes Jack/Sam were out against Huberdeau/Barkov: Line Stats - Natural Stat Trick

I assume the team has its own tracking software to do this, tailored to the data they want to track.
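If you ever wanted to grind that head-to-head TOI out yourself instead of leaning on the site, here's a rough sketch from exported shift data (the CSV layout and column names are hypothetical, just to show the overlap idea):

# Head-to-head TOI: find the seconds when every player on each line is on the
# ice, then intersect the two sets. Assumes a shifts.csv export with columns
# game_id, player, start_seconds, end_seconds (regulation time only).
import pandas as pd

def seconds_on_ice(shifts, line):
    """Set of (game_id, second) marks when the whole `line` is on the ice."""
    out = set()
    for gid, game in shifts[shifts["player"].isin(line)].groupby("game_id"):
        for t in range(0, 3600):  # walk second by second -- slow but simple
            on_ice = game[(game["start_seconds"] <= t) & (game["end_seconds"] > t)]
            if set(line) <= set(on_ice["player"]):
                out.add((gid, t))
    return out

shifts = pd.read_csv("shifts.csv")  # hypothetical export
sabres_line = ["Olofsson", "Eichel", "Reinhart"]
panthers_line = ["Huberdeau", "Barkov", "Hoffman"]
overlap = seconds_on_ice(shifts, sabres_line) & seconds_on_ice(shifts, panthers_line)
print(len(overlap) / 60.0, "head-to-head minutes")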
 

SnuggaRUDE

Registered User
Apr 5, 2013
8,947
6,480
If you're talking about Jack's line vs. Barkov's, Natural Stat Trick has a line tool. I punched in both lines. Roughly 8 minutes of VO/Jack/Sam went head to head with Huberdeau/Barkov/Hoffman or Dadonov.

Here are the 6 minutes they went head to head with Huberdeau/Barkov/Hoffman: Line Stats - Natural Stat Trick

Here are the 2 minutes when Dadonov was on the wing instead of Hoffman: Line Stats - Natural Stat Trick

You can trim it down to the 2 players most often together on those lines. Here are the 10 minutes Jack/Sam were out against Huberdeau/Barkov: Line Stats - Natural Stat Trick

I assume the team has its own tracking software to do this, tailored to the data they want to track.

That's interesting, the line-by-line breakdown. The elusive target is all of the line-by-line matchups in aggregate. From what is available we're able to tell if a player is succeeding or failing, but not how much of that is due to matchups versus skill.
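One crude way to pull those apart, as a sketch (every number and column name below is invented): compute the share you'd expect purely from who the line faced, then treat the gap between that and the actual number as the part the line itself earned.

# Matchup vs. "skill": expected xGF% if every opponent simply got their usual
# share, weighted by head-to-head minutes; the leftover is the residual.
import pandas as pd

h2h = pd.DataFrame({
    "opp_line":    ["Barkov line", "Bergeron line", "bottom six"],
    "opp_xgf_pct": [59.0, 57.0, 44.0],  # each opponent line's season xGF%
    "minutes":     [8.0, 10.0, 25.0],   # head-to-head 5v5 minutes
})
actual_xgf_pct = 52.0  # the line's raw season xGF% (made up)

expected = ((100.0 - h2h["opp_xgf_pct"]) * h2h["minutes"]).sum() / h2h["minutes"].sum()
residual = actual_xgf_pct - expected
print(f"expected from matchups: {expected:.1f}%, actual minus expected: {residual:+.1f}%")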
 

jc17

Registered User
Jun 14, 2013
11,015
7,733
Here's the biggest problem, as I see it.

QoC is an endless loop because there's no reference point. If Eichel plays against Barkov (59 xGF%), we can't say we've quantified Barkov, because that 59 could be coming against other good or bad players. So we have to look at who Barkov played, then we have to look at who those players matched up against, and so on.

That's also the reason it comes so close to 50% so much of the time.
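A toy illustration of the loop (everything below is invented; a five-line league where everyone plays everyone):

# Recursive QoC: rate a line by averaging its opponents' ratings, where each
# extra level of depth asks "and who did *they* play?"
raw_xgf = {"Barkov": 59, "Eichel": 53, "Crosby": 55, "Depth A": 44, "Depth B": 39}
schedule = {line: [o for o in raw_xgf if o != line] for line in raw_xgf}

def qoc(line, depth):
    """Average opponent strength, looking `depth` extra levels into their schedules."""
    if depth == 0:
        return sum(raw_xgf[o] for o in schedule[line]) / len(schedule[line])
    return sum(qoc(o, depth - 1) for o in schedule[line]) / len(schedule[line])

for d in range(4):
    print(d, round(qoc("Barkov", d), 2))  # 47.75, 50.56, 49.86, 50.04

Every extra level of "who did they play" just pulls the number back toward 50. There's no bottom to the recursion, so the signal washes out.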
 

joshjull

Registered User
Aug 2, 2005
78,463
39,916
Hamburg,NY
That's interesting, the line-by-line breakdown. The elusive target is all of the line-by-line matchups in aggregate. From what is available we're able to tell if a player is succeeding or failing, but not how much of that is due to matchups versus skill.

If you're asking how a line has played all season:

Here are the numbers for Olofsson/Jack/Sam in their 111:27 together so far vs. all teams: Line Stats - Natural Stat Trick
 

jc17

Registered User
Jun 14, 2013
11,015
7,733


Not terrible but I hope they pull these numbers up the next two games.

Side note: I hate to take a shot at people who do decent work with public stuff, but I think Tierney needs to cool it with the charts this early in the season. In my opinion he presents too much as fact and dismisses the small sample sizes. I realize visualization is his thing, but I think he lacks in the analysis department at times.
 

Buffaloed

webmaster
Feb 27, 2002
43,324
23,584
Niagara Falls


Not terrible but I hope they pull these numbers up the next two games.

Side note: I hate to take a shot at people who do decent work with public stuff, but I think Tierney needs to cool it with the charts this early in the season. In my opinion he presents too much as fact and dismisses the small sample sizes. I realize visualization is his thing, but I think he lacks in the analysis department at times.

Those that can, watch. Those that can't, make charts. :laugh:
 
  • Like
Reactions: Paxon and joshjull

Gabrielor

"Win with us or watch us win." - Rasmus Dahlin
Jun 28, 2011
13,023
13,353
Buffalo, NY


Not terrible but I hope they pull these numbers up the next two games.

Side note: I hate to take a shot at people who do decent work with public stuff, but I think Tierney needs to cool it with the charts this early in the season. In my opinion he presents too much as fact and dismisses the small sample sizes. I realize visualization is his thing, but I think he lacks in the analysis department at times.


Oh yeah. The lion's share of the analytics guys don't have enough context for their numbers.
 
  • Like
Reactions: joshjull
