Confirmed with Link: Canucks Sign D Luke Schenn to 2-Year, 1-Way Deal ($850K AAV)

MS

1%er
Mar 18, 2002
53,682
84,504
Vancouver, BC
Your statement in bold is correct. Theoretically, this is what is supposed to happen. Does it though? I don't know. Just this past week it's been your opinion on D-men in general, and Hughes specifically. Then it's 'Ceci is a good defender in absolute terms, regardless of the data.' Subjectively fine to say, but you're not getting any buy-in this way.

'I've watched him play' is just what almost everyone here can say. So why be surprised when that perception is questioned?

Saying Ceci is better than Schenn is not controversial. Saying he's a significantly improved player, rather than he was just used differently and is the same player he's always been, obviously is.

This absolutely bizarre thing has happened in the past few years where nobody seems to watch hockey games any more or make any effort to understand how players and teams work and how usage is affecting results. All that lazy lemmings want to do is post these absolutely shit player charts that even a half-second of critical analysis tells you are total rubbish.

Do I trust my eye test more than just using those shit charts in a vacuum? Abso-f***ing-lutely.

I'm not saying that Ceci is significantly improved. I'm saying that one rudimentary look at how his career unfolded tells you that he was forced into a ton of tough minutes on a terrible team at a young age and not surprisingly struggled with that. And not surprisingly his idiotic WAR charts have a whole bunch of pink numbers on them as a result so hey! Let's ignore context, this player sucks! The graph with the bad math said so!

And then when he goes back to playing 19 minutes/game on decent teams, suddenly he doesn't suck anymore. Magic! If Edmonton plays him 23 minutes/game as their #1 shutdown guy, he'll probably struggle again. If they continue his usage from Pittsburgh, they'll probably get solid mid-pairing results at a reasonable cap hit.

I'm not sure what your point is on Hughes? He was better than I expected as a rookie and then regressed to being on the development curve I originally envisioned as a sophomore.
 
Reactions: VanillaCoke

elitepete

Registered User
Jan 30, 2017
8,136
5,455
Vancouver
This absolutely bizarre thing has happened in the past few years where nobody seems to watch hockey games any more or make any effort to understand how players and teams work and how usage is affecting results. All that lazy lemmings want to do is post these absolutely shit player charts that even a half-second of critical analysis tells you are total rubbish.

Do I trust my eye test more than just using those shit charts in a vacuum? Abso-f***ing-lutely.

I'm not saying that Ceci is significantly improved. I'm saying that one rudimentary look at how his career unfolded tells you that he was forced into a ton of tough minutes on a terrible team at a young age and not surprisingly struggled with that. And not surprisingly his idiotic WAR charts have a whole bunch of pink numbers on them as a result so hey! Let's ignore context, this player sucks! The graph with the bad math said so!

And then when he goes back to playing 19 minutes/game on decent teams, suddenly he doesn't suck anymore. Magic! If Edmonton plays him 23 minutes/game as their #1 shutdown guy, he'll probably struggle again. If they continue his usage from Pittsburgh, they'll probably get solid mid-pairing results at a reasonable cap hit.

I'm not sure what your point is on Hughes? He was better than I expected as a rookie and then regressed to being on the development curve I originally envisioned as a sophomore.
This is exactly how I feel.

Analytics aren’t able to accurately adjust for the quality of opponents that players play against.

Just because an offensive dman is being sheltered on the bottom pairing and getting heavy o zone starts, doesn’t make him a “shot suppression specialist” or something.
 
Reactions: VanillaCoke and MS

Get North

Registered User
Aug 25, 2013
8,472
1,364
B.C.
Ceci or Tanev isn't the answer. The Rangers have Fox and Trouba on the right side. The Predators used to have Subban and Ellis. The Kings had Doughty and Voynov. Penguins had Letang and Schultz.

I don't think Tanev, Hamonic, Schenn, Poolman, or Myers are the answers for a SC-winning RD. But they're the best we can do so far until we clear more cap space or a prospect arrives. We need somebody who can make plays if Hughes isn't beside them, and honestly neither Tanev nor Hamonic is creative with the puck or dangerous with it. Parayko, Subban, Dumba are players that can take over without Quinn Hughes.
 
Reactions: KingofSurrey

KingofSurrey

Registered User
Jan 15, 2020
608
812
in da hood
Ceci or Tanev isn't the answer. The Rangers have Fox and Trouba on the right side. The Predators used to have Subban and Ellis. The Kings had Doughty and Voynov. Penguins had Letang and Schultz.

I don't think Tanev, Hamonic, Schenn, Poolman, or Myers are the answers for a SC-winning RD. But they're the best we can do so far until we clear more cap space or a prospect arrives. We need somebody who can make plays if Hughes isn't beside them, and honestly neither Tanev nor Hamonic is creative with the puck or dangerous with it. Parayko, Subban, Dumba are players that can take over without Quinn Hughes.


Hard to get a #1 D-man when you trade your first-round draft choice and wipe out your cap space with overpaid 4th liners... and overpaid 5-6 D-men.
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
This absolutely bizarre thing has happened in the past few years where nobody seems to watch hockey games any more or make any effort to understand how players and teams work and how usage is affecting results. All that lazy lemmings want to do is post these absolutely shit player charts that even a half-second of critical analysis tells you are total rubbish.

Do I trust my eye test more than just using those shit charts in a vacuum? Abso-f***ing-lutely.

I'm not saying that Ceci is significantly improved. I'm saying that one rudimentary look at how his career unfolded tells you that he was forced into a ton of tough minutes on a terrible team at a young age and not surprisingly struggled with that. And not surprisingly his idiotic WAR charts have a whole bunch of pink numbers on them as a result so hey! Let's ignore context, this player sucks! The graph with the bad math said so!

And then when he goes back to playing 19 minutes/game on decent teams, suddenly he doesn't suck anymore. Magic! If Edmonton plays him 23 minutes/game as their #1 shutdown guy, he'll probably struggle again. If they continue his usage from Pittsburgh, they'll probably get solid mid-pairing results at a reasonable cap hit.

I'm not sure what your point is on Hughes? He was better than I expected as a rookie and then regressed to being on the development curve I originally envisioned as a sophomore.



What are you even referencing with the first paragraph? Who are these people quoting charts and not watching games?

Nobody with a conflicting opinion cares if you trust your eyes more than those 'shit charts'. Do you understand? Nobody. Your argument is essentially "I watch the games therefore X"... Many posters have used exactly this justification to bolster poor opinions.

What I find interesting about your position is that it's at once archaic, and not. It's saying 'do away with the tools at hand, but here's my rationale that could be borne out in those tools'. Like, Ceci's usage data on OTT and then on TOR/PIT would show your argument in a subjective and objective sense, but you stop short at the former. It's weird.

But no, it's 'idiotic WAR charts' and the data lacks context... Even though the data with context exists for Ceci's performance in OTT and in PIT/TOR...
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
This is exactly how I feel.

Analytics aren’t able to accurately adjust for the quality of opponents that players play against.

Just because an offensive dman is being sheltered on the bottom pairing and getting heavy o zone starts, doesn’t make him a “shot suppression specialist” or something.


That's the interpretation of the analytics used, not the intention of the analytics as presented.

The data is there for quality of team and quality of opposition. And so, are we saying the analytics are useless and the QualComp/Team data should be thrown out? Or, should we be saying the inference made upon that data needs to be adjusted?

If we're saying throw them out entirely because they're useless... Yeah, that's not tenable.
 

Bojack Horvatman

IAMGROOT
Jun 15, 2016
4,166
7,378
This absolutely bizarre thing has happened in the past few years where nobody seems to watch hockey games any more or make any effort to understand how players and teams work and how usage is affecting results. All that lazy lemmings want to do is post these absolutely shit player charts that even a half-second of critical analysis tells you are total rubbish.

Do I trust my eye test more than just using those shit charts in a vacuum? Abso-f***ing-lutely.

I'm not saying that Ceci is significantly improved. I'm saying that one rudimentary look at how his career unfolded tells you that he was forced into a ton of tough minutes on a terrible team at a young age and not surprisingly struggled with that. And not surprisingly his idiotic WAR charts have a whole bunch of pink numbers on them as a result so hey! Let's ignore context, this player sucks! The graph with the bad math said so!

And then when he goes back to playing 19 minutes/game on decent teams, suddenly he doesn't suck anymore. Magic! If Edmonton plays him 23 minutes/game as their #1 shutdown guy, he'll probably struggle again. If they continue his usage from Pittsburgh, they'll probably get solid mid-pairing results at a reasonable cap hit.

I'm not sure what your point is on Hughes? He was better than I expected as a rookie and then regressed to being on the development curve I originally envisioned as a sophomore.

It's amazing how long it takes to get rid of a reputation, good or bad. Just look at Holtby last year and how many people were in favor of that contract, and how good the Demko/Holtby platoon would be. I haven't watched a lot of Ceci the last few years, but his stats aren't bad at all. I don't know if I would have given him 4 years, but he isn't a bad player at all.

As far as analytics, the number 1 rule for me is don't compare players playing different roles. I will use GA/60 as an example. In general, bottom-6 D are lower in this stat than top-4 D. Jordie Benn had 2.9 GA/60 in 2019-20, which is terrible for a bottom-pairing D but would have tied Alex Edler, and was better than Chris Tanev that year. But if you use that stat without context, he would have been one of the best defensive defensemen on the team. Using this stat alone to grade defenders would be garbage, because you would have a bunch of bottom-pairing D sprinkled in with top-4 D that get results.
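The role-context trap above is easy to show with a toy calculation (every number here is invented, nothing is from real game logs): GA/60 is just on-ice goals against per 60 minutes of ice time, and a naive league-wide sort of it will happily rank a sheltered bottom-pair D "ahead of" a top-4 D who plays much harder minutes.

```python
# Hypothetical defensemen -- all numbers made up for illustration.
players = [
    {"name": "Top4 A",   "role": "top-4",       "ga": 45, "toi_min": 1200},
    {"name": "Top4 B",   "role": "top-4",       "ga": 52, "toi_min": 1150},
    {"name": "Bottom C", "role": "bottom-pair", "ga": 30, "toi_min": 620},
    {"name": "Bottom D", "role": "bottom-pair", "ga": 22, "toi_min": 600},
]

# GA/60 = on-ice goals against per 60 minutes of ice time.
for p in players:
    p["ga60"] = round(p["ga"] / p["toi_min"] * 60, 2)

# Naive league-wide ranking: roles are mixed together, so the sheltered
# bottom-pair guy "beats" a top-4 defender playing tougher minutes.
naive = sorted(players, key=lambda p: p["ga60"])
print([(p["name"], p["ga60"]) for p in naive])

# Role-aware ranking: only compare players to their deployment peers.
for role in ("top-4", "bottom-pair"):
    group = sorted((p for p in players if p["role"] == role),
                   key=lambda p: p["ga60"])
    print(role, [(p["name"], p["ga60"]) for p in group])
```

In this toy sample the naive sort puts a bottom-pair D at the top of the list, which is exactly the Jordie Benn problem: the stat is fine, the comparison across roles is not.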
 

F A N

Registered User
Aug 12, 2005
18,721
5,957
I'm not saying that Ceci is significantly improved. I'm saying that one rudimentary look at how his career unfolded tells you that he was forced into a ton of tough minutes on a terrible team at a young age and not surprisingly struggled with that.

Doesn't this describe Hughes as well?
 

MS

1%er
Mar 18, 2002
53,682
84,504
Vancouver, BC
What are you even referencing with the first paragraph? Who are these people quoting charts and not watching games?

Seriously? These things are freaking everywhere. Have you seen the Dickinson thread?

We have people hyping defensive WAR graphs that show Joe Pavelski is the best defensive forward in the NHL. We have people calling Dickinson an elite shutdown C based on graphs that were generated mostly from usage on wing on a team that generated elite defensive results for practically everybody. We have people ooh-ing and aah-ing over a Canucks Army article hyping those stats when the author didn't even know what position Dickinson mostly played last year. It's just a total cesspool of bad and dubious information being presented as reliable evidence to make bad conclusions.

Pius Suter was apparently an excellent defensive forward we should totally have signed, because getting 60% zone starts mostly with Patrick Kane made his little graph look awesome. It goes on and on.

Nobody with a conflicting opinion cares if you trust your eyes more than those 'shit charts'. Do you understand? Nobody. Your argument is essentially "I watch the games therefore X"... Many posters have used exactly this justification to bolster poor opinions.

And I don't really care? I've been evaluating players here for a long time and I'm pretty comfortable that my takes are better than the JFresh no-context graphs. For some people they're probably worse.

Just because eye tests aren't perfect doesn't mean bad math is somehow useful.

What I find interesting about your position is that it's at once archaic, and not. It's saying 'do away with the tools at hand, but here's my rationale that could be borne out in those tools'. Like, Ceci's usage data on OTT and then on TOR/PIT would show your argument in a subjective and objective sense, but you stop short at the former. It's weird.

But no, it's 'idiotic WAR charts' and the data lacks context... Even though the data with context exists for Ceci's performance in OTT and in PIT/TOR...

I don't believe the 'data with context' is remotely accurate or says what it's claiming to say.

When I see numbers that make sense, great! But that hasn't remotely happened and I don't expect it to happen.

These things are nothing more than tools that are useless without watching the player play and having a deep understanding of how the player was used on the team in question, and what sort of results the team in general generates for its players. And even then they're still absolutely chock full of noise.

Basically any WAR stat is total shit. Garbage. Advanced stats and possession stats are a useful tool if taken with a truckload of context.
 
Reactions: elitepete

MS

1%er
Mar 18, 2002
53,682
84,504
Vancouver, BC
Doesn't this describe Hughes as well?

Hughes wasn't really playing tough minutes?

If he was out there in a matchup pair getting 30% o-zone starts, sure. But he was mostly getting creamy offensive minutes with a babysitter. Schmidt/Edler were the guys playing the hard minutes for the most part.

Hughes' goal differentials playing over/under 20 minutes in his career are insane. Put him in #5 minutes and he does great. Open up his matchups to mid-leverage 2nd pairing and he gets killed. And that's fine! There's nothing wrong with an elite PP specialist who plays #4-5 minutes. That player has value ... just not $8 million/year or something. And of course maybe he'll get better. My criticism of Hughes has been relative to claims that he's a '#1 defender', not that he's an absolutely horrible player.
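That over/under-20-minutes split is simple to reproduce for any player if you have a per-game log. A minimal sketch with invented numbers (not Hughes' actual game data) looks like this:

```python
# Hypothetical per-game log: (minutes played, on-ice goals for, on-ice goals against).
games = [
    (17.5, 2, 0), (18.2, 1, 0), (19.0, 2, 1), (16.8, 1, 0),  # lighter nights
    (21.4, 0, 2), (22.0, 1, 3), (23.1, 0, 1), (20.5, 1, 2),  # heavier nights
]

def goal_diff(rows):
    """Sum of on-ice goals for minus on-ice goals against."""
    return sum(gf - ga for _, gf, ga in rows)

# Split the sample at the 20-minute usage threshold.
under20 = [g for g in games if g[0] < 20]
over20  = [g for g in games if g[0] >= 20]

print("under 20 min:", goal_diff(under20))  # +5 in this toy sample
print("20+ min:", goal_diff(over20))        # -6 in this toy sample
```

With a real game log the same split would show whether a player's results actually crater above a given workload, which is the claim being made here.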
 

elitepete

Registered User
Jan 30, 2017
8,136
5,455
Vancouver
That's the interpretation of the analytics used, not the intention of the analytics as presented.

The data is there for quality of team and quality of opposition. And so, are we saying the analytics are useless and the QualComp/Team data should be thrown out? Or, should we be saying the inference made upon that data needs to be adjusted?

If we're saying throw them out entirely because they're useless... Yeah, that's not tenable.
From what I’ve seen, QoC is not very useful at all. Even advanced stats nerds say so.
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
From what I’ve seen, QoC is not very useful at all. Even advanced stats nerds say so.


Please explain. It's just shot differentials against opponents ranging from good to bad, for the key player you are tracking.

I've remarked here that QoC is being diminished by analysts like McCurdy. Instead, he favours QoT and TOI as better markers. I tend to agree. QoC is more erratic: coaches can't match players against a given opposition with the same efficiency that they can trot out their own lines. However, does this invalidate QoC altogether? To me, no.
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
Seriously? These things are freaking everywhere. Have you seen the Dickinson thread?

We have people hyping defensive WAR graphs that show Joe Pavelski is the best defensive forward in the NHL. We have people calling Dickinson an elite shutdown C based on graphs that were generated mostly from usage on wing on a team that generated elite defensive results for practically everybody. We have people ooh-ing and aah-ing over a Canucks Army article hyping those stats when the author didn't even know what position Dickinson mostly played last year. It's just a total cesspool of bad and dubious information being presented as reliable evidence to make bad conclusions.

Pius Suter was apparently an excellent defensive forward we should totally have signed, because getting 60% zone starts mostly with Patrick Kane made his little graph look awesome. It goes on and on.


And I don't really care? I've been evaluating players here for a long time and I'm pretty comfortable that my takes are better than the JFresh no-context graphs. For some people they're probably worse.

Just because eye tests aren't perfect doesn't mean bad math is somehow useful.


I don't believe the 'data with context' is remotely accurate or says what it's claiming to say.

When I see numbers that make sense, great! But that hasn't remotely happened and I don't expect it to happen.

These things are nothing more than tools that are useless without watching the player play and having a deep understanding of how the player was used on the team in question, and what sort of results the team in general generates for its players. And even then they're still absolutely chock full of noise.

Basically any WAR stat is total shit. Garbage. Advanced stats and possession stats are a useful tool if taken with a truckload of context.


I genuinely did not know about the Dickinson stuff. Haven't read it.

The crux of the argument is right there in bold: When *you* see numbers that make sense. This is subjective. It's really no different than saying eye test > data (old school hockey guy approach). The threshold for viable data is not you, it's what is generally accepted by the people within the space, and WAR is one such tool.

Now, is that saying WAR is definitive? No, of course not. It's one tool among many. For instance, if I used relative and absolute data for teams, usage, QoT, and WAR/GAR, that should paint a picture, but for you it wouldn't because of insufficient context... That doesn't fly.

Really, the people on the fence aren't going to be convinced by what you're doing. I've defended your track record in the past, but I think your method here is closed. It's this way and no other way.
 

MS

1%er
Mar 18, 2002
53,682
84,504
Vancouver, BC
I genuinely did not know about the Dickinson stuff. Haven't read it.

The crux of the argument is right there in bold: When *you* see numbers that make sense. This is subjective. It's really no different than saying eye test > data (old school hockey guy approach). The threshold for viable data is not you, it's what is generally accepted by the people within the space, and WAR is one such tool.

Now, is that saying WAR is definitive? No, of course not. It's one tool among many. For instance, if I used relative and absolute data for teams, usage, QoT, and WAR/GAR, that should paint a picture, but for you it wouldn't because of insufficient context... That doesn't fly.

Really, the people on the fence aren't going to be convinced by what you're doing. I've defended your track record in the past, but I think your method here is closed. It's this way and no other way.

Per the bolded, if I can look at a stat and see things that are very obviously very wrong, I'm not going to give that stat a lot of credence - i.e. if a WAR stat is saying that 19-20 Horvat was sub-replacement level, the way it's generating data is crap. It's like coming up with a stat that says water is more poisonous than arsenic because you've used drowning deaths in the calculation. As soon as you see that result ... it's rubbish. It isn't saying what it's supposed to say.

As I've said multiple times, I don't hate advanced stats. They're a useful tool, if you understand the context in which those stats were generated. What I do hate is this current phenomenon of taking those stats completely out of context and presenting them as fact in these godawful WAR charts.

And using a bunch of different stats that are all generated under the same context that isn't being recognized isn't painting any better a picture.

I don't think I'm 'closed' at all. My biggest thing is trying to understand context - on every level. Context for the eye test I'm seeing. Context for the traditional stats I'm seeing. Context for the advanced stats I'm seeing. I want to understand *why* the results I'm seeing are occurring and what is causing them, and want to understand the player and the team and the usage and what's happening to generate results. And yes, that absolutely is a better process than putting some advanced stats into a spreadsheet with no context and saying THIS GUY IS A BAD DEFENSIVE PLAYER COS THE PINK BOXES SAY SO.
 

F A N

Registered User
Aug 12, 2005
18,721
5,957
Hughes wasn't really playing tough minutes?

If he was out there in a matchup pair getting 30% o-zone starts, sure. But he was mostly getting creamy offensive minutes with a babysitter. Schmidt/Edler were the guys playing the hard minutes for the most part.

Hughes' goal differentials playing over/under 20 minutes in his career are insane. Put him in #5 minutes and he does great. Open up his matchups to mid-leverage 2nd pairing and he gets killed. And that's fine! There's nothing wrong with an elite PP specialist who plays #4-5 minutes. That player has value ... just not $8 million/year or something. And of course maybe he'll get better. My criticism of Hughes has been relative to claims that he's a '#1 defender', not that he's an absolutely horrible player.

Depends on how you define "tough minutes."

Last season, the forwards Hughes spent the most time facing are (in this order): Draisaitl, Kahun, Connor, Yamamoto, Gaudreau, RNH, Matthews, Suzuki, Marner. The difference between Draisaitl and Marner was about 8 minutes total.
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
Per the bolded, if I can look at a stat and see things that are very obviously very wrong, I'm not going to give that stat a lot of credence - i.e. if a WAR stat is saying that 19-20 Horvat was sub-replacement level, the way it's generating data is crap. It's like coming up with a stat that says water is more poisonous than arsenic because you've used drowning deaths in the calculation. As soon as you see that result ... it's rubbish. It isn't saying what it's supposed to say.

As I've said multiple times, I don't hate advanced stats. They're a useful tool, if you understand the context in which those stats were generated. What I do hate is this current phenomenon of taking those stats completely out of context and presenting them as fact in these godawful WAR charts.

And using a bunch of different stats that are all generated under the same context that isn't being recognized isn't painting any better a picture.

I don't think I'm 'closed' at all. My biggest thing is trying to understand context - on every level. Context for the eye test I'm seeing. Context for the traditional stats I'm seeing. Context for the advanced stats I'm seeing. I want to understand *why* the results I'm seeing are occurring and what is causing them, and want to understand the player and the team and the usage and what's happening to generate results. And yes, that absolutely is a better process than putting some advanced stats into a spreadsheet with no context and saying THIS GUY IS A BAD DEFENSIVE PLAYER COS THE PINK BOXES SAY SO.


How are you better understanding the context by removing data and relying solely on the eye test?

Your statement says "Context for the advanced stats you are seeing"... What does this look like? Please show me using data.

Is the Horvat sub-replacement example a reference to my past exchanges with Scurr? Interesting if so. You're not saying anything wrong in your first paragraph, but you're speaking about inference, not the data. You're saying WAR needs to be contextualized further, not that WAR itself is a useless stat. Much different than saying the statistics themselves are garbage.

If you hate that WAR is presented without context, provide the context that is generally applicable. What you're doing is being dismissive, providing no context yourself, and then saying everyone else is incorrect for using stats poorly. How is this productive?

Yes, using stats without the perfect contextual basis leaves them open for critique. Leaves the inference subject to question. So add the context. You're only adding eye-test qualifiers to dismiss statistical assertions, and that does not work.
 

Melvin

21/12/05
Sep 29, 2017
15,198
28,055
Montreal, QC
How are you better understanding the context by removing data and relying solely on the eye test?

Your statement says "Context for the advanced stats you are seeing"... What does this look like? Please show me using data.

Is the Horvat sub-replacement example a reference to my past exchanges with Scurr? Interesting if so. You're not saying anything wrong in your first paragraph, but you're speaking about inference, not the data. You're saying WAR needs to be contextualized further, not that WAR itself is a useless stat. Much different than saying the statistics themselves are garbage.

If you hate that WAR is presented without context, provide the context that is generally applicable. What you're doing is being dismissive, providing no context yourself, and then saying everyone else is incorrect for using stats poorly. How is this productive?

Yes, using stats without the perfect contextual basis leaves them open for critique. Leaves the inference subject to question. So add the context. You're only adding eye-test qualifiers to dismiss statistical assertions, and that does not work.

while I agree that MS tends to be overly dismissive, I think you have this a bit backwards. My problem with these metrics is that for the most part they are not backed by research. Show me the white paper or article that explains why these metrics have any value. The null hypothesis should be that they don’t. Stats should be assumed useless until they are backed by research that proves them useful.
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
while I agree that MS tends to be overly dismissive, I think you have this a bit backwards. My problem with these metrics is that for the most part they are not backed by research. Show me the white paper or article that explains why these metrics have any value. The null hypothesis should be that they don’t. Stats should be assumed useless until they are backed by research that proves them useful.


I don't think there ever will be a white paper on NHL analytics (is there one for baseball?). There are teams competing with each other, and companies that are doing more advanced data captures. All providing variable *value* to those teams that see value in them.

It's a little surprising that someone who has employed advanced statistics here as a predictive measure is also assuming they are useless? Why employ them while knowing this?

These stats have always been a way to reduce subjectivity. Reduce the game to a series of events rather than watcher X's feelings. There's value in that scope alone, even if they're not meant to conclude an outcome based upon many variables.
 

Melvin

21/12/05
Sep 29, 2017
15,198
28,055
Montreal, QC
I don't think there ever will be a white paper on NHL analytics (is there one for baseball?). There are teams competing with each other, and companies that are doing more advanced data captures. All providing variable *value* to those teams that see value in them.

It's a little surprising that someone who has employed advanced statistics here as a predictive measure is also assuming they are useless? Why employ them while knowing this?

These stats have always been a way to reduce subjectivity. Reduce the game to a series of events rather than watcher X's feelings. There's value in that scope alone, even if they're not meant to conclude an outcome based upon many variables.

There have been hundreds of papers written in baseball, and in hockey. I'm surprised you haven't seen them. This paper on pulling the goalie has been influential to NHL teams pulling their goalies earlier in recent years, as an example.

The thing about baseball is that it is built on decades of work, during which time people have argued, debated, tried different formulas, tested different things, failed, gone back to the drawing board, and performed testing and analysis under critical peer review to get us to the current state. Modern baseball WAR metrics can be traced directly back to research done on linear weights metrics published in the 1950s. In hockey, it's like we want to "catch up" by doing 70 years' worth of work in 2 years, and the way we do this is by fast-forwarding through all of the pesky "research, testing and analysis" stuff.

The way modern hockey analytics works is that someone publishes some numbers, and if they have enough Twitter followers, people will blindly accept them as valuable. Nobody will bother to test the numbers to see if they are useful. Nobody can answer what the year-to-year correlations of these numbers are, nobody will subject the numbers to any rigorous testing or analysis. They just get re-tweeted and re-posted all over social media at face value because they look cool and have a lot of red and blue. Well, that's not how science works. That is not the scientific method in action.

That isn't to say that I don't believe in these metrics, to some extent. I am open-minded to them, but I believe firmly in subjecting them to rigorous testing and analysis before I put much stock into them. What are the year-to-year correlations of these numbers? What is the value? Do they do a better job of projecting year-to-year who will be good and who won't, especially when taking into account context changes like, for example, TEAM changes?

I feel like as we've moved away from long-form discussion on blogs and message boards and towards short-form hot takes like Twitter and Instagram, it has become harder and harder to publish any sort of serious analysis that anyone will read, as the incentive is to just post a pretty chart on Twitter that will get re-tweeted tens of thousands of times without anyone conducting any sort of critical review or analysis. Imagine if real science worked this way? f*** publishing boring papers that take 50 people with PhDs to review meticulously for any errors and months of back and forth with the original authors before finally concluding something that is just step 1 to the next phase of other scientists needing to perform replication studies to see if they get the same results in different samples. f***, that could take years! Instead scientists should just draw some equation on a napkin, stare at it, say that should work, translate it into a pretty chart with a lot of red and blue and then post it on Twitter. That is how science should be done. Right? Right?

MS is 100% right to be skeptical of all these metrics, because as far as I am aware none of them have proven any reason why they are more reliable than even the dreaded eye test. No matter how much something "makes sense", it is worthless if it hasn't been tested. That is what the scientific method is all about! As one example, I have posted before about how mid-season CORSI does not project the 2nd half of the season better than traditional first-half rankings do. This is not being down on analytics, it's just me subjecting assumptions to a basic test. A LOT of people will post, in January, say, about how such-and-such team is in 20th place but they have good "underlying numbers" (i.e. CORSI) and thus they will be better in the second half. This is a very testable hypothesis. Do teams whose CORSI exceeds their rankings in the first half perform better in the second half? According to my analysis, the answer is no. But that doesn't stop a thousand people from assuming that it does, every year. That is not being data-driven, that is not being smart or scientific, that is just assuming that your hypothesis is true without testing it, which is no better than any traditional evaluation method including the eye test.
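For what it's worth, the split-half test described above is simple enough to sketch. This uses synthetic, seeded random numbers (not real team stats); with real data you'd swap in each team's actual first-half points, first-half Corsi, and second-half points:

```python
import random

# Synthetic example of the split-half test: does first-half Corsi predict
# second-half results better than first-half points do? All data invented.
random.seed(0)
n_teams = 32
first_half_points = [random.gauss(45, 8) for _ in range(n_teams)]
first_half_corsi = [random.gauss(50, 3) for _ in range(n_teams)]
# Second-half points are built from first-half points plus noise here,
# purely so the toy example has a known answer.
second_half_points = [0.6 * p + random.gauss(18, 6) for p in first_half_points]

def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The testable question: which first-half number tracks second-half results?
r_points = pearson(first_half_points, second_half_points)
r_corsi = pearson(first_half_corsi, second_half_points)
print(f"first-half points r={r_points:.2f}, first-half corsi r={r_corsi:.2f}")
```

The point isn't this particular toy result; it's that the "good underlying numbers" claim reduces to a one-correlation test anyone can run on real standings data instead of assuming the answer.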
 

Bleach Clean

Registered User
Aug 9, 2006
27,055
6,624
There have been hundreds of papers written in baseball, and in hockey. I'm surprised you haven't seen them. This paper on pulling the goalie, as an example, has influenced NHL teams to pull their goalies earlier in recent years.

The thing about baseball is that it is built on decades of work during which time people have argued, debated, tried different formulas, tested different things, failed, gone back to the drawing board, and performed testing and analysis under critical peer review to get us to the current state. Modern baseball WAR metrics can be traced directly back to research done on linear weights metrics published in the 1950s. In hockey, it's like we want to "catch up" by doing 70 years' worth of work in 2 years, and the way we do this is by fast-forwarding through all of the pesky "research, testing and analysis" stuff.

The way modern hockey analytics works is that someone publishes some numbers, and if they have enough Twitter followers, people blindly accept them as valuable. Nobody bothers to test the numbers to see if they are useful. Nobody can tell you what their year-to-year correlations are; nobody subjects them to any rigorous testing or analysis. They just get re-tweeted and re-posted all over social media at face value because they look cool and have a lot of red and blue. Well, that's not how science works. That is not the scientific method in action.

That isn't to say that I don't believe in these metrics to some extent. I am open-minded to them, but I believe firmly in subjecting them to rigorous testing and analysis before I put much stock in them. What are the year-to-year correlations of these numbers? What is their value? Do they do a better job of projecting, year to year, who will be good and who won't, especially when accounting for context changes like, for example, TEAM changes?



Correct me if I'm wrong: you've used these very same nascent analytics in discussions here to make sense of events, but you think they're worthless? Am I interpreting what you've written properly?

Forget for a moment that WAR charts are an easy grab. What you're doing here is throwing out _all_ interim data because it hasn't yet been tested by the scientific method. And while that may be due process per the scientific method, was this same logic applied when linear weights metrics were in use, before the peer-reviewed modern WAR metrics baseball has now?

I have not read the papers on SSRN. Will look into them now.

Fundamentally, more shot attempts leading to more goals leading to more wins is the basis for Corsi. If that's still the basis, no matter how primitive the current measurements are, we should get to where baseball is now by following a similar path. The building blocks are there.
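As a rough illustration of those building blocks, this is what Corsi boils down to at the event level: count every shot attempt (goals, shots on net, misses, blocks) for and against while a player is on the ice. The event names and teams below are invented for the example, not taken from any real data feed:

```python
# Minimal sketch of the Corsi calculation from on-ice events.
ATTEMPT_EVENTS = {"GOAL", "SHOT", "MISS", "BLOCK"}

def corsi_for_pct(events, team):
    """events: (event_type, team) tuples logged while our player is on ice."""
    cf = sum(1 for ev, t in events if ev in ATTEMPT_EVENTS and t == team)
    ca = sum(1 for ev, t in events if ev in ATTEMPT_EVENTS and t != team)
    return 100.0 * cf / (cf + ca) if (cf + ca) else None

sample = [("SHOT", "VAN"), ("MISS", "VAN"), ("GOAL", "EDM"),
          ("BLOCK", "VAN"), ("SHOT", "EDM"), ("FACEOFF", "EDM")]
print(corsi_for_pct(sample, "VAN"))  # 3 attempts for, 2 against -> 60.0
```

Simple as it is, this is the same "counting stat first, interpretation later" path baseball's linear weights research started from.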
 

elitepete

Registered User
Jan 30, 2017
8,136
5,455
Vancouver
Please explain. It's just shot-attempt differentials, ranging from good players to bad, measured against the key player you are tracking.

I've remarked here that QoC is being diminished by analysts like McCurdy. Instead, he favours QoT and TOI as better markers. I tend to agree. QoC is more erratic. Coaches cannot match their best lines against a given opponent with the same efficiency that they roll out their own lines. However, does this invalidate QoC altogether? To me, no.
Sorry for replying so late.

What I've heard repeatedly from people who follow analytics closely is that over time, all players' QoC evens out. You'd have to ask someone who is deeper into this stuff than me.
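For anyone curious what a TOI-weighted QoC number even looks like, here is a toy sketch: average the quality rating of a player's opponents, weighted by head-to-head ice time. The ratings and shared minutes are invented for illustration, not any published QoC formula:

```python
# Toy TOI-weighted quality-of-competition calculation.
def qoc(matchups):
    """matchups: list of (opponent_rating, shared_toi_minutes)."""
    total_toi = sum(toi for _, toi in matchups)
    return sum(rating * toi for rating, toi in matchups) / total_toi

# Two hypothetical defencemen: one fed top-line minutes, one sheltered
tough = [(2.0, 120.0), (1.5, 90.0), (0.2, 40.0)]
easy  = [(0.3, 110.0), (0.1, 100.0), (1.8, 30.0)]
print(round(qoc(tough), 2), round(qoc(easy), 2))
```

The "it evens out over time" claim is then just the observation that over a full season most regulars' weighted averages drift toward the league middle, while QoT and raw TOI stay far more spread out.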
 

HockeyNightInAsia

Registered User
Mar 22, 2020
277
187
Forgive me for not going too deeply into debates like this. Just wanna make one quick point that Quinn Hughes handled tougher minutes just fine as a rookie, but that was with Tanev of course. I remember reading analysis back then that he took over tougher matchup minutes pretty quickly by Christmas... at least on par with the Edler pairing, and that was certainly not "sheltered". And you can't say he struggled in the tougher bubble playoff minutes, can you?

Now, obviously, Hughes struggled defensively last season. But I say there is more to it than just him alone.
 
