Prospect Info: Luke Hughes - part III

Status
Not open for further replies.

Captain3rdLine

Registered User
Sep 24, 2020
6,838
8,033
People can shit on Bader all they want, but I highly doubt you'd do badly if you followed his model
Lol his rankings are terrible.
Don’t really need to say anything else.

2020: (STI posted this in another thread)
10 Stutzle
13 Pashin
15 Grans
18 Lundell
20 Quinn
22 Foerster
23 Colangelo
25 Johannesson
27 Ovchinnikov
30 Finley
32 Poirier
Unranked: Holloway
Unranked: Guhle
Unranked: Reichel
Unranked: Schneider
Unranked: Neighbours
Unranked: Greig

There are others not mentioned above that look bad right now, such as Sanderson (14) and Zary (11).

Stutzle, Lundell, and Schneider are all already doing well in the NHL and look much better than where he ranked them. His rankings are much worse than other people's, and they're particularly bad for defensemen. For example, Schneider was ranked poorly because he's a strong shutdown defenseman and the model goes off points alone. You could watch a five-minute highlight tape and come up with better-performing rankings.

So, in conclusion, you wouldn't do badly.
You would do horribly.

His 2021 rankings might look worse than his 2020 ones a year or two from now.
 
Last edited:

glenwo2

LINDY RUFF NEEDS VIAGRA!!
Oct 18, 2008
52,095
24,382
New Jersey(No Fanz!)
If you're citing Byron Bader, it's pretty much an admission you know nothing about hockey.

If you're citing Bader as a way to put down Luke Hughes, you're probably less equipped to talk about hockey than your average jellyfish.
To be quite honest, I'd prefer the opinion of an actual jellyfish over CJ's.
 

Triumph

Registered User
Oct 2, 2007
13,576
13,992
Luke Hughes isn't going to shoot 20% again, but statistical models don't know that, so statistical models are definitely going to overrate him. They are likely also overrating the players mentioned there.
 

SteveCangialosi123

Registered User
Feb 17, 2012
28,305
49,042
NJ
The thing about Luke Hughes is that I don’t really care what his shooting percentage is. The goals certainly added to the hype, but his unbelievable ability to cut through defenses like butter is what really stands out. The fact that he will be one of the best skaters in the entire NHL the second he enters the league is why he projects to be an elite player.
 

Devs3cups

Wind of Change
Sponsor
May 8, 2010
20,359
35,511
The thing about Luke Hughes is that I don’t really care what his shooting percentage is. The goals certainly added to the hype, but his unbelievable ability to cut through defenses like butter is what really stands out. The fact that he will be one of the best skaters in the entire NHL the second he enters the league is why he projects to be an elite player.
I agree.

The stats were impressive this year, but even if he doesn't shoot 20% next year and the points drop a little, the sheer ability is even more remarkable and evident to me.
 
  • Like
Reactions: glenwo2

Blackjack

Registered User
Feb 13, 2003
18,205
15,084
keyjhboardd +bro]ke
Lol his rankings are terrible.
Don’t really need to say anything else.

2020: (STI posted this in another thread)
10 Stutzle
13 Pashin
15 Grans
18 Lundell
20 Quinn
22 Foerster
23 Colangelo
25 Johannesson
27 Ovchinnikov
30 Finley
32 Poirier
Unranked: Holloway
Unranked: Guhle
Unranked: Reichel
Unranked: Schneider
Unranked: Neighbours
Unranked: Greig

There are others not mentioned above that look bad right now, such as Sanderson (14) and Zary (11).

Stutzle, Lundell, and Schneider are all already doing well in the NHL and look much better than where he ranked them. His rankings are much worse than other people's, and they're particularly bad for defensemen. For example, Schneider was ranked poorly because he's a strong shutdown defenseman and the model goes off points alone. You could watch a five-minute highlight tape and come up with better-performing rankings.

So, in conclusion, you wouldn't do badly.
You would do horribly.

His 2021 rankings might look worse than his 2020 ones a year or two from now.

How does this guy still have so much credibility among hockey bloggers and statheads? You'd think they'd take one look at his crack pipe rankings and discard them out of hand.

Like, if someone created a doctor bot and you went in and entered the symptoms for a common cold, and it came back with a recommended treatment of gluing your ankle to the side of your head, you'd probably figure out that there's something wrong with the bot, not sit around coming up with weird theories about why that treatment actually makes sense.
 

Triumph

Registered User
Oct 2, 2007
13,576
13,992
How does this guy still have so much credibility among hockey bloggers and statheads? You'd think they'd take one look at his crack pipe rankings and discard them out of hand.

Like, if someone created a doctor bot and you went in and entered the symptoms for a common cold, and it came back with a recommended treatment of gluing your ankle to the side of your head, you'd probably figure out that there's something wrong with the bot, not sit around coming up with weird theories about why that treatment actually makes sense.

Because we understand what he's doing. I don't like the way the guy talks on Twitter, but I understand the project. Nobody would ever draft with a computer ranking, it's silly. Where Bader's rankings are much more helpful is in the later rounds when teams continually pass on high scoring players to take lesser guys.
 

Blackjack

Registered User
Feb 13, 2003
18,205
15,084
keyjhboardd +bro]ke
Because we understand what he's doing. I don't like the way the guy talks on Twitter, but I understand the project. Nobody would ever draft with a computer ranking, it's silly. Where Bader's rankings are much more helpful is in the later rounds when teams continually pass on high scoring players to take lesser guys.

His project is to replace subjective observations of hockey prospects with a quantitative model that can spit out a reliable projection based on age-adjusted production in U20, collegiate, or European senior leagues. That's it.

There's no evidence that his model is useful. There's no evidence that it provides any insight beyond the most banal shit, like the fact that high-scoring kids are generally more likely to become high-scoring adults. And there's certainly no evidence that the model is more useful in later rounds than earlier rounds; you just made that up.
 

glenwo2

LINDY RUFF NEEDS VIAGRA!!
Oct 18, 2008
52,095
24,382
New Jersey(No Fanz!)
His project is to replace subjective observations of hockey prospects with a quantitative model that can spit out a reliable projection based on age-adjusted production in U20, collegiate, or European senior leagues. That's it.

There's no evidence that his model is useful. There's no evidence that it provides any insight beyond the most banal shit, like the fact that high-scoring kids are generally more likely to become high-scoring adults. And there's certainly no evidence that the model is more useful in later rounds than earlier rounds; you just made that up.
In his case, remove the "b".
 

Unknown Caller

Registered User
Apr 30, 2009
10,216
7,711
Because we understand what he's doing. I don't like the way the guy talks on Twitter, but I understand the project. Nobody would ever draft with a computer ranking, it's silly. Where Bader's rankings are much more helpful is in the later rounds when teams continually pass on high scoring players to take lesser guys.

This. The model is useful for projecting the statistical offensive production of prospects over their career, and maybe the likelihood the player becomes an NHL regular. That's it.

For all of the poor projections from the model that were posted here earlier, you could do the exact same thing by showing Bader's projections for guys severely underranked versus the consensus scouting rank. Those would look just as absurd if you were to handpick a few players and post screenshots.

At the end of the day, the people ripping Bader are being disingenuous about the usefulness of his model. You can't rely on it in isolation and it certainly won't bat 100% (nobody does). But if you're looking to project out the statistical offensive production of a prospect, it's a very useful tool.
 

swiiscompos

Registered User
Dec 9, 2018
1,050
1,511
London, UK
This. The model is useful for projecting the statistical offensive production of prospects over their career, and maybe the likelihood the player becomes an NHL regular. That's it.

For all of the poor projections from the model that were posted here earlier, you could do the exact same thing by showing Bader's projections for guys severely underranked versus the consensus scouting rank. Those would look just as absurd if you were to handpick a few players and post screenshots.

At the end of the day, the people ripping Bader are being disingenuous about the usefulness of his model. You can't rely on it in isolation and it certainly won't bat 100% (nobody does). But if you're looking to project out the statistical offensive production of a prospect, it's a very useful tool.
"Models" like this one are really not that useful, and actually potentially harmful to decision making (yes, I know, we don't make any decision and the team doesn't care about our opinion).
Our brains are extremely good at analysing complex patterns, which is why the "eye test" or intuition is so important. If you've thought hard about a decision and understand the factors at play, then data can be extremely useful to help you understand specific things you know you're missing or are uncertain about. But aimlessly putting together a lot of data, which is what those scores are doing, will just create noise, and noise is one of the biggest issues in decision making.

I'm pretty sure I could create a score that's just as effective (useless) based on point production with a factor for the position, a factor for the age, and another factor for the league a player plays in.
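Just to illustrate what I mean, something like the sketch below would probably get you most of the way there. Every factor value here is invented on the spot for the example; none of it comes from Bader's (or anyone's) actual model.

```python
# Illustrative only: a deliberately naive prospect score built from nothing but
# point production, position, age, and league. All factor values are assumptions
# made up for this sketch.

LEAGUE_FACTOR = {"NHL": 1.00, "AHL": 0.75, "SHL": 0.60, "NCAA": 0.45, "OHL": 0.30}  # assumed
POSITION_FACTOR = {"F": 1.00, "D": 1.50}  # defensemen score less, so bump them (assumed)

def naive_score(points: int, games: int, position: str, age: float, league: str) -> float:
    """Points per game, scaled by league, position, and a simple age bonus."""
    ppg = points / games
    age_factor = 1.0 + max(0.0, 18.0 - age) * 0.15  # younger players get a boost (assumed)
    return ppg * LEAGUE_FACTOR[league] * POSITION_FACTOR[position] * age_factor

# Example: a 17.8-year-old OHL forward with 80 points in 60 games
print(round(naive_score(80, 60, "F", 17.8, "OHL"), 3))
```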

Now, with that being said, I understand why those models are successful on social media. It gives something to talk about to people whose only research about a player has been watching a couple of highlight reels.

And yes, teams have stats departments, but they don't really do that kind of stuff (or at least I hope not). Rather, they answer specific questions to help the coach tweak the system against a specific team, or to help the scouting department better understand an aspect of a prospect's play.
 

Monsieur Verdoux

Registered User
Dec 6, 2016
1,957
2,942
Finland
"Models" like this one are really not that useful, and actually potentially harmful to decision making (yes, I know, we don't make any decision and the team doesn't care about our opinion).
Our brains are extremely good at analysing complex patterns, which is why the "eye test" or intuition is so important. If you've thought hard about a decision and understand the factors at play, then data can be extremely useful to help you understand specific things you know you're missing or are uncertain about. But aimlessly putting together a lot of data, which is what those scores are doing, will just create noise, and noise is one of the biggest issues in decision making.

I'm pretty sure I could create a score that's just as effective (useless) based on point production with a factor for the position, a factor for the age, and another factor for the league a player plays in.

Now, with that being said, I understand why those models are successful on social media. It gives something to talk about to people whose only research about a player has been watching a couple of highlight reels.

And yes, teams have stats departments, but they don't really do that kind of stuff (or at least I hope not). Rather, they answer specific questions to help the coach tweak the system against a specific team, or to help the scouting department better understand an aspect of a prospect's play.
I agree. It's also problematic that many hockey fans take these models as the truth and don't think about the context. I like what Scott Wheeler has written on this subject.

But I can’t effectively blend what I see (all of those viewings) with what the data tells me (the raw production and the growing amount of publicly-available analytics) without understanding everything else that influences those outcomes.

Peripheral influences are often overlooked when NHL fans pivot their focus to prospects ahead of each draft. In the NHL, you can look at the NHL’s scoring race and quickly determine who the best players in the world are. Even as data and the casual fan’s understanding of the game grow exponentially at the NHL level, it only takes a couple of clicks on the NHL’s website before you have a pretty clear picture as to why Connor McDavid, Auston Matthews and Nathan MacKinnon are the best players in the league these days. (...)

But that NHL game-view creates some serious pitfalls when you transport those biases and ways of thinking to the way you approach prospect evaluation.

The assumption becomes that point-per-game Player X on Team Y in League Z is better than 0.75 point-per-game Player A on Team B in League C. And that often isn’t the case. The parity that exists in the NHL doesn’t exist anywhere else in hockey.

In junior hockey, a player’s production changes dramatically from line to line (there’s a fine line between playing on the Chicago Steel’s first line because you’re a stylistic fit there or fourth line because of the depth in front of you that makes a big difference in a player’s year) or team to team (look no further than 2021 prospect Isaac Belliveau racking up 53 points on a stacked Rimouski team a year ago only to start his draft year with five points in 16 games on a now-rebuilding Rimouski team that looks a lot different). (...)

Only by watching those players, learning their linemates, and understanding the strength (or lack thereof) of their teams can we come to the conclusion that in different roles, or with different linemates, their outcomes could vary. (...)

I try to consider all of these things (age, team, role, available data, etc.) and then use relationships with coaches, players, agents and scouts to fill in around the edges and build as complete a picture of each player as I can.
Of course a regular fan can't consider all of these things, but it's also important to remember that points aren't everything when evaluating a prospect.

The whole text is here:

 

AfroThunder396

[citation needed]
Jan 8, 2006
39,143
23,271
Miami, FL
Of course points aren't everything, but a model is only as good as the data going into it. If you want to evaluate non-offensive metrics then you need some quantifiable data to put in. Hits? Blocked shots? Data that varies in quality from arena to arena? Something subjective like "defense"?

Although I think the "star" designation is vague and arguable enough to the point of being worthless, guys generally aren't considered stars because of their defensive play. Stars are guys who score a lot of points. And in order to play in the league today you need a certain baseline of offensive competency, otherwise you have an army of Pandolfos and Zharkovs and McLeods.

And people will ignore the fact that NHLe has gone on to justify a lot of eye-test picks. Mukhamadullin was drafted because of his raw talent and measurables and what he COULD be, not because of his production. Most people hated the pick. But then the tools translated, he started producing, his NHLe skyrocketed in a year, and now he's in Utica and everyone considers him a top prospect. Same deal with Tyce, an overager drafted on raw talent who immediately exploded after he was drafted.

The inherent flaw in NHLe is not that it overprioritizes offense; it's that it assumes linear development. If you're 19 years old, it compares you to other 19-year-olds, so if you're not keeping up with them, it looks bad for you - when in reality we know that every player has a unique development path.
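For anyone unfamiliar, the basic NHLe mechanic is just a scoring rate times a league translation factor, projected over 82 games - roughly like the sketch below. The factor values are rough illustrative numbers I'm assuming for the example, not the ones Bader or any particular model actually uses.

```python
# Bare-bones NHLe-style translation: scoring rate times an assumed league factor,
# projected over an 82-game season. The factor values are illustrative only.

LEAGUE_TRANSLATION = {"KHL": 0.77, "SHL": 0.57, "AHL": 0.47, "NCAA": 0.41, "OHL": 0.30}

def nhle(points: int, games: int, league: str, season_length: int = 82) -> float:
    """Translate a scoring rate in `league` into an NHL-equivalent point pace."""
    return (points / games) * LEAGUE_TRANSLATION[league] * season_length

# The "linear development" criticism: this translated pace then gets read against
# everyone else of the same age, regardless of each player's individual path.
print(round(nhle(39, 41, "NCAA")))  # ~32-point NHL pace under these assumed factors
```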
 
  • Like
Reactions: My3Sons

NJDfan86

Registered User
Dec 29, 2021
913
1,249
Of course points aren't everything, but a model is only as good as the data going into it. If you want to evaluate non-offensive metrics then you need some quantifiable data to put in. Hits? Blocked shots? Data that varies in quality from arena to arena? Something subjective like "defense"?

Although I think the "star" designation is vague and arguable enough to the point of being worthless, guys generally aren't considered stars because of their defensive play. Stars are guys who score a lot of points. And in order to play in the league today you need a certain baseline of offensive competency, otherwise you have an army of Pandolfos and Zharkovs and McLeods.

And people will ignore the fact that NHLe has gone on to justify a lot of eye-test picks. Mukhamadullin was drafted because of his raw talent and measurables and what he COULD be, not because of his production. Most people hated the pick. But then the tools translated, he started producing, his NHLe skyrocketed in a year, and now he's in Utica and everyone considers him a top prospect. Same deal with Tyce, an overager drafted on raw talent who immediately exploded after he was drafted.

The inherent flaw in NHLe is not that it overprioritizes offense; it's that it assumes linear development. If you're 19 years old, it compares you to other 19-year-olds, so if you're not keeping up with them, it looks bad for you - when in reality we know that every player has a unique development path.

A Star in Bader's model is just a career .7/.45 PPG (F/D), so his definition isn't really ambiguous. I suppose you can quibble with the terminology, but he is pretty up front about what a Star is in his model.
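Taken literally, that definition is simple enough to write in a couple of lines - this is just the cutoff from above, nothing fancier:

```python
# The "Star" cutoff exactly as described above: career 0.7 PPG for forwards, 0.45 for D.
def is_star(career_ppg: float, position: str) -> bool:
    return career_ppg >= (0.70 if position == "F" else 0.45)

print(is_star(0.72, "F"), is_star(0.40, "D"))  # True False
```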

As you say, non-offensive impacts are tricky to project forward for non-NHL players, so it's probably for the best that they are left out.
 

swiiscompos

Registered User
Dec 9, 2018
1,050
1,511
London, UK
The question with all those models is whether they provide any additional value.

@AfroThunder396 You said that a model is only as good as the data going into it, and that's true. However, you can have all the greatest data in the world and still make a useless model with it, which is extremely common nowadays when every company wants to be "data driven".

To be useful, statistics need to answer a specific and intentional question. "What is the probability of this player becoming a star?" is certainly not specific. One might say that ".7/.45 PPG (F/D)" is specific, but it's not, because there are way too many elements at play, and the probability that a model completely misses one of those elements - one that would be obvious to an observer - is very high.

Let's imagine a prospect is really big for his age, has great hands, and completely dominates a junior league. He might tick all of the checkboxes in the model and be said to be very likely to become a star, but if he happens to be too slow, he might not even make it in the NHL. Not only will the model be wrong about this player, it will also skew the way other prospects' results are perceived.

I'm in a different but also extremely competitive industry that relies heavily on skills, training, and to some extent natural talent. When I was in college I had a very good idea of who would make it or not (which happened to be a small percentage, despite it being a top school), and it turns out I was right in most cases. It had nothing to do with grades, academic achievements, or anything else like that. It had everything to do with work ethic, state of mind, emotional maturity, entrepreneurial spirit, and other similar traits that a model could never pick up on. There were students who seemed to be involved in all kinds of great projects and had great skill, but then you realized they were always being pushed by their parents: none of them made it. Other students had straight A's and were over-achieving academically, but then you realized they were avoiding the tough teachers who would make them work extra hard and grade them more harshly: none of them made it. You had the uber-talented ones who would not take criticism well because they were not used to it: none of them... you get the gist.


For hockey prospects, those less quantifiable aspects are at least as important as skill and current production. When a prospect has those qualities from a young age, it will show up in the production. But a prospect could have amazing production while missing one of those aspects (like the student who was always pushed by his parents), and that's enough to make him a bust. On the other hand, a prospect could have lackluster production for any reason and still show plenty of potential to the attentive eye. Of course, as simple hockey fans we don't have access to this info, but that doesn't make the generic models any more useful.
 

Triumph

Registered User
Oct 2, 2007
13,576
13,992
The question with all those models is whether they provide any additional value.

@AfroThunder396 You said that a model is only as good as the data going into it, and that's true. However, you can have all the greatest data in the world and still make a useless model with it, which is extremely common nowadays when every company wants to be "data driven".

To be useful, statistics need to answer a specific and intentional question. "What is the probability of this player becoming a star?" is certainly not specific. One might say that ".7/.45 PPG (F/D)" is specific, but it's not, because there are way too many elements at play, and the probability that a model completely misses one of those elements - one that would be obvious to an observer - is very high.

Let's imagine a prospect is really big for his age, has great hands, and completely dominates a junior league. He might tick all of the checkboxes in the model and be said to be very likely to become a star, but if he happens to be too slow, he might not even make it in the NHL. Not only will the model be wrong about this player, it will also skew the way other prospects' results are perceived.

I'm in a different but also extremely competitive industry that relies heavily on skills, training, and to some extent natural talent. When I was in college I had a very good idea of who would make it or not (which happened to be a small percentage, despite it being a top school), and it turns out I was right in most cases. It had nothing to do with grades, academic achievements, or anything else like that. It had everything to do with work ethic, state of mind, emotional maturity, entrepreneurial spirit, and other similar traits that a model could never pick up on. There were students who seemed to be involved in all kinds of great projects and had great skill, but then you realized they were always being pushed by their parents: none of them made it. Other students had straight A's and were over-achieving academically, but then you realized they were avoiding the tough teachers who would make them work extra hard and grade them more harshly: none of them made it. You had the uber-talented ones who would not take criticism well because they were not used to it: none of them... you get the gist.


For hockey prospects, those less quantifiable aspects are at least as important as skill and current production. When a prospect has those qualities from a young age, it will show up in the production. But a prospect could have amazing production while missing one of those aspects (like the student who was always pushed by his parents), and that's enough to make him a bust. On the other hand, a prospect could have lackluster production for any reason and still show plenty of potential to the attentive eye. Of course, as simple hockey fans we don't have access to this info, but that doesn't make the generic models any more useful.

You're basically saying you scouted whatever industry you're in perfectly, and that's great, but hockey scouts don't scout hockey anywhere close to perfectly, and up until recently they made gigantic errors in evaluation. So sure, looking at things holistically is great, and having this top-down approach is great, and the Devils were evaluating players qualitatively in tons of different categories in the mid 00s, and where did that approach get them?

Stat models like this can tell you who to look at. How do teams even figure that out, which players to scout? Have you thought about that? And yeah, as I keep saying, you cannot pick by a model like this, but to me it's simple - you've got to have an argument for why you're going against the model. Why is this player special, either overrated by the model or underrated? Is there a compelling reason to either disregard their success or disregard their relative failure? And plenty of times there is, but sometimes there just isn't. We watched basically every team pass over Brayden Point and Alex Debrincat, and these guys haven't hit their 30th birthday yet. Why wasn't Panarin ever drafted?

I don't consider this information 'noise'. What's noisy about it is getting into the various probabilities and comparisons and so on, and obviously any halfway decent model that was proprietary would also start guessing about ice time and 5v5 scoring rates and all of that stuff. But you're arguing for traditional scouting and traditional scouting alone just isn't enough.
 

bossram

Registered User
Sep 25, 2013
15,810
15,405
Victoria
"Models" like this one are really not that useful, and actually potentially harmful to decision making (yes, I know, we don't make any decision and the team doesn't care about our opinion).
Our brains are extremely good at analysing complex patterns, which is why the "eye test" or intuition is so important. If you've thought hard about a decision and understand the factors at play, then data can be extremely useful to help you understand specific things you know you're missing or are uncertain about. But aimlessly putting together a lot of data, which is what those scores are doing, will just create noise, and noise is one of the biggest issues in decision making.

I'm pretty sure I could create a score that's just as effective (useless) based on point production with a factor for the position, a factor for the age, and another factor for the league a player plays in.

Now, with that being said, I understand why those models are successful on social media. It gives something to talk about to people whose only research about a player has been watching a couple of highlight reels.

And yes, teams have stats departments, but they don't really do that kind of stuff (or at least I hope not). Rather, they answer specific questions to help the coach tweak the system against a specific team, or to help the scouting department better understand an aspect of a prospect's play.
This seems like quite a misunderstanding about how the eye test or our brains work. They're really not good at recognizing true patterns over large samples. Really, humans are laughably bad at it. The human brain doesn't so much recognize patterns as invent them. We see something, make note of it, see it again, and attribute it to some true phenomenon or pattern, when in reality it's just random. Humans survive on confirmation bias. This is why most people do not intuitively understand statistics, and why Apple Music/Spotify's "random" shuffle modes aren't actually random: people wouldn't like it if they were.

IIRC, several years ago the guys at Canucks Army (a Canucks blog) created a scouting "model" that just selected the highest-scoring first-time draft-eligible CHL player available at each draft slot. They backtested this "model" over an 11-draft sample, comparing how the model would have drafted with every team's selections against how the actual teams drafted, and found that it out-drafted half the teams in the NHL - teams with reams of eye-test scouts and millions of dollars at their disposal. The teams were worse than a "model" that excluded any player outside the CHL and didn't consider how the players played one iota. The model only looked at points (different iterations used age-adjusted points). You could have an average-drafting NHL organization without watching the players at all! Says something about the eye test, huh.
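The rule they tested is trivially simple - something like the sketch below. The data structures, team codes, and numbers here are made up for illustration (and, as noted, some of their iterations used age-adjusted points rather than raw points):

```python
# Sketch of the "take the best remaining CHL scorer at every pick" rule described above.
# Player records, team codes, and numbers are placeholders invented for the example.

def points_model_draft(draft_order, chl_eligibles):
    """At each pick, select the highest-scoring first-time-eligible CHL player left."""
    available = sorted(chl_eligibles, key=lambda p: p["points"], reverse=True)
    selections = []
    for pick_number, team in draft_order:
        if not available:
            break
        player = available.pop(0)  # best remaining scorer
        selections.append((pick_number, team, player["name"]))
    return selections

order = [(1, "NJD"), (2, "VAN")]
pool = [{"name": "Player A", "points": 95}, {"name": "Player B", "points": 110}]
print(points_model_draft(order, pool))  # Player B goes 1st, Player A 2nd
```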

Now, I wouldn't draft based on Bader's model. But the project is interesting: His model gives you a list of historical comparables of players of a similar size that scored at a similar rate, and how likely that kind of player is to make the NHL and/or become a star. That can give you a baseline about how this kind of prospect would turn out. Now ideally, you watch the player to try to make a determination of whether he'll outperform or underperform that historical average grouping of players.
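Roughly, that comparables idea looks something like the sketch below - the tolerances, field names, and records are all invented for illustration, so don't take it as Bader's actual method:

```python
# Rough idea of a "historical comparables" lookup: find past players of similar size and
# draft-year scoring rate, then report how often that group made the NHL or hit the star
# threshold. Tolerances, field names, and records are all invented for illustration.

def comparables(height_in, draft_year_ppg, history, h_tol=1.0, ppg_tol=0.15):
    group = [p for p in history
             if abs(p["height_in"] - height_in) <= h_tol
             and abs(p["draft_year_ppg"] - draft_year_ppg) <= ppg_tol]
    if not group:
        return None
    return {
        "comparables": len(group),
        "nhl_rate": sum(p["made_nhl"] for p in group) / len(group),
        "star_rate": sum(p["became_star"] for p in group) / len(group),
    }

history = [  # placeholder records, not real data
    {"height_in": 74, "draft_year_ppg": 1.05, "made_nhl": True,  "became_star": True},
    {"height_in": 73, "draft_year_ppg": 0.95, "made_nhl": True,  "became_star": False},
    {"height_in": 74, "draft_year_ppg": 1.10, "made_nhl": False, "became_star": False},
]
print(comparables(74, 1.00, history))  # e.g. {'comparables': 3, 'nhl_rate': 0.67, 'star_rate': 0.33}
```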

 

swiiscompos

Registered User
Dec 9, 2018
1,050
1,511
London, UK
You're basically saying you scouted whatever industry you're in perfectly, and that's great, but hockey scouts don't scout hockey anywhere close to perfectly, and up until recently they made gigantic errors in evaluation. So sure, looking at things holistically is great, and having this top-down approach is great, and the Devils were evaluating players qualitatively in tons of different categories in the mid 00s, and where did that approach get them?

Stat models like this can tell you who to look at. How do teams even figure that out, which players to scout? Have you thought about that? And yeah, as I keep saying, you cannot pick by a model like this, but to me it's simple - you've got to have an argument for why you're going against the model. Why is this player special, either overrated by the model or underrated? Is there a compelling reason to either disregard their success or disregard their relative failure? And plenty of times there is, but sometimes there just isn't. We watched basically every team pass over Brayden Point and Alex Debrincat, and these guys haven't hit their 30th birthday yet. Why wasn't Panarin ever drafted?

I don't consider this information 'noise'. What's noisy about it is getting into the various probabilities and comparisons and so on, and obviously any halfway decent model that was proprietary would also start guessing about ice time and 5v5 scoring rates and all of that stuff. But you're arguing for traditional scouting and traditional scouting alone just isn't enough.
I don't argue for traditional scouting; stats are extremely important. I simply argue against giving too much importance to generalist models. The more precise the question they are trying to answer, the more likely stats are to be useful. The more "advanced" they are, the more likely they are to miss things and to be biased in different ways. This is obvious in science, where studies try to have a very narrow scope because it makes it easier to avoid biases in the data (and it's why, during Covid, people would quote all kinds of studies that were saying the opposite of each other), but it's also true in quantitative trading, for example, where most decisions are usually first made based on a human understanding of a situation, which is only then further analysed with stats.
 

swiiscompos

Registered User
Dec 9, 2018
1,050
1,511
London, UK
This seems like quite a misunderstanding about how the eye test or our brains work. They're really not good at recognizing true patterns over large samples. Really, humans are laughably bad at it. The human brain doesn't so much recognize patterns as invent them. We see something, make note of it, see it again, and attribute it to some true phenomenon or pattern, when in reality it's just random. Humans survive on confirmation bias. This is why most people do not intuitively understand statistics, and why Apple Music/Spotify's "random" shuffle modes aren't actually random: people wouldn't like it if they were.
The human brain has all kinds of biases, but while it's terrible at processing logic or remembering facts, it is extremely good at recognizing patterns, which is why we are still so much better than even the most powerful computers at recognizing things. It's also why oncology still works with human doctors (they're spending billions to develop cancerous-cell recognition software, but 99% of the work is still done by pathologists looking at cells through their microscopes). It's also why computers suck so much at reading emotions. The human brain - really the mammal brain in general - is ultra-specialized for pattern recognition; it's a huge evolutionary advantage.

Part of what makes us so good at pattern recognition is that our brains tend to fill in missing data (which is why you can draw a stylized cat with a couple of lines and people will still recognize it as a cat). This is a feature, not a bug. Of course it means that if we are unaware of our biases we might have a tendency to make things up, but people who are used to working with data learn to avoid those biases.

Now, models based on historical data are a whole other story. It's actually trivial to make models that outperform past decisions; that doesn't mean they're useful at all for predicting the future.
 

swiiscompos

Registered User
Dec 9, 2018
1,050
1,511
London, UK
Stat models like this can tell you who to look at. How do teams even figure that out, which players to scout? Have you thought about that? And yeah, as I keep saying, you cannot pick by a model like this, but to me it's simple - you've got to have an argument for why you're going against the model. Why is this player special, either overrated by the model or underrated? Is there a compelling reason to either disregard their success or disregard their relative failure? And plenty of times there is, but sometimes there just isn't. We watched basically every team pass over Brayden Point and Alex Debrincat, and these guys haven't hit their 30th birthday yet. Why wasn't Panarin ever drafted?
I agree 100% with that part: a model could be designed to try to find potential hidden gems. It wouldn't mean those players would be good, but it could tell the scouts that they should have a look. With that being said, I'm curious how Point, Debrincat, or Panarin would have scored on the current models in their draft-eligible years.
 

Triumph

Registered User
Oct 2, 2007
13,576
13,992
I don't argue for traditional scouting; stats are extremely important. I simply argue against giving too much importance to generalist models. The more precise the question they are trying to answer, the more likely stats are to be useful. The more "advanced" they are, the more likely they are to miss things and to be biased in different ways. This is obvious in science, where studies try to have a very narrow scope because it makes it easier to avoid biases in the data (and it's why, during Covid, people would quote all kinds of studies that were saying the opposite of each other), but it's also true in quantitative trading, for example, where most decisions are usually first made based on a human understanding of a situation, which is only then further analysed with stats.

Comparing this to scientific questions doesn't make sense, because science, even medical science with its attendant issues, works on more obvious causal chains than the development of prospects from ages 17 to 25. I think asking very specific questions in this arena is generally a terrible idea and will lead nowhere - you could ask, e.g., what the effect of coming to development camp versus not coming to it is, and your results will just be nonsense, because you have no controls over anything else in the experiment. All a generalist model seeks to do is step back from the specificity that tends to get prospect evaluators in trouble - falling in love with certain aspects of a player while not assessing the most important thing: can this player ultimately provide offense in the NHL? Most of what makes a hockey player good springs from that, and even if their offense fails them at the highest level, they can sometimes get by with other stuff.

I don't know enough about quantitative trading to say more about it except that I'm pretty sure that's what Sunny Mehta did before and after he worked for the Devils.

I agree 100% with that part: a model could be designed to try to find potential hidden gems. It wouldn't mean those players would be good, but it could tell the scouts that they should have a look. With that being said, I'm curious how Point, Debrincat, or Panarin would have scored on the current models in their draft-eligible years.

I don't know if "the current models" refers to Bader's model, but according to Bader's model these players all score great. Panarin might not in his D0, but he certainly would in his D+1 and D+2.
 

swiiscompos

Registered User
Dec 9, 2018
1,050
1,511
London, UK
Comparing this to scientific questions doesn't make sense, because science, even medical science with its attendant issues, works on more obvious causal chains than the development of prospects from ages 17 to 25. I think asking very specific questions in this arena is generally a terrible idea and will lead nowhere - you could ask, e.g., what the effect of coming to development camp versus not coming to it is, and your results will just be nonsense, because you have no controls over anything else in the experiment. All a generalist model seeks to do is step back from the specificity that tends to get prospect evaluators in trouble - falling in love with certain aspects of a player while not assessing the most important thing: can this player ultimately provide offense in the NHL? Most of what makes a hockey player good springs from that, and even if their offense fails them at the highest level, they can sometimes get by with other stuff.
There aren't that many obvious causal chains in medicine. It looks obvious because they break extremely complex issues down into small parts they can work on, and they end up with high-probability results. The issue with your example ("what is the effect of coming to development camp versus not coming to it") is that the question is not intentional; it's just random data, and that is not useful. As I said above, stats have to be both specific and intentional to be useful.

Let's imagine that a prospect who plays in a pro league shows flashes of great skill but has pretty average production. The eye test shows that this player gets pushed around a lot: the previous year he was playing in a junior league where he could be physical, but against heavier and stronger men that just doesn't work. So you might ask the stats department to study the development of past players with a similar size and style of game at the same age, and to give a three-year projection in the same league. It will give an average development curve as well as an idea of how reliable that curve is (maybe the data points are all over the place and the results are therefore not that useful). But reading this player's bio you also learn that he started playing hockey at 13, so pretty late. Now you can ask your stats department to run the maths on that part of the equation, and the result can be used to adjust the previous model. It's all maths and stats... but specific and intentional. It takes time, it's expensive, and it's super important. And maybe, before running any stats, you just talk to the prospect's coach, and he tells you that despite being very skilled the player has a hard time listening to advice and adapting his game. In that case you already know this player has basically no chance of becoming an NHLer, so you don't bother with the stats.
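To make that concrete, the kind of narrow query I'm describing might look something like this sketch - the cohort is assumed to be already filtered for similar size, age, and league, and all the field names and numbers are placeholders for the example:

```python
# Sketch of a narrow, intentional query: given a cohort of past players already filtered
# for similar size, age, and league, return the average three-year development curve and
# the spread around it. Field names and numbers are placeholders for the example.
from statistics import mean, stdev

def projection_curve(cohort):
    """`cohort`: list of dicts with 'ppg_by_year' = [year 0, year +1, year +2, year +3]."""
    curve = []
    for year in range(4):
        values = [p["ppg_by_year"][year] for p in cohort]
        spread = stdev(values) if len(values) > 1 else 0.0  # big spread = weak signal
        curve.append({"year": year, "mean_ppg": round(mean(values), 2),
                      "spread": round(spread, 2)})
    return curve

cohort = [
    {"ppg_by_year": [0.35, 0.45, 0.55, 0.70]},
    {"ppg_by_year": [0.40, 0.42, 0.60, 0.58]},
    {"ppg_by_year": [0.30, 0.50, 0.45, 0.80]},
]
print(projection_curve(cohort))
```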
 