Improving Adjusted Scoring and Comparing Scoring of Top Tier Players Across Eras

seventieslord

Student Of The Game
Mar 16, 2006
36,080
7,132
Regina, SK
Yes, there is a high correlation according to Pearson's correlation coefficient.
http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
But that should not necessarily be taken as proof that the estimations are "96 %" correct, because when one actually takes a table with estimated ice times in one column and factual ones in another, one can see that they are significantly less accurate than that. The correlation coefficient (for example +.96) needs to be viewed in context. What I did was actually take the time to study how wrong (or right) the estimations were, rather than relying on a single number.
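A quick way to see the distinction being made here: two columns can correlate at well above +.9 while the individual estimates are still off by a couple of minutes per game. The ice-time numbers below are invented purely to illustrate, not real player data.

```python
from math import sqrt

# Hypothetical per-game ice times in minutes: actual vs estimated.
# All numbers are made up to illustrate the point, not real player data.
actual    = [28.5, 26.0, 24.1, 22.3, 20.8, 19.5, 17.2, 15.0, 13.4, 11.9]
estimated = [30.1, 24.2, 26.0, 20.1, 22.5, 18.0, 18.9, 13.2, 14.8, 10.0]

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(actual, estimated)
worst = max(abs(a - e) for a, e in zip(actual, estimated))
mean_err = sum(abs(a - e) for a, e in zip(actual, estimated)) / len(actual)
print(f"r = {r:.2f}, mean error = {mean_err:.1f} min, worst = {worst:.1f} min")
```

Despite a correlation above .9, every single estimate in this toy table is off by 1.4 minutes or more, which is exactly the point about reading the coefficient in context.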

You probably know your numbers better than I do. As far as the accuracy of the numbers, I am just parroting what I was told, and I do know a 96% correlation doesn't necessarily mean "96% correct". I do think the numbers pass the smell test though - what about you?

I don't recall seeing this but I'd be interested in what you found.

I have long suspected that the ice time estimates which were fudged to match up with player usage from the late 90s would get less accurate the farther we go back from the calibration seasons.

Particularly for seasons before the mid-80s when ice time management changed dramatically.

There was no fudging; the same formula was followed all the way through. A factor was applied to all 1st/2nd line/pairing players, but it was uniform. There was no "wait, this doesn't look right for Gretzky, so I'm just going to give him two more minutes and take a minute each from Messier and Krushelnyski..."

- most are results-oriented, in that if the results aren't what they're "looking for", they find it easy to dismiss or ignore them (which is why I don't find the "eyeball test" very important in most cases)

This has to be in response to my earlier comments. Let me clarify: I don't think anything needs to look a certain way "for me". I have no problem with laying out the methodology and then saying "according to this specific methodology, these are the results we get". But when I say the end results need to look a certain way, it is more for other people than for me. I want more advanced work like yours to be accepted, and if it gives a result like I calculated (Turgeon 5-6% better than Savard), then that is just too different from some more conservative people's perceptions to ever be widely accepted.

As you said though, this does a better job than regular adjusted stats, at least from what little testing I've done.
 

seventieslord

Student Of The Game
Mar 16, 2006
36,080
7,132
Regina, SK
It's difficult to decide how many and which players to include.

I think that no matter what you do, you should keep it scaled relative to league size. So if you choose 30 players in a 6-team league, you need to use 150 in a 30-team league. Not because the talent pool is five times larger, BUT, the amount of players with the opportunity to play x number of minutes and thus score x number of points in a season, does change proportionally with the number of teams.
 

seventieslord

Student Of The Game
Mar 16, 2006
36,080
7,132
Regina, SK
Meeting eye-ball tests is important. But when I post, rather than just pointing out well-known and obvious things, I often write about things that may not be as obvious or well-known. One example was regarding New Jersey's penalty killing in 2002-03. It is very well-known that Scott Stevens is considered one of the best defensive defencemen ever. Yet, their 2nd penalty killing unit had far better "per minute" stats.
http://hfboards.mandatory.com/showthread.php?t=1116663&page=3
(posts 52 and especially 62)
The facts are that S.Stevens was on the ice for 51.54 % of the penalty-killing ice time. The "adjusted ice time" suggested he was on the ice for 81.25 %.
The facts are that S.Niedermayer was on the ice for 40.61 % of the penalty-killing ice time. The "adjusted ice time" suggested he was on the ice for 18.75 %.
The "adjusted stats" say: "Wow!!! Stevens played an amazing 81 % during the PK! Niedermayer played far, far less, just 19 %."
The facts say: "Stevens played 51.54 % of the time, while Niedermayer played 40.61 %."
Why manipulate stats to say that a player had 4 times more ice time than another, when in reality he had just 1.3 times more?
The reply I got was - when analyzed - that Stevens faced more than 3 times harder opposition than Niedermayer. (Someone is welcome to show me factual proof that supports that claim.) I think I even researched that, by looking at which opponents were on the ice during the goals, and found that things were not as black and white as I was told here on the board. It should also be noted again that when New Jersey played on the road, they didn't have the benefit of choosing which opponents Stevens or Niedermayer played against.

The above case is admittedly rather extreme, and it's about penalty-killing ice time.

Just to make sure, how sure are you that you executed the estimate formula the same way that the originators did? I know that a factor of 1.2 or 1.3 is applied to first lines at ES, so it wouldn't surprise me if something similar or greater was applied on the PK too. Because it's true, top-unit PKers should typically end up with more GA. The fact that they are facing top PP units typically outweighs the fact that they are the best PKers, right? I can't imagine they would have done this project without accounting for that, but I could be wrong.

You are right, this is an extreme example either way.

But there are many cases where even-strength estimated ice times turned out to be rather wrong. I ranked every defenceman within each team based on estimated versus factual ice times, and looked at the top 5 to see if the estimations at least managed to tell whether a defenceman was, e.g., the "2nd defenceman", and I think the estimations produced errors in about 50 % of the cases.

What did you classify as an "error"? How far away from the actual result did it have to be to be classified as such?

Basically I just want it to be clear that estimated ice times are estimations, not facts. They are fairly accurate, but unreliable for "ranking" players based on "who played most". I also find them unreliable as a parameter in larger formulas, like "best scorers per 60 minutes of ice time". I understand it's up to everyone to decide how to use estimated ice times.

I think the unreliability of these numbers gets overstated by the conservative types around here. Everywhere you look, the players with the reputations of being the best players end up with the highest results and the worst end up with the lowest. What's more, with no change to the methodology used, it ends up with higher totals of top players in the first 20 years post-expansion compared to what we're used to nowadays, which is exactly how everyone remembers it.
 

Czech Your Math

I am lizard king
Jan 25, 2006
5,169
303
bohemia
This has to be in response to my earlier comments. Let me clarify: I don't think anything needs to look a certain way "for me". I have no problem with laying out the methodology and then saying "according to this specific methodology, these are the results we get". But when I say the end results need to look a certain way, it is more for other people than for me. I want more advanced work like yours to be accepted, and if it gives a result like I calculated (Turgeon 5-6% better than Savard), then that is just too different from some more conservative people's perceptions to ever be widely accepted.

As you said though, this does a better job than regular adjusted stats, at least from what little testing I've done.

No, it really wasn't directed at you in particular. I know it's not only a common, but a natural and familiar way to judge results. This is what makes it so difficult to overcome. No matter what adjustments are made, and how correct or incorrect they may be, some results are never going to be accepted, because they clash with one's subjective, preconceived ideas of how things "should look."

I'm glad you've found the results useful to any degree.

I think that no matter what you do, you should keep it scaled relative to league size. So if you choose 30 players in a 6-team league, you need to use 150 in a 30-team league. Not because the talent pool is five times larger, BUT, the amount of players with the opportunity to play x number of minutes and thus score x number of points in a season, does change proportionally with the number of teams.

Maybe you are right, that it is more important to keep the number of players fixed more in relation to opportunity (league size) than quality (fixed or more gradually increasing number).

Again, I can only stress the following:

- the basis of all results are pairs of consecutive seasons

- any attempt to keep the opportunity factor static is going to adversely affect the average quality of player studied, unless the amount of players of minimum or average quality Q remains proportional to the number of teams in the league

- If opportunity increases and this influences production of top players, it would be captured in the results of the study. Also, if the data from seasons of the distant past were composed more of that of lesser quality players on average (more "second liners"), then one would expect it to appear easier over time to score, since more recent seasons are composed of the data of higher quality players on average. Yet the exact opposite trend emerges in the results and this would seem to refute any such bias.
 

seventieslord

Student Of The Game
Mar 16, 2006
36,080
7,132
Regina, SK
Where, pray tell, does the factor come from?

A response to real-life results, of course.

Simple example: if it's observed that GF/GA happen about, say, 30 % more frequently when top-line players are on the ice, then the formula is adjusted to include that, so that it doesn't end up saying "the guy with 180 combined GF/GA in 80 games must have played exactly twice as often as the guy with 90 in 80 games".
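A minimal sketch of that correction, with made-up on-ice totals and an assumed 1.3 event-rate factor for first-liners (the actual factor the originators used may differ):

```python
# On-ice GF+GA totals over an 80-game season; both players and the 1.3
# factor are illustrative assumptions, not real data.
events = {"first_liner": 180, "third_liner": 90}
# Events occur ~30% more often with top lines on the ice (assumed factor).
rate_factor = {"first_liner": 1.3, "third_liner": 1.0}

# Naive reading: twice the events means exactly twice the ice time.
naive_ratio = events["first_liner"] / events["third_liner"]

# Factor-adjusted: deflate the first-liner's event count before comparing.
effective = {p: events[p] / rate_factor[p] for p in events}
adjusted_ratio = effective["first_liner"] / effective["third_liner"]

print(f"naive {naive_ratio:.2f}x vs adjusted {adjusted_ratio:.2f}x")
```

The adjusted ratio lands around 1.5x rather than 2x, which is the kind of correction the "factor" is meant to make.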
 

Czech Your Math

I am lizard king
Jan 25, 2006
5,169
303
bohemia
It's not so easy to explain. Basically I focus on the opponent's GA.

Yet, as you acknowledge, schedule likely does matter and can sometimes alter scoring stats by, say, 5-8 % or so. It is common here to compare players, e.g. noting that one player scored 4 % more points than another. But there seems to be no attention paid to things like schedule.
If I remember right, it was fairly common to see seasonal top-ten scoring lists being altered. In some case(s) I even think it affected the leading scorer (Art Ross winner) of the season.

I was referring to your comment that you had done some work on scoring from one season to the next. Was this work also primarily focusing on the effects of schedule?

As I said before, I do remember at least some of your post(s) on the effect of schedule on team/individual scoring. I thought at the time that your work was worthwhile and your methodology seemed sound. Thank you for the additional explanation, this only further affirms my previous belief of your work.

This IMO is the type of effect that, once perfected, should be standardly incorporated into NHL adjusted statistics. You say the effects were as much as 5-8%, which is a significant amount. However, I'm guessing such large effects are more limited to teams that were extremely high/low scoring and/or to eras when the schedule was very unbalanced (mainly the 70's-80s). An example of this would be the '80's Smythe Division.

How do you properly isolate the effects, given the following:

You say you (wisely) removed the games in which Edmonton's opponents played Edmonton, effectively adjusting the team goal data for the opponents. However, isn't the opponents' data still biased to some degree due to the unbalanced schedule? I.e. if you removed from the Kings' data those games in which they played against Edmonton, how did you account for the fact that due to the unbalanced schedule, the Kings still may have played against a remaining schedule of high or low scoring teams (I would guess the former, if either). It seems like repeating the process would soon reach a limit where no further adjustment would substantially impact the results. Did you look at this factor? If so, what effect did you find and how did you further adjust for this?
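One way to picture a single-pass version of such an adjustment (before any of the iteration discussed above): scale each team's goals-for by how leaky the defences it actually faced were, relative to league average. This is a toy three-team league with invented numbers, and it omits the head-to-head exclusion for brevity, so it is a sketch of the idea rather than anyone's actual method.

```python
# Made-up per-game rates for a toy 3-team league.
ga_per_game = {"EDM": 3.8, "LAK": 4.5, "CHI": 2.9}  # goals allowed per game
gf_per_game = {"EDM": 5.0, "LAK": 4.2, "CHI": 3.1}  # goals scored per game

def schedule_adjusted_gf(team):
    """Deflate/inflate a team's GF by the quality of defences it faced,
    proxied by opponents' average GA relative to league-average GA."""
    opponents = [t for t in ga_per_game if t != team]
    opp_ga = sum(ga_per_game[t] for t in opponents) / len(opponents)
    league_ga = sum(ga_per_game.values()) / len(ga_per_game)
    return gf_per_game[team] * league_ga / opp_ga

for t in gf_per_game:
    print(t, round(schedule_adjusted_gf(t), 2))
```

Here CHI's goals-for gets deflated (it feasted on leaky defences), while LAK's gets inflated (it faced stingy ones); iterating the same idea with adjusted opponent figures is where the convergence question above comes in.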

Yes, criticism is usually better than silence (although blunt and discouraging one-liners can be an exception). The end result is usually what I'm after, so suggestions on how to improve things are welcome.
What's ITT?

Any publicity is good publicity, which leads to the possibility of more being exposed to the project. Also, perhaps more importantly, we can learn about flaws in and ways to improve each study.

ITT = in this thread

(This is not researched yet. -->) For example, during seasons with a lot of power plays, scoring within teams may look different than in seasons with few power plays. Seasons with many power plays may lead to power-play specialists scoring points on a higher percentage of goals than otherwise. Scoring at even strength appears to be much more balanced between players on a team.

There is definitely a power play effect. This can be seen in many studies, including this one. Just as it seems proper that schedule is a standard adjustment, adjusting team/individual data from even strength vs. special teams scoring data seems like it should someday be standard as well. However, just as in the "assist per goal ratio", there is no standard for what the proper "even strength to special teams" ratio of scoring should be, since it varies over time.

One benefit of the type of study I presented is that it captures several effects without explicitly defining and measuring them directly:

- changes in roster size (and therefore average ice time)
- changes in power play opportunities (and therefore changes in scoring within a team)
- changes in general strength of era
- changes in the distribution of talent within the league, primarily the depth of strength of forward talent (if scoring % changes are uneven among different types of skaters)

I think I have posted a table showing things like (made up):
Season|1st|2nd|3rd|...|15th
1984-85|40.2|36.3|33.0|...|12.5
1985-86|40.7|35.9|32.5|...|9.6
where 1st is the average for the leading scorer on each team, 2nd is the average for the 2nd-best scorer on each team, and so on.
I've also posted the above with factual as well as adjusted stats.

I have done similar things, for example looking at what the top 3 scorers on each team averaged and what different tiers of scorers averaged, both in comparison to league averages.

I found this interesting, but it seems to be much more dependent on other factors which are not easily removed, such as the quality and distribution of talent in the league.

If I remember right, some thought the schedule adjusted stats still didn't do the 1980s players total justice (based on "eye-test").

Then guys like Canadiens1958 seem able to tell us how coaching and roster sizes have changed over the years. To take an extreme example, let's compare today's NHL with the NHL where some players played 60 minutes per game (if I remember right).

By the way, adjusted points hasn't really been on my mind during the last months.

As I said, the "eye test" is a familiar, but inherently subjective and therefore flawed, way to primarily evaluate such results. In the case of your study, how would one even be able to say that the results "look right"?

Roster sizes changed and should somehow be adjusted for (either directly or indirectly), but the distribution of ice time likely changes disproportionately when the roster sizes change.

I haven't really been working on this project for several months, which is one reason I wanted to present it before I became even less clear on some aspects of the study.

I think what you have done is one piece of the puzzle, but to get the "whole picture" it needs to be integrated with other pieces.
I started studying the year-to-year changes, but found that I wanted to include more things in the equation. Age is one of those things.
I think it was during the best defenceman project that I did a fairly advanced study on strength of different seasons. I don't remember the details right now, but I think the strongest season for defencemen appeared to be around 1981. I think I didn't post it, or possibly posted it but deleted it. (It probably was yet another of those cases where people on one hand were constantly doing more or less arbitrary adjustments within their heads, but on the other hand didn't find a study trying to determine it to be of much value.)

Defencemen sound like a difficult way to study the strength of a season, which may make your results especially unique and interesting. I have thought that looking at goalies would be another way to examine strength of season, but would also guess the small number of goalies (esp. in earlier eras) would yield a very small sample and less reliable data.

Thank you. I do enjoy studying stats and doing research to try to find out "how things really are (or may be)". Part of my problems may also be that I think that some things (like strength of eras, etc., etc.) ought to be "settled" and might require partly narrow studies to build upon.

You're welcome. I think we all want things to be "settled", but on issues as complex as strength of era, I don't expect things to be "settled" any time soon.

One reason I believe regression would work so well, is that it can not only attempt to simultaneously measure many variables, but produce exact coefficients for those variables and indicate which variables have insignificant effect (at least in comparison to the error which they add).

I've been more interested in building upon your win % thinking that Overpass' thread on adjusted +/- developed into. I spent quite some time integrating SH and PP play into the study. I even "adjusted" for goaltending, which I think is among the most overlooked factors when focusing on +/-. I was planning on posting a thread on it. I posted a small example, but got discouraging replies, and got the impression that no matter how thorough and/or complete the study was, it would just not affect the already made-up minds on how things are.

Goaltending obviously influences +/- in a dramatic way. Does adjusted plus-minus factor in goaltending at all? I'm guessing it doesn't, which (if I'm correct about this) would be an instance of making an assumption out of practicality (much more work in an attempt to remove an effect which may be more random than significant to the results).

I haven't looked at even strength win% in some time either. I think I've posted the last thoughts and formulas I had on the matter. It definitely seemed to produce some good results, just not sure the limits of its accuracy. I think the eventual end results, when combined with special teams data, could produce something similar to HR's "point shares".

During the last 1-2 months, I've studied how team performance is affected when a player is out of the lineup (for example, injured). To me, very interesting. I posted a chosen example showing that Pavol Demitra actually made his team perform significantly better with him playing than with him out injured. Not just on one team during one season, but season after season on 4-5 different teams. No interest whatsoever, apart from one comment more or less automatically dismissing the study.
(In the "best defencemen" project, there were sometimes mentions of how a team performed when a player (I don't remember if it was Eddie Shore or Sprague Cleghorn) played or not. I have done that for every player on every team from 1987-88 to 2010-11. In the project, this stat was considered meaningful, even though there was no comparison at all made to other players. When I do it, it's considered uninteresting or meaningless.)
To me, it's amazing to see Lidstrom place very highly, with his team being nearly average without him (and this not even including 2011-12, nor counting games at the end of the regular season where Detroit rested players).
I would have pointed out that Gretzky didn't seem to make LAK better during the regular seasons, something that meets my own eye-ball test. But how ridiculed would I be if posting something like that?

Both of the above studies have a holistic approach, which I find is a good way to go. Compare team with a player with team without player. In my opinion more useful than studying +/- when on ice, compared to +/- when on the bench.

I have also started studying how different players actually affect each other's scoring stats. For example, how did Mario Lemieux benefit from playing with Kevin Stevens, and vice versa? I can find out by filtering out games where both played, or just one of them.

I found the weighted differential in team win% with or without a player in the lineup to be a great metric, because it combined simplicity with direct measurement of what we all agree is the most important hockey value (winning). The limitation is that for players who don't miss many games, the amount of data without them is very small, so the results are very unreliable.
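A sketch of that with/without metric, with invented records; "weighting" is represented here simply by reporting the size of the without-sample alongside the differential, since that sample size is exactly the reliability problem described above.

```python
# Team records as (wins, losses, ties); all numbers are made up.
def points_pct(wins, losses, ties=0):
    """Points percentage, counting a tie as half a win."""
    return (wins + 0.5 * ties) / (wins + losses + ties)

def with_without(rec_with, rec_without):
    """Differential in team points-pct with vs without the player,
    plus the size of the (usually tiny) without-sample."""
    diff = points_pct(*rec_with) - points_pct(*rec_without)
    return diff, sum(rec_without)

# Example: 40-25-5 with the player in the lineup, 3-8-1 without him.
diff, n_without = with_without((40, 25, 5), (3, 8, 1))
print(f"differential {diff:+.3f}, but only {n_without} games without")
```

A differential of roughly +.32 looks enormous, but over a 12-game without-sample it is statistically fragile, which is the stated limitation for players who rarely miss games.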

This would apply to teammates that rarely played separately, at least in certain situations (ES, PP, SH).

See above (one piece of the puzzle, or rather several pieces).
I have to say I agree with some of the criticism you have received, but I suppose you basically do too. I basically agree with your replies to the replies you have gotten. You have started something good, that should be able to be improved and built upon.

Yes, I can see the potential for bias in certain areas, and I don't claim the results to be anything close to exact in magnitude. However, the general trend is clear to me, and the reasons for some of the broader and larger effects (whether over decades or from one season to the next) make sense to me.

I would break the period studied into 3 sub-periods:

'46 to '67
: The constant number of teams is a positive. However, the inherent limitation in the number of players included, when combined with the generally shorter careers of players, results in a larger uncertainty error. The potential error is compounded when comparing across longer timespans, since as Overpass points out, the multiplicative link of seasonal factors is longer. The exact magnitudes of the changes from the '50s to post-WHA seasons are certainly up for debate, but there should be little doubt that it's become much, much tougher for top players.

The broad effect is clear to me. Talent quickly flows back into the league after WWII, and becomes compressed (quality depth) in the last few years before expansion.

Expansion to WHA Merger: At least from an analytical perspective, this is a decade or so of pure chaos. This makes it a quite dynamic, interesting and important time to examine. It also is one of the most difficult. The number of teams immediately doubles, with repeated expansions during the decade, while talent flows to the WHA. In addition, the result of the expansion is a glaring disparity from the top to bottom. In an era of bullies and weaklings, along with rapidly changing environment, it's a very challenging time to analyze properly.

WHA Merger to Present
: The larger number of teams, more gradual expansions, and better availability of data make this the most ideal period to examine. The main challenges are the large change in scoring from the '80s and early '90s to later years, and the large and (at least at first) somewhat disproportionate addition of talent from overseas.

Regarding adjusted points (or goals), I think one needs to understand and keep in mind how the most common methods work. We first normalize scoring to, say, 6 goals per game, to make different seasons comparable.
I can't find the words properly now, but I think it's valuable to understand what we're normally doing. We have a set number of "total goals", and what the common methods do is tell how much different players stand out compared to some sort of league average. How much they stand out depends on things like:
* How many teams were there in the league? The more teams, the more spread out the quality, and the easier it may be for the top scorers to stand out compared to their average teammate.
* What was the strength of era? Again, the higher the quality per team, the harder it is to stand out.
I'm very tired now and can't think very straight, but just wanted to point out that traditional adjusted scoring has a lot to do with percentages. It's "team GF divided by league average GF" multiplied by "player's pts divided by team's GF". Or just "player's pts" divided by "league average GF".
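That percentage logic can be written out directly. A minimal sketch, using an assumed baseline of 6.0 combined goals per game and invented league totals; this variant normalizes by league goals per game, which is equivalent (up to a constant) to the "player's pts divided by league average GF" phrasing above.

```python
BASELINE_GPG = 6.0  # assumed normalization target, as in the post

def adjusted_points(points, league_goals, league_games):
    """Scale a player's points by (baseline / actual league goals per game).
    Equivalent, up to a constant, to 'player pts divided by league avg GF'."""
    return points * BASELINE_GPG / (league_goals / league_games)

# Invented league totals: a 120-point season in an 8.0 goals-per-game
# league deflates; the same 120 points in a 5.2 gpg league inflate.
high_era = adjusted_points(120, league_goals=6720, league_games=840)  # 8.0 gpg
low_era  = adjusted_points(120, league_goals=4368, league_games=840)  # 5.2 gpg
print(f"{high_era:.0f} adjusted pts vs {low_era:.0f} adjusted pts")
```

The same raw 120 points becomes 90 adjusted points in the high-scoring league and roughly 138 in the low-scoring one, which is all that "traditional adjusted scoring is percentages" really means.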

Yes, you are basically talking about what I would term "talent compression" and "talent dilution." It is a crucial part of adjusting the data properly and one of the primary factors intended to be measured in this study. In times when talent is diluted (after war, large expansion, defection to the WHA), it is easier to stand out in relation to one's peers (whether in simple adjusted data or seasonal/period rankings). In times when talent is compressed (long periods without expansion, the WHA merger, influx of talent from overseas), it is more difficult to stand out.

I think you're among the better/best ones here.

Thanks. I have been happy with a lot of the work I have done and have advanced my own knowledge in the process. I hope it's been of some interest and use to others as well, and it seems to have been to at least a few.

However, I know that I lack the knowledge of advanced statistics, any programming skills, and the computational resources that some others have. I try to compensate for this with rigorous logic in my methodology, genuine interest in the subject, and a natural aptitude for math.

Thanks. I think people understand me. It's rather that I need to express myself in simple, perhaps childlike, school English, and I suspect that may affect the way I'm being perceived(?) here by some.

Perhaps you can't fully express yourself in English, but you are able to communicate clearly both through language and your analysis of data. It is often very difficult to present one's results in a form that is easy to understand even to others with an interest and aptitude in such things, let alone the "common fan." Hence the questions and misunderstandings that seem to often arise (but which are also usually helpful and enlightening in some manner).

From our limited interaction and the work of yours which I have reviewed, one of the last adjectives I would use to describe how I perceive you is "simplistic".
 
Last edited:

Czech Your Math

I am lizard king
Jan 25, 2006
5,169
303
bohemia
Is there actually any data to support this notion? That Jagr was more likely to pick up extra points running up the score but Lafleur wasn't?

I can't speak to Lafleur specifically, and will not comment re: Jagr.

However, I am skeptical of this claim. Lafleur played on better teams and in an era with much less parity (more blowout wins) than did Jagr. Therefore it seems likely that Lafleur had more opportunities to run up the score (whether he did or not is another matter).
 

BraveCanadian

Registered User
Jun 30, 2010
14,522
3,360
A response to real-life results, of course.

Simple example: if it's observed that GF/GA happen about, say, 30 % more frequently when top-line players are on the ice, then the formula is adjusted to include that, so that it doesn't end up saying "the guy with 180 combined GF/GA in 80 games must have played exactly twice as often as the guy with 90 in 80 games".

So in other words, you say factor, I say fudge.
 

plusandminus

Registered User
Mar 7, 2011
1,404
268
Just to make sure, how sure are you that you executed the estimate formula the same way that the originators did? I know that a factor of 1.2 or 1.3 is applied to first lines at ES, so it wouldn't surprise me if something similar or greater was applied on the PK too. Because it's true, top-unit PKers should typically end up with more GA. The fact that they are facing top PP units typically outweighs the fact that they are the best PKers, right? I can't imagine they would have done this project without accounting for that, but I could be wrong.

Just to make sure, what formula and project do you refer to?
You mention 1st lines, but is it really so "black and white"? It seems that many players move around a lot between different lines.

When I wrote here yesterday, it was about the simple kind of situational estimations that focused only on situational GF+GA.
Estimated ES ice time share = (ESGF+ESGA) / (teamESGF+teamESGA)
Estimated PP ice time share = (PPGF+PPGA) / (teamPPGF+teamPPGA)
Estimated SH ice time share = (SHGF+SHGA) / (teamSHGF+teamSHGA)
The above results in percentages. So as in the Stevens/Niedermayer case, I focused only on SHGF+SHGA.
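Those three formulas are the same calculation applied per situation. Here is the shorthanded case, with hypothetical on-ice counts chosen only to mimic the size of the Stevens/Niedermayer gap; these are not the real 2002-03 New Jersey numbers.

```python
def icetime_share(on_ice_gf, on_ice_ga, team_gf, team_ga):
    """Player's estimated share of situational ice time:
    his on-ice (GF+GA) over the team's (GF+GA) in that situation."""
    return (on_ice_gf + on_ice_ga) / (team_gf + team_ga)

# Hypothetical shorthanded goal counts (illustrative only):
team_shgf, team_shga = 10, 30
d1 = icetime_share(8, 24, team_shgf, team_shga)  # on for 32 of 40 SH goals
d2 = icetime_share(2, 6, team_shgf, team_shga)   # on for 8 of 40 SH goals
print(f"{d1:.0%} vs {d2:.0%}")
```

Note that these shares don't sum to 100 % across a team, since several players are on the ice for every goal; each player's share is read independently.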


What did you classify as an "error"? How far away from the actual result did it have to be to be classified as such?

Let's take a made up example:
Team|Name|FactualRankOnTeam|RankOnTeamAccordingToEstimation
DET|Defenceman1|1|1
DET|Defenceman2|2|3
DET|Defenceman3|3|2
DET|Defenceman4|4|4
Two of the defencemen got correct rankings, while two didn't: 50 % correct. Sometimes the ranking was correct, sometimes off by one placement, sometimes by more.
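The error rate described can be computed mechanically. Using the made-up DET table above:

```python
def rank_agreement(actual_rank, estimated_rank):
    """Fraction of players whose estimated rank matches their factual rank."""
    hits = sum(1 for p in actual_rank if actual_rank[p] == estimated_rank[p])
    return hits / len(actual_rank)

# The made-up DET example: defencemen 2 and 3 are swapped by the estimate.
actual    = {"Defenceman1": 1, "Defenceman2": 2, "Defenceman3": 3, "Defenceman4": 4}
estimated = {"Defenceman1": 1, "Defenceman2": 3, "Defenceman3": 2, "Defenceman4": 4}
print(rank_agreement(actual, estimated))
```

A stricter variant could count how far off each rank is (mean absolute rank error) rather than just exact matches, which speaks to the "how far away is an error" question above.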


I think the unreliability of these numbers gets overstated by the conservative types around here. Everywhere you look, the players with the reputations of being the best players end up with the highest results and the worst end up with the lowest.

I too think that they basically pass the "eye test" in the way you describe here. But I still think they may be up to 1-4 minutes wrong for the top players. If one player ends up with 28:22 per game, and another 29:20, I think we cannot be sure at all which of them actually had more factual ice time.
To keep to the original topic of this thread, that really shouldn't matter much anyway.

My main concern might be to be careful to distinguish between what we do know and what we don't know.
For stats from, say, 1970-71, we just have GF, GA, PPGF, PPGA. An estimation is made to calculate ESGF, ESGA, SHGF and SHGA. After that, those numbers are used to estimate situational ice time shares within teams. After that, we estimate how much situational ice time each team as a whole might have had during the specific season. Then we combine the first estimation(s) with the last to get our final result.
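A hedged sketch of the first link in that chain, just to make the stacking of estimates concrete. With only GF and PPGF recorded per player, the non-power-play remainder has to be split by assumption; the flat 5 % shorthanded share below is invented purely for illustration and is not the method any actual project used.

```python
ASSUMED_SH_SHARE = 0.05  # invented: fraction of non-PP goals scored shorthanded

def split_goals(gf, ppgf):
    """Estimate (esgf, shgf) from recorded totals.
    The split itself is an estimate layered on top of the recorded data."""
    non_pp = gf - ppgf
    shgf = non_pp * ASSUMED_SH_SHARE
    return non_pp - shgf, shgf

esgf, shgf = split_goals(gf=40, ppgf=12)
print(f"ESGF ~{esgf:.1f}, SHGF ~{shgf:.1f}")
```

Every later step (ice-time shares, per-60 rates) inherits whatever error this first split introduces, which is exactly the "estimates built on estimates" distinction being made between what we know and what we don't.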

Maybe ice time in itself isn't the ideal criterion anyway? Maybe "a minute is a minute" isn't the best way to approach this, as it is so often pointed out that there could be a significant difference between a "hard" minute and an "easy" one?


You probably know your numbers better than I do. As far as the accuracy of the numbers, I am just parroting what I was told, and I do know a 96% correlation doesn't necessarily mean "96% correct". I do think the numbers pass the smell test though - what about you?

Well, basically it does. There surely is a strong correlation between minutes played and goals on ice for. The more one plays, the more goals one will be on ice for, even though the pace may differ between players. I do think it passes the smell test overall, but in cases of less than a 2-3 minute difference, I think one should keep an open mind: we don't know for sure which player actually played more minutes. And we certainly don't know everything about the quality of the minutes.

By the way, here is a curiosity that I've been thinking about. Lidstrom has several times led the best team in the league in ice time. Sometimes he has even had the most ES and PP and SH minutes on the team, despite playing on such a great team. How come? Is it because he has greater endurance(?) than other players in the league? Wouldn't one expect every team to have their 1st defenceman logging huge minutes? An average defenceman playing on a team with poor defencemen should be able to have the same role as Lidstrom has in Detroit and log similar numbers. Yet it's often the best defencemen in the league that end up with the most minutes. (I haven't studied this yet; it's just a more or less spontaneous thought.)


What's more, with no change to the methodology used, it ends up with higher totals of top players in the first 20 years post-expansion compared to what we're used to nowadays, which is exactly how everyone remembers it.

Yes.


I think that no matter what you do, you should keep it scaled relative to league size. So if you choose 30 players in a 6-team league, you need to use 150 in a 30-team league. Not because the talent pool is five times larger, BUT, the amount of players with the opportunity to play x number of minutes and thus score x number of points in a season, does change proportionally with the number of teams.

Sounds logical to me too.
 

seventieslord

Did Savard create more of his offense than did Turgeon? If you have ES data for Savard, you could compare each player's ratio of ES points to ESGF while on the ice. This is a far from perfect way to determine such a thing, but at least may give some indication as to whether or not this was the case.

I did check this, and they were dead even.

So in other words, you say factor, I say fudge.

Call it what you want, but it wasn't arbitrary or "pick and choose", it was uniform and across the board.
 

seventieslord

Just to make sure, what formula and project do you refer to?
You mention 1st lines, but is that really so "black and white"? It seems that many players move around a lot between different lines?

When I wrote here yesterday, it was about the simple kind of situational estimations that focused only on situational GF+GA.
Estimated ES icetime = (ESGF+ESGA) / (teamESGF+teamESGA).
Estimated PP icetime = (PPGF+PPGA) / (teamPPGF+teamPPGA).
Estimated SH icetime = (SHGF+SHGA) / (teamSHGF+teamSHGA).
The above results in percentages. So as in the Stevens/Niedermayer case, I focused only on SHGF+SHGA.
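As a sketch, the share-based estimation described above can be written out in a few lines of Python (the player and team numbers below are made up for illustration):

```python
# Share-based icetime estimation as described above: a player's share of
# a situation's icetime is approximated by his on-ice goal involvement
# relative to the team total in that situation.

def estimated_share(on_ice_gf, on_ice_ga, team_gf, team_ga):
    """Estimated icetime share for one situation (ES, PP, or SH)."""
    return (on_ice_gf + on_ice_ga) / (team_gf + team_ga)

# Hypothetical defenseman: on ice for 60 of his team's 180 ES goals for
# and 55 of its 170 ES goals against.
es_share = estimated_share(60, 55, 180, 170)
print(round(es_share, 3))  # 0.329, i.e. roughly a third of team ES icetime
```

The same function applies unchanged to the PP and SH cases, just with the corresponding goal counts.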

My mistake; you made it sound like you had done work using the actual TOI file that we've been referring to. It sounds like you're referring more to overpass' method of estimating PP/PK usage.

What I was saying is, in the TOI file where GF/GA in PP and PK situations are used to determine TOI, there is probably an adjustment for top unit players that accounts for the fact that they score and get scored on more often. This would "smooth out" the effect you're seeing in the extreme Niedermayer/Stevens example.


Let's take a made up example:
Team|Name|FactualRankOnTeam|RankOnTeamAccordingToEstimation
DET|Defenceman1|1|1
DET|Defenceman2|2|3
DET|Defenceman3|3|2
DET|Defenceman4|4|4
Two of the defencemen got correct rankings, while two didn't. 50 % correct. Sometimes the ranking was correct, sometimes off by 1 placement, sometimes by more.

I wouldn't necessarily call that an error. It does, however, underscore the importance of looking at the actual numbers and not getting caught up in rankings. You're right that there can easily be differences from actual to estimated results; I think that this would only happen in cases where they were very close to begin with. And I don't think it would be very often that estimates would change this. i.e. if you're close in actual numbers you'll be close in estimated numbers, and I don't think it should really concern anyone if one player is 30 seconds ahead in actual numbers and 30 seconds behind when estimated; this is not a huge deal. People should be getting away from the whole "see, he was the #2 defenseman because he played 30 seconds more than the #3 guy" mindset and more towards the "these two guys played about the same minutes, you could say they were the co #2/3" mindset.

When I asked about errors I wanted to know about differences in the calculated times. When you say that it might swap who the #2 and #3 defensemen are, it might only take a 10 second swing to make that swap, or it might take a 3 minute swing, so it really says nothing about the quantity of the error.

So let me rephrase the question - how often are the estimated results more than 10% away from the actual results?

I too think that they basically pass the "eye-test" in the way you describe here. But I still think they may be up to 1-4 minutes wrong for the top players. If one player ends up with 28:22 per game, and another 29:20, I think we cannot be sure at all which one of them actually had more icetime.
To keep to the original topic of this thread, that really shouldn't matter much anyway.

I agree that this really shouldn't matter either, as per the above.

My main concern might be to be careful to distinguish between what we do know and what we don't know.
For stats from say 1970-71, we just have GF, GA, PPGF, PPGA. An estimation is made to calculate ESGF, ESGA, SHGF and SHGA. After that, those numbers are used to estimate situational icetime shares within teams. After that, we estimate how much situational icetime each team as a whole might have had during the specific season. Then we combine the first estimation(s) with the last, to get our final result.

True. Keep in mind that the estimate of how much situational icetime a team had in a season is a very easy thing to make. You know how many PPs they had for and against, and we know the average length of a PP over time. As long as a team didn't have a massively disproportionate propensity to score PP goals very early in the PP, or allow them really late, then those numbers are pretty solid indeed.
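A minimal sketch of that team-level step, and of combining it with a player's within-team share (the 1.70-minute average opportunity length is a placeholder value, not a measured one):

```python
AVG_PP_LENGTH_MIN = 1.70  # assumed league-average length of one powerplay

def estimated_team_pp_minutes(pp_opportunities, avg_len=AVG_PP_LENGTH_MIN):
    # total team PP icetime ~ opportunities x average opportunity length
    return pp_opportunities * avg_len

def estimated_player_pp_minutes(player_pp_share, pp_opportunities):
    # combine the within-team share with the team-level total
    return player_pp_share * estimated_team_pp_minutes(pp_opportunities)

# Hypothetical: a player with a 0.40 PP share on a team with 350 opportunities
print(round(estimated_player_pp_minutes(0.40, 350), 1))  # 238.0 minutes
```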

Maybe icetime in itself shouldn't be the ideal criteria anyway? Maybe "a minute is a minute" isn't the best way to approach this, as it is so often pointed out that there could be a significant difference between a "hard" minute and an "easy" one?

Yes and no. Typically more minutes = tough minutes because the more minutes one plays, the less likely it becomes that they are being "sheltered" from the other team's best players and crucial situations.

In more "hockey" terms, it's really unlikely that a coach says "I like this player so much that I'll give him all kinds of minutes, but he scares me so much that I'll never use him in important situations."


Well, basically it does. There surely is a strong correlation between minutes played and goals on ice for. The more one plays, the more goals one will be on ice for, even though the pace may differ between players. I do think it passes the smell test overall, but in cases of a less than 2-3 minute difference I think one should keep an open mind to the fact that we don't know for sure which player actually played more minutes. And we certainly don't know everything about the quality of the minutes.

I agree, and as I said, I don't think we should care too much.

By the way, here is a curiosity that I've been thinking about. Lidstrom has several times led the best team in the league in icetime. Sometimes he has even had the most ES and PP and SH minutes on the team, despite playing on such a great team. How come? Is it because he has greater endurance(?) than other players in the league? Wouldn't one expect every team to have their 1st defenceman logging huge minutes? An average defenceman playing on a team with poor defencemen should be able to have the same role as Lidstrom has in Detroit and log similar numbers. Yet it's often the best defencemen of the league that end up with the most minutes. (I haven't studied this yet, it's just a more or less spontaneous thought.)

I have my own thoughts on this, but not sure this is the time and place.
 

Canadiens1958

Registered User
Nov 30, 2007
20,020
2,773
Lake Memphremagog, QC.
Defensemen TOI - Critical Minutes

Critical minutes are the key components of TOI. Specifically, the team's best defenseman should be on the ice to start each period and end each period, and in critical minutes. Overtime minutes where applicable.

Then you look at the ES, PP, PK relative to the dman's strengths - Scott Stevens with the Devils would not be expected to have significant PP time
 

plusandminus

Registered User
Mar 7, 2011
1,404
268
my mistake, you made it sound like you had done work using the actual TOI file that we've been referring to. It sounds like you're referring more to overpass' method of estimating PP/PK usage.

Yes I did. Are you referring to an Excel sheet named NHL68-06TOI.xls?
I haven't yet put that data into my database.
It's color coded. I suppose white background means factual data. Blue seems to be recent seasons when players have changed team during the season, and where an estimation has been made. I find that estimation unreliable, as I for example found it to be way wrong for Ozolinsh. Green seem to be completely estimated.
But is there really a way to know how wrong the estimated (green) data are, when we don't have any factual data to compare with?
Okay, one may use the same estimation algorithm for the recent (white) seasons, but if so I need to know the formula. (Sorry but I'm a bit weak at keeping track of things. And I suppose this is a case where an Excel skilled person may do this faster than me.)


What I was saying is, in the TOI file where GF/GA in PP and PK situations are used to determine TOI, there is probably an adjustment for top unit players that accounts for the fact that they score and get scored on more often. This would "smooth out" the effect you're seeing in the extreme Niedermayer/Stevens example.

That is an assumption, based on a generalization. As I see it, I don't know for sure.
If I was to assume, I would basically agree with you that it probably would be smoothed out, even though my guess is that prime Niedermayer - during that particular season - still would have better PK stats than 39 or 40 year old Stevens.
If I get the time, I might study this case even more (to for example see what points-per-minute pace the opponents who were on ice during goals had).

I haven't yet put the data in the Excel sheet into my database (because it didn't seem very reliable), but when/if I do, I will likely compare


I wouldn't necessarily call that an error. It does, however, underscore the importance of looking at the actual numbers and not getting caught up in rankings. You're right that there can easily be differences from actual to estimated results; I think that this would only happen in cases where they were very close to begin with. And I don't think it would be very often that estimates would change this. i.e. if you're close in actual numbers you'll be close in estimated numbers, and I don't think it should really concern anyone if one player is 30 seconds ahead in actual numbers and 30 seconds behind when estimated; this is not a huge deal. People should be getting away from the whole "see, he was the #2 defenseman because he played 30 seconds more than the #3 guy" mindset and more towards the "these two guys played about the same minutes, you could say they were the co #2/3" mindset.

I basically agree.

When I asked about errors I wanted to know about differences in the calculated times. When you say that it might swap who the #2 and #3 defensemen are, it might only take a 10 second swing to make that swap, or it might take a 3 minute swing, so it really says nothing about the quantity of the error.

So let me rephrase the question - how often are the estimated results more than 10% away from the actual results?

I have searched for more than an hour without finding the necessary data or code, so I'm afraid I have to let you wait for an answer.
I have, however, posted about it here on the board. I got some replies saying that the differences overall looked small, which I don't agree with. The average error could have been, say, 6 % (or I may remember wrong).


True. Keep in mind that the estimate of how much situational icetime a team had in a season is a very easy thing to make. You know how many PPs they had for and against, and we know the average length of a PP over time. As long as a team didn't have a massively disproportionate propensity to score PP goals very early in the PP, or allow them really late, then those numbers are pretty solid indeed.

Here is something I spent many hours on earlier this year.
Basically you're right. But average powerplay time actually does change both between seasons and between teams.
At least two things seem to affect their length:
1. Powerplay percentage. The better powerplay percentage a certain team had, the shorter their powerplays on average tended to last.
2. Total number of penalties. A powerplay ends when a) the power play team scores, b) the period ends, or c) the power playing team takes a penalty. If I remember right, this is not as important to account for as power play percentage (1), but seem to affect things.
I spent an awful lot of time trying to integrate these two parameters in the estimation formula, but it wasn't easy, and I got more and more dizzy. (Maybe this is a case for CzechYourMath.)
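To illustrate factor 1 with a toy model (my own simplification, not the formula being worked on above): assume a fraction `p` of opportunities end early on a goal, at an average elapsed time `t_goal`, while the rest run a baseline length (shortened in reality by period ends and offsetting penalties too, which is factor 2):

```python
# Toy model of factor 1: better PP% -> more opportunities cut short by a
# goal -> shorter average PP. The t_goal and base values are assumptions.
def modeled_avg_pp_length(p, t_goal=1.0, base=2.0):
    return p * t_goal + (1.0 - p) * base

# A better PP% implies a shorter average powerplay, as observed:
print(round(modeled_avg_pp_length(0.22), 2))  # strong PP unit -> 1.78 min
print(round(modeled_avg_pp_length(0.14), 2))  # weak PP unit   -> 1.86 min
```

This is only directional; the real average lengths in the table below are well under 2:00 even for weak units, so the other truncation causes clearly matter as well.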

Below are seasonal data, showing league averages.
Seas|Seas2|PPopp|PPshots|PPtimeMin|PPGF|PPGA|PPGFperOpp|PPGAperOpp|PPGDperOpp|PPGFper2Min|PPGAper2Min|PPGDper2min|PPtimeGF|PPtimeGD|PPOpplength|SHsavePerc
1997|1998|380.154|450.885| 0.0000|57.346|10.000| 0.1508| 0.0263| 0.1245| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8724
1998|1999|359.111|440.222| 0.0000|56.778| 8.148| 0.1581| 0.0227| 0.1354| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8706
1999|2000|330.821|397.964| 0.0000|53.429| 7.714| 0.1615| 0.0233| 0.1382| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8661
2000|2001|376.067|449.400| 0.0000|62.567| 8.900| 0.1664| 0.0237| 0.1427| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8608
2001|2002|338.467|414.133| 0.0000|53.367| 7.333| 0.1577| 0.0217| 0.1360| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8709
2002|2003|362.533|454.600|617.668|59.567| 7.667| 0.1643| 0.0211| 0.1432| 0.1929| 0.0248| 0.1681|10.3694|11.9011| 1.7038| 0.8679
2003|2004|347.567|428.700|587.632|57.233| 8.133| 0.1647| 0.0234| 0.1413| 0.1948| 0.0277| 0.1671|10.2673|11.9681| 1.6907| 0.8657
2005|2006|479.667|606.633|789.325|84.833|10.600| 0.1769| 0.0221| 0.1548| 0.2150| 0.0269| 0.1881| 9.3044|10.6330| 1.6456| 0.8598
2006|2007|397.833|514.633|662.948|69.967| 8.933| 0.1759| 0.0225| 0.1534| 0.2111| 0.0270| 0.1841| 9.4752|10.8621| 1.6664| 0.8633
2007|2008|351.367|468.933|570.239|62.367| 7.967| 0.1775| 0.0227| 0.1548| 0.2187| 0.0279| 0.1908| 9.1433|10.4823| 1.6229| 0.8669
2008|2009|340.933|485.967|549.594|64.600| 7.833| 0.1895| 0.0230| 0.1665| 0.2351| 0.0285| 0.2066| 8.5076| 9.6816| 1.6120| 0.8665
2009|2010|304.533|437.767|495.165|55.467| 6.367| 0.1821| 0.0209| 0.1612| 0.2240| 0.0257| 0.1983| 8.9273|10.0848| 1.6260| 0.8729
2010|2011|290.533|417.000|477.996|52.367| 6.867| 0.1802| 0.0236| 0.1566| 0.2191| 0.0287| 0.1904| 9.1279|10.5054| 1.6452| 0.8742
(GD is goal difference, or in this case "net" stats: SHGA-SHGF and PPGF-PPGA.)

It's not what you specifically asked for, but might perhaps be of interest anyway.
One can see what appear to be correlating things in the table, but I eventually got very dizzy in my attempts at creating formulas to e.g. estimate average PP time based on the different data.
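For what it's worth, the derived columns can be reproduced from the raw ones; here is a quick check against the 2002-03 row of the table:

```python
# 2002-03 row: PP opportunities, total PP minutes, PP goals for
pp_opp, pp_time_min, ppgf = 362.533, 617.668, 59.567

pp_opp_length = pp_time_min / pp_opp        # avg length of one PP, minutes
ppgf_per_2min = ppgf / (pp_time_min / 2.0)  # PP goals per 2 PP minutes
pp_time_per_gf = pp_time_min / ppgf         # PP minutes per PP goal

print(round(pp_opp_length, 4))   # 1.7038, matches the PPOpplength column
print(round(ppgf_per_2min, 4))   # 0.1929, matches the PPGFper2Min column
print(round(pp_time_per_gf, 2))  # 10.37, matches PPtimeGF up to rounding
```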

On a team level, normalized to 1.0, PP time lengths ranged from .9249 to 1.0640. Half of the teams were between .9844 and 1.0154 (i.e. half of the teams were within 1.5 % of the average). I don't remember if the normalization was to all seasons combined, or if I normalized each season individually. 8 seasons studied, from 2002-03 to 2010-11. 240 teams.
Like I said above, there is a very strong correlation between a team's power play percentage and their average powerplay time. The 14 teams with the lowest average PP time were all above average PP percentage wise. About 20 of the 21 teams with the highest average PP time were below average PP percentage wise.
If I'm not too confused, the estimated powerplay lengths should in half of the cases be less than 1.5 % wrong, and in half of the cases more than 1.5 % (but in this case lower than 7.5 %) wrong.
I don't know how to apply this to older seasons. I suspect the estimation formula would for some older seasons show considerably less accurate estimations than for the more recent seasons. But I don't know which seasons and to what extent.

I don't recall now whether I studied this on the player level too. On a team level, the more effective (or ineffective) you were on the PP, the more the estimated PP time will differ from the factual one. (That is a generalization, because we have no idea about the specific cases.) The same thinking might, or might not, be applicable to the player level.

Sorry for not being able to immediately answer your main questions.


By the way, this is a stats heavy post, and probably a bit unappealing to the general reader here. Dealing with things like this can often be a bit boring (and very time consuming) to me too. But somehow I think this history forum is a good place for this anyway, as this is a very stats oriented section. We use many statistical "components", and the better we can make each component, the better the components relying on them will be.
 

plusandminus

I was referring to your comment that you had done some work on scoring from one season to the next. Was this work also primarily focusing on the effects of schedule?

I didn't study it as thoroughly as you seem to have done. I didn't really feel/think it was as interesting as other things. But now that you posted this thread, and I got involved in a conversation with you, I have thought more about it. And I think what you're doing is interesting and a good approach. As you have pointed out in the thread, your method automatically accounts for several things, like partly strength of era. If I have the time, energy and focus, I intend to try to repeat your study and experiment with it a bit. I have things like age and nationality in my database. Nationality may be of interest if we for example want to study Canadian players in comparison to say European players.

As I said before, I do remember at least some of your post(s) on the effect of schedule on team/individual scoring. I thought at the time that your work was worthwhile and your methodology seemed sound. Thank you for the additional explanation, this only further affirms my previous belief in your work.

Thank you. I think I have found the coding I made, but don't recall what is what so I thought I might redo it. If so, I will try to iterate it the way you suggest.
(With "what is what", I mean that I use many variables and table columns, and GF in one case should be compared to GA in another, etc., and sometimes one should multiply and sometimes divide. I need to sort of sort it out again.)

Being interested in stats, I wish the NHL could have a schedule where each team played each other the same number of times, as is standard in other leagues and as was the case in the NHL too before it expanded.

This IMO is the type of effect that, once perfected, should be standardly incorporated into NHL adjusted statistics.

Indeed.

There is definitely a power play effect. This can be seen in many studies, including this one. Just as it seems proper that schedule is a standard adjustment, adjusting team/individual data from even strength vs. special teams scoring data seems like it should someday be standard as well. However, just as in the "assist per goal ratio", there is no standard for what the proper "even strength to special teams" ratio of scoring should be, since it varies over time.

Agree. But, maybe there are ways we can get around that, as at least partly suggested by your method.

As I tried to say yesterday... Traditional adjustment just makes each season equal, no matter what quality level. Each season gets the same number of goals per team to distribute throughout the league. First, we distribute the goals on a team level. Then we distribute them within each team.
The higher the gap between the best and worst teams, the more "favoured" (rightfully or not) the players on those teams will be compared to seasons where the gap was smaller.
The higher the gap between the best and "worst" players, scoring wise, within a team, the more favoured the best scorers will be.
"Traditional" adjustment of scoring is basically based on comparing teams and players to averages. But the average team of one era/season might be considerably better than the average team of another era. Same with players.
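A bare-bones version of that traditional adjustment, just to make the mechanics concrete (the 6.0 goals-per-game baseline is an arbitrary choice here):

```python
BASELINE_GPG = 6.0  # arbitrary target league scoring level (both teams)

def simple_adjusted_points(points, league_gpg, baseline=BASELINE_GPG):
    # scale by the ratio of the baseline to the season's scoring level;
    # note this says nothing about the *quality* of the average team
    return points * baseline / league_gpg

# 120 points in an 8.0-gpg season equals 90 in a 6.0-gpg season:
print(simple_adjusted_points(120, 8.0))  # 90.0
print(simple_adjusted_points(90, 6.0))   # 90.0
```

Which is exactly the point being made: two very different seasons can be flattened to the same number, regardless of how strong the average team or scorer actually was.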


I have done similar things, for example looking at what the top 3 scorers on each team averaged and what different tiers of scorers averaged, both in comparison to league averages.

I found this interesting, but it seems to be much more dependent on other factors which are not easily removed, such as the quality and distribution of talent in the league.

I agree.



Defensemen sound like a difficult way to study the strength of a season, which may make your results especially unique and interesting. I have thought looking at goalies would be another way to examine strength of season, but would also guess the small number of goalies (esp. in earlier eras) would yield a very small sample and less reliable data.

Goalies seem hard. Defenceman may be harder than forwards, yes.

Goaltending obviously influences +/- in a dramatic way. Does adjusted plus-minus factor in goaltending at all? I'm guessing it doesn't, which (if I'm correct about this) would be an instance of making an assumption out of practicality (much more work in an attempt to remove an effect which may be more random than significant to the results).

Regarding +/-, I'm currently leaning towards separating defencemen and forwards (which isn't black and white either). If we take forwards, they show a consistency in scoring, in that their scoring (except when playing with Mario, Gretzky, etc.) is fairly consistent from season to season. Their GA (goals against) may however change dramatically from season to season, and we also know that goaltending hugely affects +/-. So maybe the best thing for forwards would be to treat their "+" as it is, and then goalie-adjust their GA and then perhaps even halve it.

Because... It is often said (at least it used to be said) that the "goalie is half the team". So why then "discredit" forwards so much for the goals against they are on the ice for? Maybe points scored is a better measure of forward performance than +/- is? I have experimented with formulas giving a point e.g. 1.25, and being on the ice for a goal for without getting a point say 0.7. Or maybe even 1.4 and 0.55. Then we get their offensive value. (Now I should have learnt more about Alan Ryder's methods.)
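A sketch of that weighting idea (the 1.25/0.70 weights are the experimental values mentioned above, nothing established; the player's numbers are hypothetical):

```python
def offensive_value(points, on_ice_gf, w_point=1.25, w_no_point=0.70):
    """points: goals the player recorded a goal or assist on;
    on_ice_gf: all goals for while he was on the ice (>= points)."""
    no_point_gf = on_ice_gf - points  # on-ice goals for without a point
    return w_point * points + w_no_point * no_point_gf

# Hypothetical forward: 70 points, on ice for 100 goals for
print(round(offensive_value(70, 100), 1))  # 1.25*70 + 0.70*30 = 108.5
```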


I haven't looked at even strength win% in some time either. I think I've posted the last thoughts and formulas I had on the matter. It definitely seemed to produce some good results, just not sure the limits of its accuracy. I think the eventual end results, when combined with special teams data, could produce something similar to HR's "point shares".

Yes. I took it further and integrated PP and SH. I was going to post it here, and even started writing about the methodology, etc., but didn't follow it through.

First, I was going to give each team a number of points. (To make it simple one could just look at the standings and take their factual points there. I, however, used another method.) Then one distributes those points among the players on the team.

I like the method. It rewards players for playing and for contributing. +/- doesn't work like that and may actually "punish" a player for playing, compared to a teammate who is benched. The win method also produces results that can be meaningfully divided by games played.

I found the weighted differential in team win% with or without a player in the lineup to be a great metric, because it combined simplicity with direct measurement of what we all agree is the most important hockey value (winning).

Exactly.

The limitation is that for players who don't miss many games, the amount of data without them is very small, so the results are very unreliable.

Yes. I have tried to handle that wisely.
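A simplified, unweighted sketch of the with/without metric discussed above (the "weighted" part is omitted here, and the team records are hypothetical):

```python
def winpct_differential(wins_with, games_with, wins_without, games_without):
    """Team win% with the player in the lineup minus win% without him."""
    if games_with == 0 or games_without == 0:
        return None  # the small-sample limitation above, at its extreme
    return wins_with / games_with - wins_without / games_without

# Hypothetical: the team goes 30-10 with the player and 3-7 without him
print(round(winpct_differential(30, 40, 3, 10), 2))  # 0.75 - 0.30 = 0.45
```

The `None` branch is the whole problem in miniature: for a player who misses only a handful of games, the "without" sample is so small that the differential is mostly noise.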


I haven't yet dug deep into your analysis of the three "eras" you mention, but now is the time for others to give their thoughts on your findings. (If I remember right, what you write seems similar to the impressions others here have.)

Thanks again for positive words.
 

Czech Your Math

I am lizard king
Jan 25, 2006
5,169
303
bohemia
I didn't study it as thoroughly as you seem to have done. I didn't really feel/think it was as interesting as other things. But now that you posted this thread, and I got involved in a conversation with you, I have thought more about it. And I think what you're doing is interesting and a good approach. As you have pointed out in the thread, your method automatically accounts for several things, like partly strength of era. If I have the time, energy and focus, I intend to try to repeat your study and experiment with it a bit. I have things like age and nationality in my database. Nationality may be of interest if we for example want to study Canadian players in comparison to say European players.

I'd certainly be interested in what results you generate with a similar approach. Have you given any consideration to using regression? It seems to me that this could study several different variables without needing separate studies, and should have a relatively high degree of accuracy and low error. It would also eliminate some variables as irrelevant and so reduce the error when the calculations are redone without such variables.


Agree. But, maybe there are ways we can get around that, as at least partly suggested by your method.

As I tried to say yesterday... Traditional adjustment just makes each season equal, no matter what quality level. Each season gets the same number of goals per team to distribute throughout the league. First, we distribute the goals on a team level. Then we distribute them within each team.
The higher the gap between the best and worst teams, the more "favoured" (rightfully or not) the players on those teams will be compared to seasons where the gap was smaller.
The higher the gap between the best and "worst" players, scoring wise, within a team, the more favoured the best scorers will be.
"Traditional" adjustment of scoring is basically based on comparing teams and players to averages. But the average team of one era/season might be considerably better than the average team of another era. Same with players.

I just wonder how much the distribution of scoring within a team, for instance, really tells us. Gretzky's teams were, or were among, the top-scoring teams in the league. He outdistanced his teammates by incredible margins. Does that somehow make his production (relative to the league averages) better or worse? If so, by how much and why?

I definitely agree that the general strength of the league and distribution of talent within the league (among different teams) is an important factor though.

Regarding +/-, I'm currently leaning towards separating defencemen and forwards (which isn't black and white either). If we take forwards, they show a consistency in scoring, in that their scoring (except when playing with Mario, Gretzky, etc.) is fairly consistent from season to season. Their GA (goals against) may however change dramatically from season to season, and we also know that goaltending hugely affects +/-. So maybe the best thing for forwards would be to treat their "+" as it is, and then goalie-adjust their GA and then perhaps even halve it.

Because... It is often said (at least it used to be said) that the "goalie is half the team". So why then "discredit" forwards so much for the goals against they are on the ice for? Maybe points scored is a better measure of forward performance than +/- is? I have experimented with formulas giving a point e.g. 1.25, and being on the ice for a goal for without getting a point say 0.7. Or maybe even 1.4 and 0.55. Then we get their offensive value. (Now I should have learnt more about Alan Ryder's methods.)

Raw plus-minus is obviously a severely flawed statistic, influenced greatly by the quality of each player's team and by the assumption that each skater is equally responsible for each GF or GA while he is on the ice.

You make some good suggestions about dividing the responsibility less equally. It seems much easier to appropriately divide GF than it is to divide GA among those on the ice. Whatever the assumptions made, I would guess the end result is still going to be a very rough estimation and subject to a large error (as I believe points shares and even strength win% are).

Yes. I took it further and integrated PP and SH. I was going to post it here, and even started writing about the methodology, etc., but didn't follow it through.

First, I was going to give each team a number of points. (To make it simple one could just look at the standings and take their factual points there. I, however, used another method.) Then one distributes those points among the players on the team.

I like the method. It rewards players for playing and for contributing. +/- doesn't work like that and may actually "punish" a player for playing, compared to a teammate who is benched. The win method also produces results that can be meaningfully divided by games played.

I briefly reviewed the "Even Strength Win%" formulas which I created and have mixed feelings about them. As you say, it rewards players for playing and contributing, but my hunch is that it tends to give too much credit for simply "being there" on great teams and too little credit for being outstanding on hapless teams.

That's likely no accident, since it was created after Overpass expressed concern that his "adjusted plus-minus" system didn't give enough credit to lesser players on great teams.

I think with substantial further improvement, such a system might eventually be a viable alternative or complementary statistic to HR's "point share" system. However, there seem to be enough arbitrary assumptions inherent in either of these systems to leave me preferring "adjusted plus-minus" at the end of the day.

Yes. I have tried to handle that wisely.

It's impossible to get significant results for any player who didn't miss many games. No matter how wisely you handle the data, there's just not enough "without player X" data to make any sort of reliable conclusion.

I haven't yet dug deep into your analysis of the three "eras" you mention, but now is the time for others to give their thoughts on your findings. (If I remember right, what you write seems similar to the impressions others here have.)

Thanks again for positive words.

There seem to be inherent limits in comparing across eras, but especially from the O6 to more modern hockey. I think it's fairer to basically separate it into pre-expansion and post-expansion periods, with the '70s basically being the buffer zone. So Howe, Hull, etc. would be in the former group, while Esposito, Orr, etc. would be in the latter group.
 

Czech Your Math

Just wanted to give some "rule of thumb" average numbers for different periods, based on what I have so far (divide simple adjusted points by this number, so higher number means it was easier to produce adjusted points):

'46 to '56: 1.08

'57 to '63: 1.02
'64 to '67: 0.92
'68 to '72: 1.02
'73 to '79: 1.09
'80 to '92: 0.91
'93 to '07: 0.99

Note that this is a divisor for simple adjusted points (using league gpg and assists per goal), which can vary significantly from HR's adjusted points.

To adjust raw points, these numbers could be used:

'48 to '51: 0.98
'52 to '58: 0.92
'59 to '63: 1.01
'64 to '68: 0.90
'69 to '72: 1.03
'73 to '82: 1.21
'83 to '94: 1.13
'95 to '07: 0.93
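To make usage concrete, here is one way to apply the raw-points divisors above (looking ranges up by season end year is my assumption about how the table is meant to be read):

```python
# Era divisors for RAW points from the second table above; a higher
# divisor means it was easier to pile up raw points in that span.
RAW_POINT_DIVISORS = [
    (1948, 1951, 0.98), (1952, 1958, 0.92), (1959, 1963, 1.01),
    (1964, 1968, 0.90), (1969, 1972, 1.03), (1973, 1982, 1.21),
    (1983, 1994, 1.13), (1995, 2007, 0.93),
]

def era_adjusted_points(points, season_end_year):
    for start, end, divisor in RAW_POINT_DIVISORS:
        if start <= season_end_year <= end:
            return points / divisor
    raise ValueError("season outside the studied range")

# 130 raw points in 1980-81 vs 120 raw points in 1967-68:
print(round(era_adjusted_points(130, 1981), 1))  # 107.4
print(round(era_adjusted_points(120, 1968), 1))  # 133.3
```

So under these divisors the 1967-68 total, despite being 10 points lower raw, comes out well ahead once the era is accounted for.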
 

plusandminus

...(Scheduled adjusted scoring)...

I think I now have managed to redo my study. I have to say it's a bit tricky, and not quite as easy as I explained it some days ago. The results I got cannot be taken as they are, but have to be compared to standings where all games of a certain team have been removed.
During seasons with a balanced schedule, like in 1970-71, all teams get 1.0, which is the way I think it should be. 1980-81 is another balanced season.
During other seasons, the differences between teams' factual GF and their adjusted GF are up to 6 %.
Examples:
The lockout-shortened 1994-95 season was divided into two halves, where the teams of each half never met a team from the other half. Some of the Eastern teams seem "unfavoured". But I think this case is hard, as the halves were separated. Maybe the Western half simply had better goalscorers, or maybe the Eastern half had better goaltenders. (Looking at 1993-94 and 1995-96 for comparison may of course cast some light on that.)
The 2006-07 season, where some of the Western teams had low GA per game, is another case. Teams that had to play a lot against them had a harder time than other teams.
During the 1980s, Eastern teams would generally benefit from schedule adjusting. Edmonton was "favoured" by 0 to 2.3 %.

I think we have a problem here in that my method does not consider the factual strength of scorers. If the Western teams scored more in the 1980s, it may have been because they simply were better at scoring, rather than because they faced worse defence/goaltending. I think I would need to focus more on the two halves: how Western teams scored vs. Western teams compared to how they scored vs. Eastern teams, and vice versa.

I haven't iterated.
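A rough sketch of the kind of schedule adjustment described here, under my own simplifying assumptions (a single pass, no iteration; the team names and totals are invented): scale each team's GF by the average defensive strength, i.e. GA relative to the league average, of the opponents it actually faced.

```python
def schedule_adjust_gf(gf, ga, schedule):
    """Scale each team's goals-for by the average defensive strength
    (GA relative to league average) of the opponents it faced."""
    league_ga = sum(ga.values()) / len(ga)
    adj = {}
    for team, goals in gf.items():
        opponents = [opp for t, opp in schedule if t == team]
        avg_opp_def = sum(ga[opp] / league_ga for opp in opponents) / len(opponents)
        adj[team] = goals / avg_opp_def  # scoring on stingy defenses counts more
    return adj

# Toy three-team league with an unbalanced schedule
# (one row per game, from each team's perspective)
gf = {"A": 300, "B": 250, "C": 250}
ga = {"A": 250, "B": 200, "C": 350}
schedule = [("A", "B"), ("A", "B"), ("A", "C"),
            ("B", "A"), ("B", "A"), ("B", "C"),
            ("C", "A"), ("C", "B"), ("C", "B")]

adj = schedule_adjust_gf(gf, ga, schedule)
# Team A, who met stingy B twice and leaky C once, gets credited: 300 -> 320.0
```

An iterated version would then re-estimate each team's defensive strength from the adjusted numbers and repeat until convergence, which addresses the "better scorers vs. better defences" ambiguity raised above.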
 

seventieslord

Student Of The Game
Mar 16, 2006
36,080
7,132
Regina, SK
Yes I did. Are you referring to an Excel sheet named NHL68-06TOI.xls?

yes.

I haven't yet put that data into my database.
It's color coded. I suppose a white background means factual data. Blue seems to be recent seasons where players changed teams during the season and an estimation has been made; I find that estimation unreliable, as I found it to be way off for Ozolinsh, for example. Green seems to be completely estimated.
But is there really a way to know how wrong the estimated (green) data are, when we don't have any factual data to compare with?

Yes. I thought that's what we were talking about.

Take the formula used to estimate ice times, apply it to seasons where the results are known, compare the estimates to the actual results.

This was already done though. It's where the "96% correlation" thing comes from.

I have, however, posted about it here on the board. I got some replies saying that the differences overall looked small, which I don't agree with. The average error could have been, say, 6 % (or I may remember wrong).

6% I would be very comfortable with, considering these are estimates.
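To illustrate the earlier point that a high correlation is not the same thing as a low error, here is a toy backtest with invented ice-time numbers. The Pearson r comes out around .96 even though each individual estimate is off by roughly 6 % on average.

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented minutes-per-game for eight players: actual vs. estimated
actual    = [22.1, 20.5, 19.8, 18.0, 16.4, 15.2, 13.9, 12.5]
estimated = [23.4, 19.3, 21.0, 18.9, 15.3, 16.1, 13.0, 13.2]

r = pearson(actual, estimated)
mape = sum(abs(e - a) / a for a, e in zip(actual, estimated)) / len(actual) * 100
print(round(r, 3), round(mape, 1))  # 0.958 5.9
```

So both posters can be right: the estimates track real ice time closely in rank and scale (high r), while a typical individual estimate is still several percent off.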


Here is something I spent many hours on earlier this year.
Basically you're right. But average power play time actually does change both between seasons and between teams.
At least two things seem to affect their length:
1. Power play percentage. The better power play percentage a certain team had, the shorter their power plays tended to last on average.
2. Total number of penalties. A power play ends when (a) the penalty expires, (b) the power play team scores, (c) the power play team takes a penalty of its own, or (d) the game ends. If I remember right, this is not as important to account for as power play percentage (1), but it does seem to affect things.
I spent an awful lot of time trying to integrate these two parameters into the estimation formula, but it wasn't easy, and I got more and more dizzy. (Maybe this is a case for CzechYourMath.)

Below are seasonal data, showing league averages.
Seas|Seas2|PPopp|PPshots|PPtimeMin|PPGF|PPGA|PPGFperOpp|PPGAperOpp|PPGDperOpp|PPGFper2Min|PPGAper2Min|PPGDper2min|PPtimeGF|PPtimeGD|PPOpplength|SHsavePerc
1997|1998|380.154|450.885| 0.0000|57.346|10.000| 0.1508| 0.0263| 0.1245| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8724
1998|1999|359.111|440.222| 0.0000|56.778| 8.148| 0.1581| 0.0227| 0.1354| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8706
1999|2000|330.821|397.964| 0.0000|53.429| 7.714| 0.1615| 0.0233| 0.1382| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8661
2000|2001|376.067|449.400| 0.0000|62.567| 8.900| 0.1664| 0.0237| 0.1427| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8608
2001|2002|338.467|414.133| 0.0000|53.367| 7.333| 0.1577| 0.0217| 0.1360| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.0000| 0.8709
2002|2003|362.533|454.600|617.668|59.567| 7.667| 0.1643| 0.0211| 0.1432| 0.1929| 0.0248| 0.1681|10.3694|11.9011| 1.7038| 0.8679
2003|2004|347.567|428.700|587.632|57.233| 8.133| 0.1647| 0.0234| 0.1413| 0.1948| 0.0277| 0.1671|10.2673|11.9681| 1.6907| 0.8657
2005|2006|479.667|606.633|789.325|84.833|10.600| 0.1769| 0.0221| 0.1548| 0.2150| 0.0269| 0.1881| 9.3044|10.6330| 1.6456| 0.8598
2006|2007|397.833|514.633|662.948|69.967| 8.933| 0.1759| 0.0225| 0.1534| 0.2111| 0.0270| 0.1841| 9.4752|10.8621| 1.6664| 0.8633
2007|2008|351.367|468.933|570.239|62.367| 7.967| 0.1775| 0.0227| 0.1548| 0.2187| 0.0279| 0.1908| 9.1433|10.4823| 1.6229| 0.8669
2008|2009|340.933|485.967|549.594|64.600| 7.833| 0.1895| 0.0230| 0.1665| 0.2351| 0.0285| 0.2066| 8.5076| 9.6816| 1.6120| 0.8665
2009|2010|304.533|437.767|495.165|55.467| 6.367| 0.1821| 0.0209| 0.1612| 0.2240| 0.0257| 0.1983| 8.9273|10.0848| 1.6260| 0.8729
2010|2011|290.533|417.000|477.996|52.367| 6.867| 0.1802| 0.0236| 0.1566| 0.2191| 0.0287| 0.1904| 9.1279|10.5054| 1.6452| 0.8742
(GD is goal difference, or in this case "net" stats: SHGA-SHGF and PPGF-PPGA.)

It's not what you specifically asked for, but might perhaps be of interest anyway.
One can see what appear to be correlations in the table, but I eventually got very dizzy in my attempts at creating formulas to e.g. estimate average PP time from the different data.

On a team level, normalized to 1.0, PP time lengths ranged from .9249 to 1.0640. Half of the teams were between .9844 and 1.0154 (i.e. half of the teams were within 1.5 % of the average). I don't remember if the normalization was to all seasons combined, or if I normalized each season individually. 8 seasons studied, from 2002-03 to 2010-11. 240 teams.
Like I said above, there is a very strong correlation between a team's power play percentage and its average power play time. The 14 teams with the lowest average PP time were all above average PP-percentage-wise. About 20 of the 21 teams with the highest average PP time were below average PP-percentage-wise.
If I'm not too confused, the estimated power play lengths should in half of the cases be less than 1.5 % wrong, and in the other half more than 1.5 % (but in this case less than 7.5 %) wrong.
I don't know how to apply this to older seasons. I suspect the estimation formula would for some older seasons show considerably less accurate estimations than for the more recent seasons. But I don't know which seasons and to what extent.

I don't remember offhand if I studied this on the player level too. On a team level, the more effective (or ineffective) you were on the PP, the more the estimated PP time will differ from the factual one. (That is a generalization, because we have no idea about the specific cases.) The same thinking might, or might not, be applicable on the player level.
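One hedged way to model the PP%-length link described above (my own toy model, not the formula from the thread): treat power play goals as a Poisson process cut off at the full penalty length. A team's PP% then pins down its implied scoring rate, which in turn implies an average PP duration, and better PP units indeed come out with shorter average power plays, matching the observed correlation.

```python
import math

def expected_pp_length(pp_pct, full_length=120.0):
    """Expected PP duration in seconds for a 2:00 minor, given the
    probability pp_pct of scoring before the penalty expires.
    Assumes goals arrive at a constant (exponential) rate."""
    rate = -math.log(1 - pp_pct) / full_length     # implied goals per second
    # E[min(time to goal, full_length)] for an exponential arrival time:
    return (1 - math.exp(-rate * full_length)) / rate

# Better PP units end their power plays sooner on average:
print(round(expected_pp_length(0.20), 1))  # 107.6 seconds
print(round(expected_pp_length(0.15), 1))  # 110.8 seconds
```

This ignores majors, 5-on-3s, and the PP team taking penalties of its own, so it is only a directional sketch, but it reproduces the pattern in the table: high-percentage units have shorter average PP times.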

Sorry for not being able to immediately answer your main questions.


By the way, this is a stats heavy post, and probably a bit unappealing to the general reader here. Dealing with things like this can often be a bit boring (and very time consuming) to me too. But somehow I think this history forum is a good place for this anyway, as this is a very stats oriented section. We use many statistical "components", and the better we can make each component, the better the components relying on them will be.

That is interesting stuff. I figured that if any factor other than pure randomness affected a team's average time per PP, it would be their PP efficiency.
 

plusandminus

Registered User
Mar 7, 2011
1,404
268

The table below may be of interest to you (and/or others). It shows the percentage of Pts, etc. from forwards of different age. Seasons aggregated are 1942-43 to 2010-11.
Ages are measured at December 31 of the season. (I basically take the year of the season start minus the birth year of the player.)
I first calculate each season individually. I have then aggregated them and divided by 68 (the number of seasons studied).
Example (look at age 22 in the table): during a typical season, 22-year-olds have accounted for 7.580 % of the games played, 7.625 % of the goals, 7.135 % of the assists, and 7.348 % of the points. Their points percentage was slightly lower than their games played percentage, which is indicated by the last column (pPts/pGP = 0.969).

Age|pGP|pG|pA|pPts|PtsPerGP
17| 0.068| 0.036| 0.048| 0.043| 0.635
18| 0.511| 0.477| 0.451| 0.462| 0.904
19| 1.452| 1.401| 1.320| 1.356| 0.934
20| 3.670| 3.434| 3.315| 3.362| 0.916
21| 5.576| 5.256| 5.041| 5.129| 0.920
22| 7.580| 7.625| 7.135| 7.348| 0.969
23| 8.662| 8.874| 8.305| 8.551| 0.987
24| 8.919| 9.203| 8.926| 9.043| 1.014
25| 9.182| 9.314| 9.191| 9.243| 1.007
26| 8.752| 9.014| 8.882| 8.936| 1.021
27| 8.283| 8.339| 8.429| 8.388| 1.013
28| 7.403| 7.496| 7.648| 7.581| 1.024
29| 6.491| 6.720| 6.785| 6.758| 1.041
30| 5.628| 5.776| 5.930| 5.867| 1.042
31| 4.800| 4.850| 5.053| 4.968| 1.035
32| 3.815| 3.750| 3.969| 3.875| 1.016
33| 2.968| 2.806| 3.090| 2.970| 1.001
34| 2.218| 2.035| 2.297| 2.185| 0.985
35| 1.483| 1.398| 1.590| 1.508| 1.017
36| 0.961| 0.841| 0.953| 0.906| 0.942
37| 0.666| 0.579| 0.702| 0.650| 0.975
38| 0.391| 0.343| 0.417| 0.386| 0.988
39| 0.251| 0.231| 0.275| 0.256| 1.020
40| 0.123| 0.120| 0.142| 0.133| 1.081
41| 0.068| 0.043| 0.051| 0.048| 0.697
42| 0.052| 0.032| 0.045| 0.039| 0.760
43| 0.014| 0.000| 0.003| 0.002| 0.148
44| 0.006| 0.000| 0.000| 0.000| 0.000
45| 0.000| 0.000| 0.000| 0.000| 0.000
46| 0.000| 0.000| 0.000| 0.000| 0.000
51| 0.006| 0.004| 0.006| 0.005| 0.886
Sum|100|100|100|100|
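The last column of the table is simply the points share divided by the games-played share, so any row can be checked directly; for the age-22 row, for example:

```python
# Age-22 shares from the table above: % of GP and % of points
pGP, pPts = 7.580, 7.348
ratio = round(pPts / pGP, 3)
print(ratio)  # 0.969, matching the table's last column
```

A value below 1.0 means that age group produced fewer points than its share of games played would suggest, and above 1.0 means more.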

I think the table reveals some things. For example, players under the age of 29 generally have a higher goal percentage than assist percentage, while players over 30 generally have the opposite. The differences are small, though.
Another thing is Gordie Howe being responsible for the age-51 row in the table.

Players seem to peak somewhat around age 25 (where GP, G, A and Pts are highest). On the other hand, their points-to-games ratio is highest around ages 29-30. Perhaps injuries start to take their toll on players during their mid-20s, so that some can't play as much as before, while still being able to produce when they do play? This study is fairly new to me, so I haven't looked further.

Having played around with the data, for example by only including forwards that finished top-x among forwards in scoring, my impression is that age 27 is generally a peak age.

The table above does not show trends. I will probably look more into that later, to for example see if peak/etc. may have changed from era to era.

For your study, I think age is important to consider. Maybe it would be somewhat ideal, for each season, to focus mainly on players that are at a fairly "stable" age? At least you should look out for cases where a significant share of the players in your group are at an "improving" or "declining" age.

Edit: Notice that this post is completely forwards only. Everything in it has to do only with forwards.
Edit: It might have been better if I had done this for Canadian players only, as European players ("green unit", etc.) starting their NHL careers at a later age might "bias" things.

Late Edit... And here's looking at Canadian forwards only (or rather players from 'canada','england','ireland','scotland','wales','south wales'):
Age|pGP|pG|pA|pPts|PtsPerGP
17| 0.073| 0.038| 0.050| 0.045| 0.618
18| 0.543| 0.505| 0.490| 0.497| 0.915
19| 1.473| 1.446| 1.346| 1.390| 0.943
20| 3.717| 3.462| 3.332| 3.384| 0.910
21| 5.565| 5.213| 4.956| 5.061| 0.909
22| 7.461| 7.425| 6.902| 7.130| 0.956
23| 8.450| 8.564| 8.026| 8.260| 0.977
24| 8.782| 8.962| 8.676| 8.797| 1.002
25| 9.022| 9.213| 9.028| 9.106| 1.009
26| 8.714| 8.937| 8.797| 8.854| 1.016
27| 8.227| 8.257| 8.362| 8.314| 1.011
28| 7.411| 7.496| 7.619| 7.564| 1.021
29| 6.474| 6.638| 6.731| 6.692| 1.034
30| 5.701| 5.935| 6.068| 6.014| 1.055
31| 4.816| 4.928| 5.184| 5.078| 1.054
32| 3.857| 3.893| 4.113| 4.019| 1.042
33| 3.057| 2.938| 3.246| 3.116| 1.019
34| 2.290| 2.129| 2.413| 2.291| 1.001
35| 1.587| 1.521| 1.708| 1.628| 1.026
36| 1.048| 0.938| 1.077| 1.018| 0.972
37| 0.727| 0.664| 0.795| 0.740| 1.017
38| 0.406| 0.375| 0.454| 0.421| 1.036
39| 0.285| 0.274| 0.328| 0.305| 1.073
40| 0.148| 0.149| 0.174| 0.163| 1.104
41| 0.082| 0.053| 0.061| 0.058| 0.708
42| 0.058| 0.041| 0.053| 0.048| 0.822
43| 0.013| 0.000| 0.002| 0.001| 0.077
44| 0.006| 0.000| 0.000| 0.000| 0.000
45| 0.000| 0.000| 0.000| 0.000| 0.000
51| 0.007| 0.005| 0.006| 0.006| 0.880
At quick first glance, I see no major differences.
 
Last edited:

Czech Your Math

I am lizard king
Jan 25, 2006
5,169
303
bohemia
The table below may be of interest to you (and/or others).

Actually a bit surprised by that data. I looked at very top forwards years ago, and remember their best years seemed to be ~22-29.

I would think very top players would tend to peak later than average, so wouldn't expect the average PPG highest at ages 28-31. Defensemen peaking later may have a good deal to do with this.
 

blogofmike

Registered User
Dec 16, 2010
2,178
927
Actually a bit surprised by that data. I looked at very top forwards years ago, and remember their best years seemed to be ~22-29.

I would think very top players would tend to peak later than average, so wouldn't expect the average PPG highest at ages 28-31. Defensemen peaking later may have a good deal to do with this.

I still think top players peak in their early-to-mid twenties. The data may be skewed by the fact that anyone still playing at 30-31 is a good player, otherwise they would have been culled, but it is not yet an age where players stop getting top minutes.

Players who are 22 include more marginal players, like prospects who may or may not pan out, than later age groups, which have smaller shares of GP.

Gretzky, Lemieux, Yzerman were far better PPG producers at 24 than at 31, but top scorers account for a higher % of the games played at age 31 than at age 24, where numbers are dampened by the high number of GP by lesser players.
 

Czech Your Math

I am lizard king
Jan 25, 2006
5,169
303
bohemia
I still think top players peak in their early-to-mid twenties. The data may be skewed by the fact that anyone still playing at 30-31 is a good player, otherwise they would have been culled, but it is not yet an age where players stop getting top minutes.

Players who are 22 include more marginal players, like prospects who may or may not pan out, than later age groups, which have smaller shares of GP.

Gretzky, Lemieux, Yzerman were far better PPG producers at 24 than at 31, but top scorers account for a higher % of the games played at age 31 than at age 24, where numbers are dampened by the high number of GP by lesser players.

Good point, makes sense.
 

plusandminus

Registered User
Mar 7, 2011
1,404
268
Actually a bit surprised by that data. I looked at very top forwards years ago, and remember their best years seemed to be ~22-29.

I would think very top players would tend to peak later than average, so wouldn't expect the average PPG highest at ages 28-31. Defensemen peaking later may have a good deal to do with this.

It's not really points per game, even though I happened to call the column that. It's rather a kind of ratio (the points share divided by the games played share).
Notice that the differences are small.

I still think top players peak in their early-mid twenties. The data may be skewed by the fact that anyone who is playing at 30-31 is a good player, otherwise they would have been culled, but the age is not old enough where everyone that age isn't getting top minutes any more.

Players who are 22 include more marginal players, like prospects who may or may not pan out than later age groups that have smaller shares of GP.

Gretzky, Lemieux, Yzerman were far better PPG producers at 24 than at 31, but top scorers account for a higher % of the games played at age 31 than at age 24, where numbers are dampened by the high number of GP by lesser players.

Yes, it is uncommon to win the scoring title when 30+ years old.
The table below focuses on being a top-5 scorer, and here the peak seems to be at age 26. I have excluded Gretzky, Mario and Gordie, who sort of "always" were top-5 scorers (but I still count only actual top-5 finishers, so if Mario and Gretzky were both top-5 in a season, only three other players get counted that season). From 1942-43 to 2010-11.

Age|Cnt
19|3
20|8
21|12
22|21
23|18
24|29
25|31
26|35
27|29
28|23
29|23
30|15
31|18
32|15
33|9
34|3
35|3
36|1

Here are the "scoring title" winners only (among forwards), again excluding Gretzky, Mario and Gordie:
Age|Cnt
19|1
20|1
22|5
23|2
24|5
25|3
26|10
27|4
28|4
29|5
30|2
31|2
32|2
Esposito was the leading forward scorer for seven straight seasons, from ages 26 to 32, so if you want to exclude him you should subtract 1 from each of those ages.
Edit: Speaking of Esposito, his best seasons coincide very much with playing with a peak/prime Bobby Orr.
 
Last edited:

plusandminus

Registered User
Mar 7, 2011
1,404
268
I have now created a large matrix, with seasons on the vertical axis and ages on the horizontal axis, showing percentages per season. It's a 23-column by 68-row matrix, and although it seems to fit, it seems a bit "brutal" to post it here (not wanting to make stats-disliking persons fall off their chairs).
The percentage for a certain age varies a lot from season to season.
There is consistency when looking diagonally, which lets you follow how the players born in a certain year perform throughout their careers at different ages. That is, almost no matter what year the players were born, their scoring generally tended to build up until age 25-26 and then start dropping. But there are exceptions, and I have only glanced at it, so don't think of it as some kind of "law".
I may try to reorganize the matrix to show birth year on the vertical axis and age on the horizontal axis.
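The reorganization mentioned above (birth year down, age across, so each cohort reads straight across instead of diagonally) can be sketched with a simple pivot. The input format here is my own assumption.

```python
def pivot_to_cohorts(shares):
    """shares: {(season_start_year, age): pts_share}
    -> {birth_year: {age: pts_share}}, so each birth-year cohort
    can be read as one row instead of a diagonal."""
    cohorts = {}
    for (season, age), share in shares.items():
        birth_year = season - age   # age as of Dec 31, per the method above
        cohorts.setdefault(birth_year, {})[age] = share
    return cohorts

# Toy input: the 1985-born cohort seen in two seasons, plus one older player
shares = {(2007, 22): 8.1, (2008, 23): 8.9, (2007, 25): 9.0}
cohorts = pivot_to_cohorts(shares)
print(cohorts[1985])  # {22: 8.1, 23: 8.9}
```

With the full 68-season matrix as input, each row of the result would trace one cohort's scoring share from its first NHL season onward.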

Edit: Currently, the players born in 1985 (Perry, Getzlaf, Paul Stastny, Mike Richards, Jeff Carter, Bergeron, Zajac, Horton and so on) dominate by a large margin, with 15 players among the top-100 scoring Canadian forwards. The same guys have dominated during the last 4 or so seasons, far ahead of their colleagues born in 1984 and 1986.
 
Last edited:
