Is there a stat for adj. save percentage that weighs scoring chances more heavily?

kmad

riot survivor
Jun 16, 2003
34,133
61
Vancouver


If not, do we have sufficient data to make it happen? Is scoring chance information available anywhere?
 

Larry Hoover

Registered User
Sep 16, 2012
1,009
1


If not, do we have sufficient data to make it happen? Is scoring chance information available anywhere?

Interested in this too. Also interested in a stat like SV% Close (for when the game is within 2 goals, 1 goal, or tied). Methinks Pekka Rinne would have a much better SV% Close than his overall SV%, because when he loses, boy does he lose badly.
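For what it's worth, a stat like SV% Close is easy to compute if each shot in the play-by-play is tagged with the score state at the time of the shot; a minimal sketch (data and field names are made up):

```python
# Sketch: save percentage filtered by score state ("SV% Close").
# Each shot record is assumed to carry the score differential, from the
# goalie's perspective, at the time of the shot; field names are invented.

def sv_pct(shots, max_diff=None):
    """Save % over shots, optionally only when |score diff| <= max_diff."""
    if max_diff is not None:
        shots = [s for s in shots if abs(s["score_diff"]) <= max_diff]
    faced = len(shots)
    saves = sum(1 for s in shots if not s["goal"])
    return saves / faced if faced else None

# Toy data: was it a goal, and the score differential when the shot came.
shots = [
    {"goal": False, "score_diff": 0},
    {"goal": True,  "score_diff": 0},
    {"goal": False, "score_diff": 1},
    {"goal": False, "score_diff": 3},  # blowout shot, excluded when "close"
    {"goal": True,  "score_diff": 4},  # ditto
]

print(round(sv_pct(shots), 3))              # overall: 3/5 = 0.6
print(round(sv_pct(shots, max_diff=1), 3))  # close:   2/3 ≈ 0.667
```

The same filter works for any score band (tied only, within 2, etc.) by changing `max_diff`.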
 

me2

Go ahead foot
Jun 28, 2002
37,903
5,595
Make my day.
Interested in this too. Also interested in a stat like SV% Close (for when the game is within 2 goals, 1 goal, or tied). Methinks Pekka Rinne would have a much better SV% Close than his overall SV%, because when he loses, boy does he lose badly.

A bit like Luongo this year: 18 starts, and IIRC something like .929 (2.0 GAA) in 16 of them, with a third of his goals allowed (15 goals) coming in the other 2.
 

Bear of Bad News

Your Third or Fourth Favorite HFBoards Admin
Sep 27, 2005
13,549
27,108
Taco, when you get time, I wouldn't mind hearing your take on this if it's of interest to you...

Sorry for the delay in response; I was in Ireland for eleven days.

The notion of evaluating goaltenders on a benchmark shot distribution is an interesting one, and something that's been tried out to a lesser degree in other articles - in particular, I've seen many analyses that rely upon weighting all goaltenders' ESSV, PKSV and PPSV by the same proportion of situations.
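That reweighting can be sketched in a few lines: a goalie's situational save percentages recombined under one common shot mix, so a goalie who happens to face an unusual share of PK shots isn't penalized for it. The shot-share weights below are illustrative, not actual league figures.

```python
# Sketch: benchmark-adjusted SV%. Recombine a goalie's even-strength,
# penalty-kill, and power-play save percentages using one league-wide
# shot distribution instead of the goalie's own.
# The shares below are made up for illustration.

LEAGUE_SHOT_SHARE = {"ES": 0.80, "PK": 0.15, "PP": 0.05}

def benchmark_sv(situational_sv):
    """Weighted average of ES/PK/PP SV% under the benchmark shot mix."""
    return sum(LEAGUE_SHOT_SHARE[k] * situational_sv[k]
               for k in LEAGUE_SHOT_SHARE)

goalie = {"ES": 0.925, "PK": 0.870, "PP": 0.910}
print(round(benchmark_sv(goalie), 4))  # 0.916
```

Two goalies with identical situational SV%s then grade out identically, regardless of how many PK shots each actually faced.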

I've done something similar (as have others) where one "risk adjusts" each shot faced by a goaltender. In this fashion, instead of molding a goaltender's shot distribution to a league norm, you evaluate a goaltender based upon league-normed expectations. So, for instance, if a goaltender faces a shot that (historically) has a 20% chance of going into the net, he gets credit for 0.2 goals prevented (if he stops it) and gets credit for -0.8 goals prevented (if he doesn't).
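The 0.2 / -0.8 accounting above amounts to summing, over every shot faced, the gap between the shot's historical goal probability and the outcome; a minimal sketch with invented probabilities:

```python
# Sketch: "risk-adjusted" goaltending credit. Each shot carries a
# historical goal probability p. Stopping it earns p goals prevented;
# allowing it earns p - 1. The sum is goals saved relative to a
# league-average expectation. Probabilities here are invented.

def goals_saved_above_expected(shots):
    """shots: list of (goal_probability, was_goal) pairs."""
    return sum(p - 1 if was_goal else p for p, was_goal in shots)

shots = [
    (0.20, False),  # dangerous chance, saved:  +0.20
    (0.20, True),   # dangerous chance, scored: -0.80
    (0.05, False),  # routine shot, saved:      +0.05
]
print(round(goals_saved_above_expected(shots), 2))  # -0.55
```

A perfectly average goalie nets out near zero; the stat rewards stopping dangerous chances more than routine ones.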

Ultimately, both of these types of models are only going to be as good as the RTSS data that feeds into them. Not all wrist shots from location X are the same (and even this assumes that each scorer accurately categorizes shot type and location), and once you start distinguishing between "Brett Hull" and "Chris Dingman", you run into sample-size considerations.

Accepting this, the math behind what Michael did appears sound, and it's an interesting way of viewing the problem.
 

Cunneen

Registered User
May 8, 2013
94
0
Sorry for the delay in response; I was in Ireland for eleven days.

The notion of evaluating goaltenders on a benchmark shot distribution is an interesting one, and something that's been tried out to a lesser degree in other articles - in particular, I've seen many analyses that rely upon weighting all goaltenders' ESSV, PKSV and PPSV by the same proportion of situations.

I've done something similar (as have others) where one "risk adjusts" each shot faced by a goaltender. In this fashion, instead of molding a goaltender's shot distribution to a league norm, you evaluate a goaltender based upon league-normed expectations. So, for instance, if a goaltender faces a shot that (historically) has a 20% chance of going into the net, he gets credit for 0.2 goals prevented (if he stops it) and gets credit for -0.8 goals prevented (if he doesn't).

Ultimately, both of these types of models are only going to be as good as the RTSS data that feeds into them. Not all wrist shots from location X are the same (and even this assumes that each scorer accurately categorizes shot type and location), and once you start distinguishing between "Brett Hull" and "Chris Dingman", you run into sample-size considerations.

Accepting this, the math behind what Michael did appears sound, and it's an interesting way of viewing the problem.



I am most interested in Schuckers's approach because he seems to have used perhaps the most mathematically advanced method I've seen for evaluating goalies. Now, advanced doesn't necessarily mean better, but from what I have read of Schuckers's work I have come to trust his analysis. Obviously I trust the math (given that he is a professor of statistics at St. Lawrence), but he also seems to do a good job in his past papers.
 

Bear of Bad News

Your Third or Fourth Favorite HFBoards Admin
Sep 27, 2005
13,549
27,108
I am most interested in Schuckers's approach because he seems to have used perhaps the most mathematically advanced method I've seen for evaluating goalies. Now, advanced doesn't necessarily mean better, but from what I have read of Schuckers's work I have come to trust his analysis. Obviously I trust the math (given that he is a professor of statistics at St. Lawrence), but he also seems to do a good job in his past papers.

You'd be surprised at some of the backgrounds of people that we have contributing here.
 

Cunneen

Registered User
May 8, 2013
94
0
You'd be surprised at some of the backgrounds of people that we have contributing here.

I was not aware; my apologies. I don't mean to insult anyone here either. I'm happy if we have some people with a lot of data-analysis experience or backgrounds, so that I can learn from them (I'm only 17, but I plan on majoring in some form of applied and computational mathematics and statistics in college).
 

StormCast

Registered User
Jan 26, 2008
4,691
2,808
Raleigh, NC
There are so many variables that it's hard to quantify even if you could distill it down to scoring chances.

I've always thought a version of baseball's ERA would be interesting. Just as an error is subject to the interpretation of the official scorer and ultimately impacts the pitcher's stats, a similar approach could be adopted for goalies.

They let in a softie, it's earned. An obvious D breakdown leads to a doorstep goal and it's unearned.
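Under that scoring convention, the stat is just a filtered goals-against average; a sketch with invented data and field names, where "earned" reflects the official scorer's judgment call:

```python
# Sketch of a goalie "ERA": only goals judged "earned" (no defensive
# breakdown, per a scorer's judgment, like errors in baseball) count
# against the goalie, scaled to a per-60-minutes rate.
# Data and field names are invented.

def earned_gaa(goals, minutes_played):
    """Earned goals-against average per 60 minutes."""
    earned = sum(1 for g in goals if g["earned"])
    return earned * 60.0 / minutes_played

goals = [
    {"earned": True},   # soft goal, on the goalie
    {"earned": False},  # doorstep tap-in after a D breakdown
    {"earned": True},
]
print(earned_gaa(goals, minutes_played=120))  # 1.0
```

As with errors in baseball, the whole stat inherits the subjectivity of the earned/unearned call.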
 

Bear of Bad News

Your Third or Fourth Favorite HFBoards Admin
Sep 27, 2005
13,549
27,108
I was not aware; my apologies. I don't mean to insult anyone here either. I'm happy if we have some people with a lot of data-analysis experience or backgrounds, so that I can learn from them (I'm only 17, but I plan on majoring in some form of applied and computational mathematics and statistics in college).

No offense taken (so no worries). I'm just genuinely impressed at some of the backgrounds I've come across here (including Michael's) - it's one of the things that makes this particular sub-forum so interesting.
 

Talks to Goalposts

Registered User
Apr 8, 2011
5,117
371
Edmonton
Sorry for the delay in response; I was in Ireland for eleven days.

The notion of evaluating goaltenders on a benchmark shot distribution is an interesting one, and something that's been tried out to a lesser degree in other articles - in particular, I've seen many analyses that rely upon weighting all goaltenders' ESSV, PKSV and PPSV by the same proportion of situations.

I've done something similar (as have others) where one "risk adjusts" each shot faced by a goaltender. In this fashion, instead of molding a goaltender's shot distribution to a league norm, you evaluate a goaltender based upon league-normed expectations. So, for instance, if a goaltender faces a shot that (historically) has a 20% chance of going into the net, he gets credit for 0.2 goals prevented (if he stops it) and gets credit for -0.8 goals prevented (if he doesn't).

Ultimately, both of these types of models are only going to be as good as the RTSS data that feeds into them. Not all wrist shots from location X are the same (and even this assumes that each scorer accurately categorizes shot type and location), and once you start distinguishing between "Brett Hull" and "Chris Dingman", you run into sample-size considerations.

Accepting this, the math behind what Michael did appears sound, and it's an interesting way of viewing the problem.




The issue is the accuracy of the RTSS data for shot location. It's brutally inaccurate, to the point of uselessness, which seems to be why exercises like DIGR don't end up having much predictive or explanatory value: GIGO.
 

Bear of Bad News

Your Third or Fourth Favorite HFBoards Admin
Sep 27, 2005
13,549
27,108
The issue is the accuracy of the RTSS data for shot location. It's brutally inaccurate, to the point of uselessness, which seems to be why exercises like DIGR don't end up having much predictive or explanatory value: GIGO.

Agreed - I should have said this more clearly.
 

Talks to Goalposts

Registered User
Apr 8, 2011
5,117
371
Edmonton
Agreed - I should have said this more clearly.

It's a bummer that the NHL is so bad at that stuff, especially seeing what they're doing in baseball and basketball these days.

I'm friends with a guy who's building his own system of shot value based on video, incorporating both shot location and shooter movement rather than the RTSS stuff; he's supposedly getting close to finishing it. Hopefully he'll be able to crack the problem.
 

I am Lorde

Registered User
Feb 20, 2013
107
0
It's a bummer that the NHL is so bad at that stuff, especially seeing what they're doing in baseball and basketball these days.

I'm friends with a guy who's building his own system of shot value based on video, incorporating both shot location and shooter movement rather than the RTSS stuff; he's supposedly getting close to finishing it. Hopefully he'll be able to crack the problem.

I'd be interested in seeing what your friend comes up with. It would be great if a system like that could be adopted by the NHL, where shot location, shot type (wrist, slap, etc.), and the shooter's movement at the time of the shot could all be recorded and databased. Then you could see how valuable the shots a player is taking or setting up are.
Though it seems like a lot of video for one guy to go through :/
 
