I can see how it can be difficult to grasp how I am using the subjective evaluations, so here are further explanations. I rate players' attributes, which are converted into an overall score. This score is used as a sort of expected value, or you could say a barometer, to remove anomalies in statistical performance (cases where the player overperformed or underperformed based on what you would expect). The software also treats those evaluations as a sort of projection of the player's skills, affecting the final score again at the end. You can often see players with good stats but bad skating fall in the draft, so I particularly value skating so the software can match that. It therefore matters that I watch the players in order to have the appropriate subjective evaluations. If you were to use the software you might obtain different results, but they should normally tend to look like mine.
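To make the idea above concrete, here is a minimal hypothetical sketch of that flow: subjective attribute grades are combined into an overall "expected value", and statistical production is then blended toward that expectation to smooth out over- or under-performance. All weights, field names, and scales here are illustrative assumptions, not the actual formula used by the software.

```python
# Illustrative attribute weights; skating is deliberately weighted heavily,
# matching the stated preference above. These numbers are placeholders.
WEIGHTS = {"skating": 0.30, "skill": 0.25, "shot": 0.15,
           "defense": 0.15, "compete": 0.15}

def overall_rating(attrs):
    """Weighted average of subjective attribute grades (each on a 0-10 scale)."""
    return sum(WEIGHTS[k] * attrs[k] for k in WEIGHTS)

def adjusted_score(attrs, points_per_game, blend=0.5):
    """Blend statistical production with the subjective projection.

    The subjective overall acts as the barometer: a player whose stats far
    exceed (or trail) the eye-test rating is pulled back toward it.
    """
    expected = overall_rating(attrs)      # projection from subjective grades
    stat_score = points_per_game * 10.0   # naive stat-to-score mapping
    return blend * expected + (1 - blend) * stat_score

# A strong skater with modest production still grades reasonably well.
good_skater = {"skating": 9, "skill": 7, "shot": 6, "defense": 7, "compete": 8}
print(round(adjusted_score(good_skater, 0.6), 2))  # prints 6.8
```

This is why watching the players matters under such a scheme: the stat-only score (6.0 here) gets pulled up by the subjective projection (7.6) rather than standing alone.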
I have been entering everything myself; I'll be looking into EP's API or a web scraper, as it could save me a lot of time.
Nice to see people sharing my interest; I believe there is real potential in that type of approach. I am not sure I fully understood your last question. I just started out less than a year ago on my own, sort of as a challenge I set myself (thinking I could get better scouting results with a well-programmed piece of software), and here I am now. Do you mean how I achieved those types of results with my algorithm?
Is there a reason why you think subjective opinion + "software" is superior to subjective opinion alone?
No, just about how you got into it in general. I've used EP's API before and I'd be happy to help you out.
Here is something I have said in the past on that subject to people I got in contact with:
"I firmly believe that this software can be a useful tool to support the evaluation of prospects; in fact, I would even go as far as to say that it has the potential to give better results than the standard way of evaluating players. When I started doing the exercise of creating my own rankings of draft-eligible players, I quickly realized how complex of a mental process it is, considering all the variables that have to be taken into account when evaluating a player (speed, skill, shot, defensive game, compete level, age, offensive production, quality of teammates, league difficulty, ice time, height, weight, tournament play, etc., etc., while assigning to each criterion its degree of importance), ...
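The multi-variable weighting problem described in that quote can be sketched in a few lines: each criterion is first normalized to a common 0-1 scale, then combined according to an assumed degree of importance. The criteria, expected ranges, and weights below are placeholders for illustration, not the software's real parameters.

```python
CRITERIA = {
    # name: (importance weight, min expected value, max expected value)
    "points_per_game":   (0.35, 0.0, 2.0),
    "league_difficulty": (0.25, 0.0, 1.0),  # e.g. 1.0 = toughest league
    "age_bonus":         (0.15, 0.0, 1.0),  # younger for the class = higher
    "ice_time_share":    (0.15, 0.0, 1.0),
    "size_index":        (0.10, 0.0, 1.0),
}

def normalize(value, lo, hi):
    """Clamp a raw value into [0, 1] relative to its expected range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def composite_score(player):
    """Importance-weighted sum of normalized criteria, scaled to 0-100."""
    total = sum(w * normalize(player[name], lo, hi)
                for name, (w, lo, hi) in CRITERIA.items())
    return 100.0 * total

prospect = {"points_per_game": 1.2, "league_difficulty": 0.6,
            "age_bonus": 0.8, "ice_time_share": 0.7, "size_index": 0.5}
print(round(composite_score(prospect), 1))  # prints 63.5
```

The point of offloading this to code is exactly the one made above: a human can hold five weighted criteria in their head only roughly, while the software applies the same trade-offs consistently across hundreds of prospects.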
Apparently the snake oil impresses some.
If I ever own an NHL team I'm hiring you as my head scout. Awesome list with tons of work put into it.
I can see the value of keeping detailed notes as an aid to memory. As much as putting a concrete number for a few subjective dimensions helps you hold on to information you might forget, it necessarily glosses over intangibles and subtleties as well. Forcing all the numbers through some ad hoc formulas (correct me if they are more than that) doesn't seem better to me than going with an expert's subjective opinion in the first place.
That said, your rankings seem as good as the next guy's.
Can you compare TOTAL scores across years? Or is it meant to be an index showing relative value within each year's draft?
Fantastic work, nice to see the algorithm is as high on Wahlstrom as I am and low on Tkachuk as well.
Interesting the hit Zadina takes in your software.
Information is a useful tool.
Depending too much on this stuff gets you where Florida and Arizona are today.
I think some teams use it because other teams are and don’t want to seem like they are missing anything.
Give me an old scout who can see the intangibles in a player over some data filled software package any day.
That'll drive a guy to drink.
2017
5. Elias Pettersson
33. Nicolas Hague
55. Josh Brook
64. Max Gildon
95. Ostap Safin
135. Calle Sjalin
181. Alexander Chmelevski
188. Pavel Shen
2016
5. Matthew Tkachuk
64. Vitaly Abramov
140. Tim Gettinger
154. Jesper Bratt
184. Stepan Falkovsky
194. David Bernhardt
2015
23. Boeser (That's an important one; easy to say you called it, but I actually found my old crappy list https://bit.ly/2H0Y5X9 and no one drafted below him is in front of him on my list. The list matches my MTL picks, having Sprong and Kylington high.)
66. Filip Ahl
114. Dmytro Timashov (I think)
144. Cooper Marody
149. Nikita Korostelev
174. Gustav Bouramman (I think)
210. Nathan Noel
That's probably the year I followed the least for the later rounds, ending up with pretty bad results; not fully sure about the selections though.
2014
6. Nikolaj Ehlers (damn, though most people would have gone Ehlers or Nylander there, to be fair, I believe)
24. David Pastrnak (oh god)
36. Roland McKeown
66. Brayden Point (ok that year would have been insane)
126. Ondrej Kase (wait... wasn't over)
156. Adam Ollas Mattsson
186. Vladimir Tkachev
2013
9. Valeri Nichushkin (sorry for that)
24. Hunter Shinkaruk (same mistake)
85. Pavel Buchnevich
115. Eric Roy (damn, you missed the good ones: Bjorkstrand in the late 3rd, and Lehkonen with the 2nd)
145. Lucas Wallmark
175. Nikita Tryamkin (got him 1 year before!)
205. Brendan Harms
I did not follow that draft closely enough before that, but your former Vancouver Giant Gallagher would be on your team for sure (though I may have messed up some other picks, who knows?).
Pastrnak-Point-Boeser
Ehlers-Buchnevich-Tkachuk
Bratt-Kase-Pettersson
Not bad; trade whatever extra for a D. You got 3 good Ds in the early rounds of 2017, so that's a good bank to start securing the future too.
Information is a useful tool.
Depending too much on this stuff gets you where Florida and Arizona are today.
I think some teams use it because other teams are and don’t want to seem like they are missing anything.
Give me an old scout who can see the intangibles in a player over some data filled software package any day.
Don't mistake this for serious analytics or machine learning! That can't replace expert judgement yet, but used right it is a great complement to it.
This isn't that, from what I can see.
Subjective ratings are being fed into an ad hoc formula and tweaked when the results don't look "right". It is just a roundabout way of doing subjective rankings, which is why it looks an awful lot like the consensus rankings. There's no reason to think that where it deviates, it is on to something.
It is impossible to know a player's defensive game or competitiveness purely by looking at stats; I have no choice but to add that data in the form of subjective evaluations to get a complete overall picture.