The issue with that is that the list would be based on incomplete information. If you're a talented regional scout, you don't really know much about players outside your region. You may be right most of the time in your assessments of the players you've actually viewed. And sometimes rankings are judgement calls based on individual biases. The player you think could turn out to be the best in the draft might not be the guy you'd take #1 overall, for any number of reasons.
Gillis attempted to draft using past data, prioritizing regions based on the number of NHL players each league had produced.
For one thing, AI is a f***ing moron. People seem to think it's this massive cure-all for everything. But AI machine learning is ultimately just a really complex parrot. It mimics whatever you train it on, which means it's inherently backward-looking. It can't truly innovate or project. It's just a really crappy, annoying way of compiling human-generated information and material and distilling it down to a common denominator.
But this specifically is a big problem in how you'd even set about assessing the "skill" of an individual scout. Contrary to sites like this, where people somehow individually compile lists that are 200 or 300+ players long...and somehow claim to have seen enough of these players for those rankings to be credible, actual NHL scouts tend to be more specialized by region. They'll have a read on players within that area based on more extensive viewings, interviews, and conversations with coaches, teammates, the equipment guy, whoever.
But you can't really frame a bigger picture with that, because it's not reasonable to expect scouts to be fully versed in all of the prospects from all of the regions. And even within a region, some years simply produce a lackluster crop of prospects. There are clearly strong years and weak years for every league/region.
The other aspect of this that doesn't work is that in order to even assess whether a scout is "performing well" within their own region...you have to set out some sort of criteria for what "good performance" looks like.
Who is better: the scout who consistently gets "on base" with a bunch of guys who top out as NHL/AHL tweener journeymen and racks up a ton of "games played" across a plethora of pretty cruddy players? Or the guy who swings for the fences, whiffs on most of his picks, but hits an absolute home run by picking out a hidden-gem superstar? Obviously a hyperbolic hypothetical...but illustrative of the subjectivity problem in how you even evaluate "scouting performance".
There's also the issue of "scouting directives". When you go back and look at previous lists, even if you're working with individual lists from a specific regional scout for their area...those rankings are still subject to top-down management direction from the Owner/GM/Director of Scouting. Over the years, the Canucks have very publicly mentioned changes in scouting direction and priorities. If you're telling your scouts to go out and rank players while prioritizing a particular attribute, position, skillset, demeanor, statistical category, whatever...it's going to completely skew that list away from the natural instincts of an individual scout. The Canucks have meddled with all sorts of evaluation frameworks over the years...most of them utter garbage. But those are still going to be poisoning your "data" when evaluating a specific scout. Basically, the more closely a scout followed directives, the worse they'll probably fare in any analysis of their individual scouting ability.
Not to mention the way the game is continuously evolving and changing. What worked before isn't going to be what works in the future. Teams around the league are constantly shifting their scouting priorities as well...and fluctuating in how they value certain traits and types of players. An efficiency a scout may have found in drafting small, skilled guys back in the day isn't going to be nearly as effective today, when teams are far more gung-ho about selecting that sort of talented player than they were in an era when that type of prospect was hard pressed to make it in a league that enforced its rules very differently, to favour size and physicality.
I wouldn’t be using a GP-type criterion. I’d just be looking for smart and looking for stupid.
On the leaked 2010 draft list, our WHL scouts put Teigan Zahn, a double-overage player with 3 points, into the 3rd round, and didn’t have any of Mark Stone, Brendan Gallagher, or Radko Gudas (the eventual 3 best WHL players from that draft) rated anywhere on our draft list. It doesn’t take a complicated formula or external review to see that the WHL scouts had no idea what they were doing.
This, however, is something that doesn't require AI bullshit. It really shouldn't be that difficult to go through those lists and see where things went right and where they went wrong. You should be able to see patterns in a scout's process. You're not necessarily even evaluating the "results" per se...but the process they're using to get to those results. You also have access to these scouts for a little tete-a-tete, where you can have them explain things about their process that aren't clear from the data. That can help you better understand whether good results are due to ability and process or to dumb luck, and whether underwhelming results are a product of bad luck with a reasonable process or of being a complete nimrod with an idiotic process.
I think working through the staff and establishing those sorts of things is a far more valuable process than trying to bolt AI onto something this nuanced and complicated, where AI tends to perform pretty terribly.