The full explanation is here if you want to read it:
Woodblog: WoodMoney: A new way to figure out quality of competition in order to analyze NHL data
But to summarize: it sorts players into Elite/Middle/Bottom 6 tiers through a combination of ice time, points, and CF% rel. Players who are good in all three metrics are "elite"; the ones who are either not getting ice time, not scoring, or are terrible play drivers are "gritensity".
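To make the idea concrete, here's a toy sketch of that kind of tier split. The actual method and cutoffs are in the linked Woodblog post; the thresholds and the "clear all three bars" rule below are invented purely for illustration.

```python
# Hypothetical sketch of a WoodMoney-style tier classification.
# The real cutoffs are in the linked write-up; these are made up.

def classify_forward(toi_per_game, points_per_60, cf_pct_rel,
                     toi_cut=15.0, pts_cut=1.8, cf_cut=0.0):
    """Bucket a forward into Elite / Middle / Gritensity.

    A player must clear all three bars (ice time, scoring, play
    driving) to be Elite; clearing none of them means Gritensity.
    """
    strong = [toi_per_game >= toi_cut,
              points_per_60 >= pts_cut,
              cf_pct_rel >= cf_cut]
    if all(strong):
        return "Elite"
    if any(strong):
        return "Middle"
    return "Gritensity"

print(classify_forward(18.5, 2.3, 1.2))   # clears all three bars -> Elite
print(classify_forward(11.0, 0.9, -3.0))  # fails all three -> Gritensity
```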
As for the quality of competition argument not mattering, Daniel Wagner wrote a great piece about it here:
Why Quality of Competition doesn’t matter to analytics experts anymore
But to summarize that: the distribution of competition faced is pretty tight. Between the "toughest competition" guys and the "easiest competition" guys there just isn't a huge gap in the quality of opponents they actually see, at least on a macro level.
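A quick toy example of why those numbers cluster: a player's quality of competition is basically a TOI-weighted average of opponent quality, and because everyone faces a broad mix of opponents, the averages bunch up. All numbers below are invented for illustration, not real data.

```python
# Toy illustration of why raw QoC numbers cluster together.
# Invented opponent quality (CF%) for three tiers of competition:
tier_quality = {"elite": 54.0, "middle": 50.0, "bottom": 46.0}

# Invented ice-time shares against each tier for two deployments:
deployments = {
    "tough-minutes guy": {"elite": 0.40, "middle": 0.35, "bottom": 0.25},
    "sheltered guy":     {"elite": 0.28, "middle": 0.35, "bottom": 0.37},
}

for name, shares in deployments.items():
    # QoC as a TOI-weighted average of opponent quality
    qoc = sum(tier_quality[t] * s for t, s in shares.items())
    print(f"{name}: TOI-weighted opponent CF% = {qoc:.2f}")
```

Even with a fairly extreme gap in deployment, the two averages land about a point apart, which is the "everyone plays against everyone" effect in miniature.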
As for the "how did they make the adjustments", they have a pretty long write-up on RAPM that will do a much better job explaining how it works than I ever could, so I'll link that here:
Reviving Regularized Adjusted Plus-Minus for Hockey
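For a rough feel of what that write-up describes: at its core, RAPM is a ridge (L2-regularized) regression over shifts, where each row is a shift, the columns flag which players are on the ice, and the target is some rate stat. The sketch below uses random data and leaves out the real model's extra terms (score state, zone starts, etc.); it only shows the regression mechanic.

```python
import numpy as np

# Minimal sketch of the idea behind RAPM-style models: ridge regression
# over shifts. Each row is a shift; each column is a player indicator
# (+1 on-ice for, -1 on-ice against, 0 off-ice); the target is a rate
# like shot attempts per 60. Data here is random, for illustration only.

rng = np.random.default_rng(0)
n_shifts, n_players = 500, 40

X = rng.choice([-1, 0, 1], size=(n_shifts, n_players), p=[0.3, 0.4, 0.3])
true_talent = rng.normal(0, 1, n_players)
y = X @ true_talent + rng.normal(0, 5, n_shifts)  # noisy shift outcomes

lam = 10.0  # regularization strength: shrinks estimates toward 0
# closed-form ridge solution: (X'X + lam*I)^-1 X'y
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_players), X.T @ y)

# Because every shift includes teammates and opponents as columns, the
# regression splits credit among everyone on the ice, which is how
# deployment gets "adjusted for" instead of dumped on one player.
print("estimated impact of player 0:", round(float(beta[0]), 3))
```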
As for your Yandle vs Suter exercise:
[Attachment 342201: Yandle vs. Suter adjusted results]
But I'll also post the prior 3 seasons because Suter had a rough year due to injuries.
[Attachment 342204: prior three seasons]
As you can see, Suter had much better adjusted numbers than Yandle, even though Yandle played easier minutes and Suter faced really tough ones. Just because Suter played tough minutes doesn't automatically make his numbers bad; in fact, it's the opposite! Even though his minutes were among the toughest in the league, and he'd likely post better raw numbers in easier minutes, the model recognizes that Suter's deployment impacts his results and adjusts accordingly, just like it adjusts Yandle's results for his easier minutes. It's not impossible to do well in tough minutes: most of the top players according to RAPM do face tough competition and are pretty much universally recognized as great players. Basically, getting tough minutes doesn't excuse shitty results, because there are plenty of players who get minutes similarly tough to Lindell's and don't get caved in.
Just to further illustrate that there's not much of a difference between Lindell's and Klingberg's minutes, I'll just post their overall splits against each tier of competition.
| | TOI% vs Elite | TOI% vs Middle | TOI% vs Bottom 6 |
| --- | --- | --- | --- |
| Lindell | 39 | 35 | 26 |
| Klingberg | 35 | 35 | 29 |
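Running the numbers from that table shows just how small the deployment gap is:

```python
# Deployment splits (TOI% vs each tier of competition) from the table above.
lindell   = {"elite": 39, "middle": 35, "bottom": 26}
klingberg = {"elite": 35, "middle": 35, "bottom": 29}

# Per-tier difference in TOI% (Lindell minus Klingberg)
gaps = {tier: lindell[tier] - klingberg[tier] for tier in lindell}
print(gaps)                                 # {'elite': 4, 'middle': 0, 'bottom': -3}
print(max(abs(g) for g in gaps.values()))   # largest gap: 4 points of TOI%
```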
So to summarize:
1. Lindell and Klingberg play similar minutes, and when apart their minutes aren't that radically different. Klingberg does better without Lindell, and Lindell tanks without Klingberg. Both things happen to such a large degree that the slight change in competition can't be the only reason.
2. Competition doesn't matter too much, because at the end of the day, everyone sort of just plays against everyone. Yes, there are differences in how difficult the competition is, but the differences aren't super large.
3. Using adjusted metrics, Lindell looks terrible, meaning that his poor results can't just be explained away by his usage.
4. It's not impossible to have good adjusted results with hard minutes (as demonstrated by Ryan Suter) and it's also not impossible to have shitty results despite easy usage (as demonstrated by Keith Yandle). That leads me to believe that the model is doing something right. If all the players with tough minutes had shitty results and all the players with easy minutes had terrific results, then there'd be a problem. To me, this passes the sniff test.
5. Given all of the above, it's fair to say that Lindell is not a very good defenseman, and his shitty results can't just be waved away because of the minutes he plays.