I think a major problem is that the people who build these public advanced-stats models do a poor job of marketing them. As a quant person, it's not surprising to me that the people publishing these models tend to present them in black-and-white terms (Player X is good, Player Y is bad), or at least that's how they come across to people who are less versed in them.
I think understanding what these models are is important for interpreting what they mean. At their core, advanced stats combine event tracking with factual results (goals, assists, takeaways, giveaways, etc.) to say "when X happens, we are confident Y should result". If a player consistently makes plays where we statistically expect a positive result, that's a sign they're a good player.
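The "when X happens, expect Y" idea can be sketched as a toy expected-goals calculation. Everything below is a made-up illustration, not real model values: the event types and goal probabilities are hypothetical, and real models weigh far more context (shot location, game state, pre-shot movement, and so on).

```python
# Hypothetical league-average goal probabilities by shot type
# (illustrative numbers only, not taken from any real model).
EXPECTED_GOAL_PROB = {
    "slot_shot": 0.15,   # shot from the slot
    "point_shot": 0.03,  # shot from the blue line
    "rush_shot": 0.10,   # shot off the rush
}

def expected_goals(events):
    """Sum the expected value of each tracked shot event."""
    return sum(EXPECTED_GOAL_PROB[e] for e in events)

def goals_above_expected(events, actual_goals):
    """Positive = the player finished better than the model expects."""
    return actual_goals - expected_goals(events)

# A player who took 10 slot shots was "expected" to score about 1.5 goals;
# if they actually scored 3, they beat expectation by about 1.5.
events = ["slot_shot"] * 10
print(round(expected_goals(events), 2))
print(round(goals_above_expected(events, 3), 2))
```

The point of a stat like this is exactly the framing above: it doesn't say a player is good or bad outright, it says whether their results consistently exceed what the events they generate would normally produce.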
I'm a firm believer that you need a combination of the "eye test" and advanced stats to see the full picture; neither is good enough on its own. I want the stats to confirm what I've seen, and if they don't, I want to figure out why. Is there a problem with the stat? Did it fail to consider something? Am I underestimating certain player actions? In theory, everything observable in a hockey game can be turned into a stat, and those stats can be combined into a picture of positives and negatives. The greatest value is then being able to translate that picture into coaching players on how to best perform.
Simply put, there's a reason the best teams in the league are the ones that have invested heavily in advanced stats. When the margins between winning and losing are so thin, you need every micro-advantage you can find, and that's exactly what these statistical models provide.