1. How can you get an upper limit based on exact, unchanging valuations of talent when those don't exist? That also does not excuse how terrible the predictive value is. It is essentially saying it is terrible, but slightly better than correlating two random half-seasons, so let's go all willy-nilly with predictions that are just as likely to be wrong, and treat them as fact.
2. Since that is what you first said you would do, that would be good. Though of course those numbers will only see the light of day if you think they support your theories.
3. Then why were there predictions about this season based on last season?
How is cutting the time frame in half supposed to be any less amateurish?
4. Yeah. How do you get 100 data samples per season, each representing 80 games, when there are only 30 teams playing 82 games apiece? That does not add up.
5. Then explain what you did, in actual and not intentionally misleading terms.
1. It's a theoretical upper limit. It goes without saying that the theoretical upper limit will somewhat exceed the practical upper limit. Which actually assists my argument.
In any case, the predictive validity is not terrible. Adjusted Fenwick predicts 75% of the non-luck variance in future results. The underlying numbers model predicts 90% of the non-luck variance. Far from terrible. There's tonnes of utility there.
Slightly better than correlating two random half seasons? Hardly. If you think otherwise, you simply don't understand.
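To make the "percent of non-luck variance" framing concrete: one common way to read it is as the variable's squared correlation with future results, expressed as a share of the squared "luck ceiling" correlation (the best any stat could do given random variance). The specific numbers below (r_max = 0.80, r = 0.69) are purely illustrative assumptions, not figures from this thread:

```python
# Hypothetical illustration of "share of non-luck variance explained".
# r_max: the theoretical ceiling correlation imposed by luck (assumed).
# r: the observed correlation of the stat with future results (assumed).
r_max = 0.80
r = 0.69

# The stat explains r**2 of total variance; luck caps the explainable
# portion at r_max**2, so the share of *repeatable* variance explained is:
share = r**2 / r_max**2
print(round(share, 2))
```

Under these made-up inputs the share works out to roughly 0.74, i.e. in the neighbourhood of the 75% figure being debated; the point is only how such a percentage can be computed, not what the true values are.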
2. I've run the numbers in the past, and the predictive validity was virtually identical to points percentage. And no - I have no issue posting the results.
3. Frankly, I have no idea what you're getting at here. Because predictions were made about this season on the basis of last season, that somehow precludes utilizing a within-season analysis? The chain of reasoning is so bizarre that I'm forced to wonder whether you're simply being wilfully obtuse at this point.
A within-season analysis is obviously preferable as it mitigates the impact of roster turnover. Does that mean that a between-season analysis is useless? No. It's simply not as rigorous.
4. I meant to write 1000. I'll try one more time - for each season from 2007-08 to 2010-11, I randomly selected two independent sets of 40 games. I looked at the correlation between the two data sets for each of the three variables in question, in order to assess the predictive validity of each variable. I repeated the process 1000 times for each season. The figures I posted represent the average values for all four seasons.
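The resampling procedure described above (two independent 40-game halves per team, correlated across teams, repeated 1000 times, averaged) can be sketched roughly as follows. This is a minimal reconstruction, not the original code: the simulated "talent plus noise" data at the bottom is entirely made up for demonstration, and the stat being split is left generic:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_correlation(team_game_values, n_trials=1000, half_size=40, seed=0):
    """For each trial: shuffle each team's per-game values, average the first
    40 games and the next 40 games separately, then correlate the two sets of
    team averages. Returns the mean correlation over all trials."""
    rng = random.Random(seed)
    trial_rs = []
    for _ in range(n_trials):
        half_a, half_b = [], []
        for games in team_game_values:  # one list of per-game values per team
            shuffled = games[:]
            rng.shuffle(shuffled)
            half_a.append(statistics.mean(shuffled[:half_size]))
            half_b.append(statistics.mean(shuffled[half_size:2 * half_size]))
        trial_rs.append(pearson_r(half_a, half_b))
    return statistics.mean(trial_rs)

# Demo on simulated data: 30 teams x 82 games, each game = true talent + noise.
rng = random.Random(1)
teams = []
for _ in range(30):
    talent = rng.gauss(0.5, 0.05)
    teams.append([talent + rng.gauss(0, 0.1) for _ in range(82)])

r = split_half_correlation(teams, n_trials=100)
print(round(r, 3))  # positive: the stat's signal survives the random split
```

A higher split-half correlation means more of the stat's season-to-season signal is repeatable rather than luck, which is exactly what the 1000-trial averaging is meant to estimate.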