Actually, if you want a theory that says that the virus isn't going to be as bad as we fear because of flaws in the current statistical analysis, iceberg theory is old and busted. The new hotness is variable susceptibility, and I'm pretty intrigued by it. It's a long way from proven, but I think it deserves more consideration than it's being given.
Our current models didn't predict, and can't explain, why infection rates in different areas sometimes plummet and sometimes stay stubbornly plateaued. A great example is New York vs. California. New York's infections have dropped like a rock over the last month and a half, while California's have hovered around the same numbers or grown slightly. Both have similar political climates, and both were locked down for roughly similar lengths of time in roughly similar ways.
An obvious difference between the two is infection levels. New York was one of the hardest-hit places in the world, reaching ~20% of the population infected. California only had a small fraction of that. But the popular understanding of herd immunity (a threshold of roughly 1 - 1/R0, which is well over half the population for the R0 estimates in circulation) says that 20% shouldn't be enough to make that big of a difference.
Enter variable susceptibility. Most of the projection models in use are SIR models, which stands for Susceptible, Infected, Recovered. They lump everyone into those three compartments and treat everyone within each compartment as identical.
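For concreteness, here's a minimal sketch of the kind of SIR model these projections are built on. Simple Euler integration; beta and gamma are illustrative values chosen to give R0 = 3, not fitted to any real outbreak:

```python
# A minimal SIR model. Everyone is in exactly one of three compartments,
# and everyone within a compartment is treated identically.
# beta = transmission rate, gamma = recovery rate, R0 = beta / gamma.

def simulate_sir(beta=0.3, gamma=0.1, i0=0.001, days=300, dt=0.1):
    """Run the epidemic to completion; return final (S, I, R) fractions."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # S -> I
        new_recoveries = gamma * i * dt      # I -> R
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

s, i, r = simulate_sir()
print(f"final: S={s:.3f}, I={i:.3f}, R={r:.3f}")
```

With these toy numbers the epidemic burns through the large majority of the population before dying out, which is the standard zero-variability picture the post is arguing against.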
But that's a simplifying assumption (which models sometimes have to make; you can't model reality perfectly, and the point of a model is to simplify), and it may turn out to be an important mistake. We know for a fact that not all susceptible people are identical. For a lot of reasons, some people are more vulnerable to infection and/or death than others.
Some common factors that we know or strongly suspect influence susceptibility: level of social contact in a person's normal lifestyle, age, gender, blood type, and strength of the immune system. There are almost certainly other influences we haven't discovered yet (some intriguing candidates include recent infection with other coronaviruses and exposure to region-specific childhood vaccines).
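To see the mechanism, here's a hedged toy version of the idea: split the susceptible pool into just two groups instead of a continuum. The 4x susceptibility ratio and the other parameters here are made up purely for illustration. The high-susceptibility group gets infected and immunized disproportionately early, so the average susceptibility of whoever remains keeps dropping as the epidemic proceeds:

```python
# Two-group SIR sketch: half the population is 4x more susceptible per
# contact than the other half. All parameters are illustrative, not
# fitted to COVID data.

def simulate_two_group(beta=0.12, gamma=0.1, ratio=4.0, days=400, dt=0.1):
    """Return the final attack rate (fraction ever infected) per group."""
    s_hi, s_lo = 0.4995, 0.4995          # susceptible fraction in each group
    i, r = 0.001, 0.0                    # infected / recovered, pooled
    for _ in range(int(days / dt)):
        inf_hi = ratio * beta * s_hi * i * dt  # high-susceptibility infections
        inf_lo = beta * s_lo * i * dt          # low-susceptibility infections
        rec = gamma * i * dt
        s_hi -= inf_hi
        s_lo -= inf_lo
        i += inf_hi + inf_lo - rec
        r += rec
    return 1 - s_hi / 0.4995, 1 - s_lo / 0.4995

attack_hi, attack_lo = simulate_two_group()
print(f"attack rate: high-susceptibility {attack_hi:.0%}, low {attack_lo:.0%}")
```

A real model would use a continuous distribution of susceptibilities rather than two buckets, which is what the linked preprint does, but the qualitative effect is the same: immunity accumulates fastest exactly where it does the most good.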
Like I said in my previous post, one study doesn't prove anything, but here's one that I think should have gotten more notice and been incorporated into further work:
https://www.medrxiv.org/content/10.1101/2020.04.27.20081893v1.full.pdf
It shows that if you include variable susceptibility in your models, the threshold for herd immunity drops. Under some sets of assumptions it drops a *lot*, to the point where thresholds as low as 10% become plausible. Given that we have areas with infection rates reaching 30%, we know it probably isn't that low. But given how we've seen the virus plummet in places that reached 20-30% infection, I think it's reasonable to guess that the threshold is a lot lower than the 70-80% that zero-variability models imply.
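As I read the preprint, its key closed-form result for gamma-distributed susceptibility is that the herd immunity threshold falls from the classical 1 - 1/R0 to 1 - (1/R0)^(1/(1+CV^2)), where CV is the coefficient of variation of individual susceptibility. Plugging in illustrative numbers (R0 = 3 is a commonly cited ballpark; the CV values below are examples, not estimates):

```python
# Herd immunity threshold: homogeneous vs. gamma-distributed susceptibility.
# The heterogeneous formula is the closed form reported in the linked
# preprint; R0 and the CV values here are illustrative, not estimates.

def hit_homogeneous(r0):
    """Classical threshold when everyone is equally susceptible."""
    return 1 - 1 / r0

def hit_gamma(r0, cv):
    """Threshold when susceptibility is gamma-distributed with
    coefficient of variation cv."""
    return 1 - (1 / r0) ** (1 / (1 + cv ** 2))

r0 = 3.0
print(f"homogeneous: {hit_homogeneous(r0):.0%}")   # 67%
for cv in (1.0, 2.0, 3.0):
    print(f"CV = {cv:.0f}:      {hit_gamma(r0, cv):.0%}")
```

At CV = 1 the threshold is already down around 42%, at CV = 2 around 20%, and at CV = 3 it lands right around 10%, which is where the "as low as 10%" figure comes from. The whole question is what CV actually is for this virus, and that's not something anyone has pinned down.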
That would be very good news, because it would mean the virus burns itself out quicker than we feared. This would *not* mean that it isn't very serious, or that locking down to flatten the curve and buy ourselves time was a bad idea. But it would mean that as we come out of lockdowns with effective reproduction numbers staying stubbornly at or above 1, we aren't looking at death totals in the millions in the US.