The discussion with Vanadium 50, along with an article in today's NY Times ( http://www.nytimes.com/reuters/2009/12/14/health/news-cancer-radiation.html?_r=1&scp=5&sq=ct%20scan&st=cse ), prompted me to go back and study whether I'd been getting my information from biased sources.
The following two short papers turn out to be pretty useful, because they advocate opposite points of view about LNT and hormesis, and they're fairly recent. Each explicitly refers to and criticizes the other.
Tubiana et al., "The Linear No-Threshold Relationship Is Inconsistent with Radiation Biologic and Experimental Data," Radiology 251, 13-22 (April 2009), doi:10.1148/radiol.2511080671
Little et al., "Risks Associated with Low Doses and Low Dose Rates of Ionizing Radiation: Why Linearity May Be (Almost) the Best We Can Do," Radiology 251, 6-12 (April 2009), doi:10.1148/radiol.2511081686
Unfortunately both are behind a paywall, so I don't know how accessible they'll be to other PF users.
Essentially there seem to be three ways of knowing about the effects of low doses of ionizing radiation:
(1) reasoning from knowledge about cell biology
(2) epidemiological studies in humans
(3) animal studies
#1 is not very helpful because it's purely theoretical, and in any case the current theoretical understanding of cancer is very poor. You can cook up arguments based on cell biology and cancer biology to say that the graph of risk versus dose should be anything you like: linear, concave up, concave down, threshold, no threshold, etc.
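To make the point above concrete, here's a minimal sketch of four of those competing functional forms for excess risk versus dose. The functions and parameter values are entirely hypothetical, chosen only to illustrate that each shape is easy to write down; none of them comes from either paper.

```python
# Hypothetical dose-response models for excess cancer risk as a function
# of dose (mSv).  Parameter values are arbitrary, for illustration only.

def linear_no_threshold(dose, slope=1e-5):
    """LNT: excess risk proportional to dose, however small the dose."""
    return slope * dose

def threshold(dose, slope=1e-5, d0=100.0):
    """Threshold model: no excess risk below a threshold dose d0."""
    return slope * max(dose - d0, 0.0)

def linear_quadratic(dose, a=1e-5, b=1e-8):
    """Concave-up: a quadratic term that dominates at higher doses."""
    return a * dose + b * dose**2

def hormesis(dose, slope=1e-5, d0=100.0):
    """Hormesis: negative (beneficial) excess risk below d0."""
    return slope * (dose - d0)

for d in (10, 100, 1000):
    print(d, linear_no_threshold(d), threshold(d),
          linear_quadratic(d), hormesis(d))
```

All four curves agree to within the error bars of typical low-dose data, which is exactly why arguments from cell biology alone can't pick a winner.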
#2 is also not very helpful, because most of the studies are not sensitive enough to tell us anything about the doses that are relevant for policy or for individual decision-making. The available data sets don't combine large enough populations with low enough doses to be very useful, and they are often very difficult to interpret because not all the variables can be controlled. For instance, the Japanese bomb survivors are a unique source of information, but these people were also exposed to all kinds of nasty carcinogenic chemicals from the burning of their cities. (Cf. concerns about carcinogens in New York after 9/11; firefighters have a higher risk of cancer.)
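A back-of-the-envelope power calculation shows why the sensitivity problem above is so severe. Assume, for illustration only, a baseline lifetime cancer risk of roughly 40% and an LNT risk coefficient on the order of 5% per Sv (round figures I'm assuming here, not numbers from either paper); the standard two-proportion sample-size formula then gives the cohort sizes needed to detect the predicted excess.

```python
# Rough sample-size estimate for detecting the excess cancer risk that
# LNT predicts at a low dose, using the standard normal-approximation
# formula for comparing two proportions.
# Assumed round figures: baseline lifetime risk ~40%, ~5e-5 excess risk
# per mSv, 5% significance (z=1.96), 80% power (z=0.84).

def required_cohort_size(dose_msv, baseline=0.40, risk_per_msv=5e-5,
                         z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per arm (exposed vs. control)."""
    delta = risk_per_msv * dose_msv        # excess risk predicted by LNT
    p_bar = baseline + delta / 2.0         # average proportion in the two arms
    return (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2

for dose in (1, 10, 100):
    print(f"{dose:>4} mSv -> ~{required_cohort_size(dose):.1e} subjects per arm")
```

Under these assumptions, detecting the excess at 10 mSv (a typical CT scan) already takes on the order of ten million subjects per arm, and the requirement grows as the inverse square of the dose, which is why the epidemiology is inconclusive at exactly the doses we care about.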
So Little and Tubiana examine #1 and #2 and reach opposite conclusions, because the data are so weak. And when I say "opposite conclusions," neither paper's conclusions are really strong ones. They both say things like "the data are consistent with LNT" or "the data are consistent with a threshold," often referring to the same data. In other words, the data simply don't test LNT.
#3, animal studies, is the source of information that really has the ability to test LNT, and it uniformly demonstrates that LNT is wrong, and that doses of less than about 10^4 mSv have a beneficial effect (see http://www.radpro.com/641luckey.pdf for a review). Unless we believe that there is something drastically different about the cancer biology of humans compared to the cancer biology of lab rats, it would seem logical to me to use this as our source of information about effects in humans.
Why, then, does the debate in medical journals seem to focus so much more on #1 and #2, which are inconclusive? This editorial http://www.sciencemag.org/cgi/content/summary/302/5644/378 gives some interesting insight:
But even if animal data and new mechanistic studies give support to the hormesis theory, nobody thinks BEIR-VII will abandon the current linear model of risk just yet. That would be a "complete shift" for public health, says Preston, adding: "If you've got human data, you use it."
This tends to reinforce my belief that behavior and public policy on this topic are mainly determined by culture ('50s horror movies about radiation, etc.), psychology (people's inability to reason rationally about risk), and politics (NIMBY) rather than by scientific evidence.