Tghu Verd said:
possibly because the underlying theories do not have the embedded repeatability of physics?
To the extent they even have underlying theories, yes.
Tghu Verd said:
I do wonder whether "one’s own common sense should be a good guide in evaluating their [scientists] claims" isn't a disconnected concept for many people, and esp. non-experts.
Bear in mind that I suggested that specifically for the case where an ordinary lay person, not well versed in the specifics and jargon of the field, is trying to evaluate claims made by experts in that field. Obviously the ordinary lay person can't critique the details of those claims and the theories behind them, since doing that would require being well versed in the specifics and jargon of the field. (Note that this latter is not exactly the same as having academic credentials in the field, though there is of course much overlap.) But it is still possible for the ordinary person to look at the actual predictive track record and apply common sense to it, as well as to how the claims are presented (and I gave specific examples to illustrate how particular claims have been presented to the public).
Tghu Verd said:
having read many PF posts with references to peer-reviewed papers, I am often still in the dark about the predictive track record and the uncertainties presented, let alone whether the papers even make sense
Peer-reviewed papers, ironically enough given PF's rules about sources, are often not the best places for a lay person to get that information, since they are typically written for other experts, not for the lay person. In many cases that's fine, because what is being discussed in the papers does not have any direct relevance to public policy questions that a lay person, as a citizen, might want to have an opinion on. For cases where the science does have such direct relevance, part of the duty of the scientist, as I've said, is to accurately communicate the current state of knowledge, including all uncertainties; I should have added that this needs to be done in terms the lay public can understand, i.e., by distilling the details and jargon of the field into a predictive track record that a lay person can reasonably evaluate. (The example of astronomers predicting the future trajectories of asteroids is a good one here.)
Tghu Verd said:
I agree with your gate-keeping idea
I'm not sure I proposed a gate-keeping idea. Can you be more specific?
Tghu Verd said:
I remain unsure whether being an expert in one field necessarily makes it easier to critique the validity of claims made by experts in another field?
I think it depends on the fields; I'm not sure there is any useful general rule.
Tghu Verd said:
if experts struggle, how do laypeople know which 'science' to listen to?
Unless there is some reason why laypeople need to know, such as a public policy question that needs to be decided, laypeople should not have an opinion at all. That's difficult for many lay people to accept, but it's the only rule that makes sense.
If there is a public policy question that needs to be decided, then there are, as far as I can see, three possibilities:
(1) Scientists are able to present a solid predictive track record that stands up to scrutiny. This is the easy case: take the scientists seriously. (An example would be astronomers predicting the trajectories of asteroids.)
(2) Scientists are unable to present any significant predictive track record at all, or if they do present one, it does not stand up to scrutiny. This is a harder case than the first one, but the answer is still pretty clear, though disappointing: science is simply unable to provide any useful guidelines for public policy in this area. So any public policy decision in this area will need to be made on other grounds entirely. (An example would be something like global poverty: nobody really has a good predictive track record on how to address poverty. So whatever public policy decisions we make about it cannot rely on any significant scientific guidelines. Which, unfortunately but unavoidably, means that such decisions tend to be ad hoc and the resulting policies don't work very well.)
(3) There are multiple disputing communities of scientists, none of which has a predictive track record compelling enough to overcome the others. This is the hardest case, and I don't think a general rule can be given about it, except that, as in the second case, the grounds for whatever public policy decision gets made will end up not being based on the relative scientific merits of the various proposals. (An example would be the decisions various countries have made about funding high energy physics experiments, such as the cancellation of the SSC in the US in the 1990s vs. the European decision to fund the LHC. Those decisions were made in the light of disputes within physics about the status of string theory vs. other approaches to going beyond the Standard Model of particle physics, and also about the relative status of high energy physics vs. other subdisciplines, such as condensed matter physics, that many physicists think are underfunded.)