- Problem: direct assessment cannot reliably distinguish ‘realists’ from ‘denialists’
Conspiracy theories and logical fallacies often abound on both sides of a long-contested domain that has major social significance, so bullets 1 and 5 are unreliable criteria for who, overall, is ‘denying’ reality. Major social conflicts attract individuals with all sorts of beliefs and motivations; some of these people will back the evidential side, i.e. the ‘right’ side, yet for the wrong reasons and/or deploying the wrong arguments. Their impact on the contest may be modest, as is often the case for folks with theoretical rather than emotively driven motivations[2], or strong, as is typical of folks who’ve slipped into noble cause corruption[3]. In some cases, cultural alliance effects will also produce a much more systemic promotion of the ‘right’ side via culturally driven rather than evidentially driven arguments.
So, consider a contested issue featuring a largely evidential position, E, opposed mainly by religious believers. The religious side has a strong cultural alliance with a political party, X, which is hence pulled in on that side. This sparks a reaction whereby X’s political opponent, Z, weighs in on the evidential side, yet by default not with evidential arguments but with its regular range of cultural weapons, such as ‘folks who support the X party (or by association oppose E) have inferior brains’; this range will typically include some conspiracy theory, logical fallacies and so on. Hence the ‘right’ side ends up inextricably tangled with various cultural promotion and defensive behaviors (footnote 6 illustrates this for the climate domain).
Due to these various effects (plus another immediately below), not only will conspiracy theories and logical fallacies arise on both sides of a socially contested domain; the same goes for cherry picking and false experts. The only underlying criterion that D&M2009 recommends, to which we might turn for some guidance on who is who within a contest featuring such mirrored behaviors, is that of a ‘dominant’ scientific consensus: the paper claims that the ‘right’ side must be the consensus side. Yet there is no acknowledgement of the difference between a scientific consensus and a social consensus, or of the fact that the latter can pose as the former[7]. Influence from an enforced social consensus increases the chances that scientists too will straddle the rift between sides, or may even end up mostly on the ‘wrong’ side. Authoritative, apparently settled science has been overturned many times[8]; scientists and policy makers are not magically separate from society, and like everyone else they are subject to the dynamic bias patterns that evolve across their society, for instance emotional bias regarding climate issues.
- Problem: direct assessment cannot escape domain bias
Considering the above effects, one would expect most cherry picking to be the inadvertent result of bias, and probably subtle in nature. Yet even for more blatant cases, in a complex domain mired in claims and counter-claims to the nth degree, it can be difficult to correctly identify cherry-picked data without fairly extensive domain knowledge. Likewise, the identification of ‘discredited papers’ is a subjective criterion: it depends upon believing those who did the discrediting, and their reasons for doing so, which implies a prior judgment that can only be based upon reasonable domain knowledge (and/or bias). Indeed, the very allegation of cherry picking could itself be a cherry pick, if for instance it presents only an unfavorable part of the original case. So the criteria that reveal evidence choices as cherry picks are themselves domain dependent, which tends to thwart objectivity.
It is likewise regarding experts: to reliably know whether an expert is ‘false’ or not requires domain knowledge. What they are paid and by whom is not on its own a definitive criterion, or even a major one; ideological bias often motivates more than money, though the two can also be aligned. Navigating the often labyrinthine funding paths within a contested domain can be almost as complex as evaluating the direct domain evidence; the public certainly don’t have time for this, and interpretation of funding-network influences is itself subject to bias and polarization. For a major contested domain one expects opposing networks, and there is no simple rule of thumb for interpreting them, such as ‘scientists paid by industry are less reliable’. Via the grant-funding circus, government scientists and university employees have just as much skin in the game as industry has via market influence. It’s also the case that where strong culture is present in a contested domain (absent which there wouldn’t likely be ‘denialism’ anyhow), the more domain knowledgeable individuals are, the more polarized they are too[9]. Hence advice sought from further up the knowledge chain, on say cherry picking or anything else, is potentially a slave to that polarization, though whether the effect continues up to the level of true ‘experts’ would be hard to determine, and also depends on how domain expertise is defined. There are many accounts of highly polarized experts, albeit anecdotal ones.
So, absent some novel methodology (D&M2009 does not suggest any), we face a fatal recursion: correctly identifying cherry picking and false experts implies a reasonably deep and yet also unbiased domain knowledge. This in turn means already knowing, despite the confounding factor of a highly polarized environment, which side is in fact ‘speaking to truth’ and which is ‘denying’; yet this is essentially what we were meant to be finding out in the first place. In other words, the domain knowledge needed to investigate these characteristics brings with it domain bias, and that bias may lead to erroneous judgment.