One factor that can help reduce the CIE is suspicion towards the source(s) of information that may later turn out to be false. In other words, possessing a healthy skepticism (e.g. regarding the potential politicization of the source). It seems that a skeptical stance considerably reduces the CIE. Two other papers with Lewandowsky as lead author are referenced by E2010 in support of this finding. Please take a moment to truly absorb the underlined text at the bottom of this quote [my underline]:
‘The second factor that seems to reduce the CIE is suspicion toward the source of the misinformation. In the WMD studies discussed earlier, belief in the existence of WMDs in Iraq was correlated with support for the war and was especially pronounced in those people who obtained news from sources that supported the invasion (e.g., Fox News; Kull et al., 2003). Lewandowsky et al. (2005) uncovered a more direct link between suspicion and the ability to update misinformation related to the Iraq War. They operationalized suspicion as the extent to which respondents doubted the official WMD-related reasons for the invasion. Lewandowsky et al. (2005) found that, when this measure was used as a predictor variable, it explained nearly a third of the variance in people’s belief in misinformation. Moreover, once suspicion was entered as a predictor, previously striking mean differences between respondents in the U.S. and two other countries (Germany and Australia) disappeared and were, instead, found to reflect differing degrees of suspicion between those countries. Lewandowsky, Stritzke, Oberauer, and Morales (2009) extended the notion of suspicion by suggesting that it may be related to a more stable personality trait of skepticism—skeptics will generally tend to question the motives behind the dissemination of information.’
Yes, that’s right. Lewandowsky et al. are suggesting here that skepticism is a stable personality trait which makes those who possess it less subject to influence from misinformation, and more able to update their position in the light of corrections; a finding that can only mean skepticism is in fact a positive and healthy trait. Lewandowsky echoes this in L2012 [underline = section heading]:
‘Skepticism: A key to accuracy. We have reviewed how worldview and prior beliefs can exert a distorting influence on information processing. However, some attitudes can also safeguard against misinformation effects. In particular, skepticism can reduce susceptibility to misinformation effects if it prompts people to question the origins of information that may later turn out to be false.’
Well, that makes sense. But given Lewandowsky’s own position in the climate debate, plus his attempted use of psychology to paint climate-catastrophe skeptics as ‘deniers’ and way-out conspiracy theorists, this insight is highly ironic, to say the least.
In the paper ‘Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction’ by Ecker et al. (E2011; Lewandowsky is a co-author), we are further warned that the CIE cannot be wholly eliminated by any retraction method known to date:
‘…however, the finding that retractions never eliminate continued influence altogether is pervasive and robust.’
Worse still, E2011 also finds that if there is a cognitive load at the time of absorbing any retractions (i.e. the subject’s attention is divided), then they will be much less effective. It occurs to me that this opens an avenue for those who are compelled to retract (e.g. by law) yet actively seek to lessen the retraction’s impact, for instance by choosing the style of its delivery. Further, mixed messages within the retraction itself may create an ‘automatic’ cognitive load, and explicit emotive content might add to or subtract from the retraction’s effectiveness, per the sections above, as desired. One cannot always assume that retractions themselves will be neutral, even when they are supposed to be.
E2011 concludes that [my underline]:
‘The practical implications of the present research are clear: If misinformation is encoded strongly, the level of continued influence will significantly increase, unless the misinformation is also retracted strongly. Hence, if information that has had a lot of news coverage is found to be incorrect, the retraction will need to be circulated with equal vigor, or else continued influence will persist at high levels. Of course, in reality, initial reports of an event, which may include misinformation (e.g., that a person of interest has committed a crime or that a country seeks to hide WMDs), may attract more interest than their retraction. Moreover, retractions apparently need full attentional resources to become effective; hence, retractions processed during conditions of divided attention (e.g., when listening to the news while driving a car) may remain ineffective.’
I think ‘significantly increase’ in this context essentially means ‘spread strongly within society’.
So, the type three warning amounts to: ‘beware of the bias from the CIE’, which it appears can never be wholly eliminated. Further, we are told that unless specific warnings are given up front, both about uncertainty in the information (e.g. from lax fact checking, or implied from any other source) and about the possibility of being misled, the resulting bias from any information that turns out not to be wholly true will be significant. And any retraction will need to be circulated with equal vigor, otherwise the bias created will not be significantly reduced.