Wrapped in Lew Papers: The psychology of climate psychologization – Part 1

Note: no original work on psychology is contributed in these posts; the conclusions of the prior papers from Lewandowsky and associated authors are taken at face value, with explanatory comment but without significant critique. No prior knowledge of psychology is required to follow the series, which is (hopefully!) broken down into a logical trail of modest steps that folks can follow. This first and shorter post introduces the list of cognitive biases.

The first warning type can be termed ‘worldview bias’. The paper Misinformation and Its Correction: Continued Influence and Successful Debiasing by Lewandowsky et al (L2012) is about the spread of misinformation, plus strategies to correct it and counteract its damaging effects. Yet the paper spends a lot of time on the role that an individual’s pre-existing worldview plays in absorbing misinformation in the first place, and likewise in resisting later concerted attempts at correction. In a short article summarizing the paper at the Association for Psychological Science online, the authors state that: “Individuals’ pre-existing attitudes and worldviews can influence how they respond to certain types of information”. By ‘information’ here they mean both the original misinformation and the corrected information. L2012 contains numerous passages about how one’s worldview can seriously bias the processing of incoming information, including the rejection of information that challenges the held worldview, the retention of misinformation that supports it, and even (via a ‘backfire’ effect) the perception of misinformation corrections as a justification or reinforcement of the held worldview. For example:
∙∙∙∙∙∙It is possible that one’s worldview forms a frame of reference for determining, in Piaget’s (1928) terms, whether to assimilate information or to accommodate it. If one’s investment in a consistent worldview is strong, changing that worldview to accommodate inconsistencies may be too costly or effortful. In a sense, the worldview may serve as a schema for processing related information (Bartlett, 1977/1932), such that relevant factual information may be discarded or misinformation preserved.
And also this:
∙∙∙∙∙∙Thus far, we have reviewed copious evidence about people’s inability to update their memories in light of corrective information and have shown how worldview can override fact and corrections can backfire.

As long as this is taken in a lightweight form (worldview likely doesn’t have an overwhelming influence in all cases; there’s evidence that, within some topic domains at least, particular worldviews introduce only modest bias), the position seems both consistent with other literature and not particularly controversial. Indeed it seems like common sense when considered as part of the statement below, also from L2012 (if we take worldview as a subset of ‘assumed truths’):
∙∙∙∙∙∙As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974).
While it appears that more work on the underlying psychological mechanisms is required, and no doubt there are challenges to this position, let’s just take it as read here that L2012 is right and that one’s worldview has a strong influence (the paper is emphatic in various places) on the information one does or does not accept as ‘true’. Whatever the reader’s opinion, the important thing in this post is the opinion of psychologists, with Lewandowsky and co-authors as our main example 🙂 . So the type one warning amounts to: ‘beware of the bias from one’s worldview’.

Type two concerns incoming information that contains a significant emotional component. The paper Theoretical and empirical evidence for the impact of inductive biases on cultural evolution by Griffiths et al (G2008, one of the other authors is Lewandowsky), includes this paragraph:
∙∙∙∙∙∙Sperber (1996, p. 84) states that ‘the ease with which a particular representation can be memorized’ will affect its transmission, and Boyer (1994, 1998) and Atran (2001) emphasize the effects of inductive biases on memory. This idea has some empirical support. For example, Nichols (2004) showed that social conventions based on disgust were more likely to survive several decades of cultural transmission than those without this emotional component. This advantage is consonant with the large body of research showing that emotional events are often remembered better than comparable events that are lacking an emotional component (for a review, see Buchanan 2007).

This quote is a little hard to grasp outside the context of the paper, but it says that (mental representations of) social conventions or cultural concepts with an emotional component are easier to memorize, which appears to result in their being retained for longer and transmitted more effectively to others in society than concepts lacking the emotional load. Social conventions based on ‘disgust’ are the example explored by Nichols, but other literature referred to in this quote (and elsewhere) suggests that the same effect occurs for a range of different emotive stimuli that can be carried within generic information. The word ‘advantage’ applied to the Nichols example presumably refers to the enhanced prospering of the concept itself, and perhaps also to the fact that ‘disgust’ would typically accompany concepts deemed unhealthy for society. So in short, concepts that carry an emotive load will possess an arbitrary bias in their favor.

This same point reappears in L2012, which posits (quoting Peters et al in support) that because an emotive load strongly affects the prospects of generic information being passed on, the same should hold for misinformation, the main theme of L2012. That is, an emotive load should affect the degree to which misinformation both spreads and persists.
∙∙∙∙∙∙Concerning emotion, we have discussed how misinformation effects arise independently of the emotiveness of the information (Ecker, Lewandowsky, & Apai, 2011). But we have also noted that the likelihood that people will pass on information is based strongly on the likelihood of its eliciting an emotional response in the recipient, rather than its truth value (e.g., K. Peters et al., 2009), which means that the emotiveness of misinformation may have an indirect effect on the degree to which it spreads (and persists). Moreover, the effects of worldview that we reviewed earlier in this article provide an obvious departure point for future work on the link between emotion and misinformation effects, because challenges to people’s worldviews tend to elicit highly emotional defense mechanisms (cf. E. M. Peters, Burraston, & Mertz, 2004).

There is an important extra observation at the end of this quote regarding the inter-relatedness of emotive content and worldview. To some extent, the emotive load is in the eye of the beholder. Information (or misinformation) that strongly challenges a specific worldview may produce an emotive response in one individual but not in another, arousing in the former ‘highly emotional defense mechanisms’. What the quote doesn’t say is that this is only one side of the effect. Information (or misinformation) that powerfully promotes a specific worldview may likewise produce a strong emotive response, e.g. euphoria, self-justification, or enhanced feelings of security and identity [with worldview-aligned social entities]. While no doubt future work is required, as the paper suggests, the implication is that these implicit emotional responses will add to or subtract from any explicit emotional content, aiding transmission still further in the case of worldview alignment, and attenuating it in the case of a worldview clash (and possibly, for the latter, spawning the transmission of countering information). This would cause a social amplification of any polarization that already existed regarding the perceived ‘truth’ of the original information. (Note: I put inverted commas on the word truth only to remind folks that for many speculative and / or complex concepts not examined long in retrospect, there would already be some fuzziness and interpretation regarding the level of truth, even without any emotional interference.)

I mention in passing that this part of the above quote: ‘But we have also noted that the likelihood that people will pass on information is based strongly on the likelihood of its eliciting an emotional response in the recipient, rather than its truth value (e.g., K. Peters et al., 2009)’, is a major contributor to the spread of memes; in terms of narrative success, emotive punch is rewarded more than veracity. The lens of memetics is extremely useful for examining this whole area, but I digress and we are staying with Lewandowsky’s work here.

So the type two warning amounts to: ‘beware of the bias from emotive content’, to which we might add a rider for any particular piece of information: is there implicit emotional content, arising essentially via a powerful type one reaction, which may enhance or attenuate any explicit emotive bias?

Next page for more…
