Self-Deception and Bullshit

Ryan Rusiecki

Jack has lived the better part of his life on pork sausages and bourbon, and now he is in the hospital with clogged blood vessels and liver damage. Despite unfavorable medical test results, he has recently convinced himself that his health is improving (after all, one month on a low-sodium, low-sugar diet has got to do something). Jack’s situation sounds like a paradigmatic case of self-deception. But now suppose that, as it so happens, his health really is on the mend. Would we still consider him to be deceiving himself in believing that he is recovering? 

Though self-deception is widely accepted as a prevalent “epistemic malady,” there is little agreement on its definition and nature, and even archetypal cases are contested among philosophers. Conventionally, it is considered isomorphic to standard interpersonal deception, where a person intentionally gets another person to believe some proposition p, despite knowing that the proposition is false (that is, believing ~p). In other words, self-deception is usually thought to be like deceiving someone else, except that the self-deceiver is both the deceiver and the deceived. Two paradoxes arise from this interpretation. The first is that it seems impossible for a person to hold two contradictory beliefs, p and ~p, at the same time. The second is that it seems impossible for a person to intentionally deceive themselves and actually be deceived, since the former requires some kind of awareness and the latter, some kind of unawareness. The first paradox concerns the state of self-deception, while the second concerns its process.

These two paradoxes have produced a great body of literature, which can roughly be divided under two opposing views: that of the intentionalists and that of the non-intentionalists. The intentionalists generally maintain the contradictory belief and the intention requirement by appealing to some kind of temporal or psychological partitioning (they might say that a person tries to deceive themself at t1 and is deceived at t2, or that one part of the mind does the deceiving while the other gets deceived). In doing so, they preserve the traditional model for self-deception. The non-intentionalists, on the other hand, take the paradoxes that arise from these requirements as grounds to discard modelling self-deception on interpersonal deception.

I find the requirement that the proposition p be false equally puzzling, though it has not shaped the discourse on self-deception as the other two have. The quintessential examples of self-deception certainly seem to uphold it. But what about Jack’s case? Can a self-deceiver deceive themselves into believing something that turns out to be true? 

Amelie Rorty, who is an intentionalist, thinks that it is possible. She argues that:

Self-deception need not involve false belief: just as the deceiver can attempt to produce a belief which is—as it happens—true, so too a self-deceiver can set herself to believe what is in fact true. A canny self-deceiver can focus on accurate but irrelevant observations as a way of denying a truth that is importantly relevant to her immediate projects. 

Though Rorty claims that self-deception need not hinge on the falsity of some belief p, she still suggests that p must at least impede some other truth q. An illustrative case of what she describes might be one where a writer focuses solely on the good reviews they receive “as a way of denying the truth” that most critics think that their work is awful. So, if p is “people think that my writing is good,” then the writer does not believe anything false in believing p, but in fixating on it, they obstruct the truth q that “most people think that my writing is bad.”

I am not convinced that the kind of self-deception Rorty describes as not involving false belief is in fact free of false belief. This is because I think that focusing on some irrelevant belief p in order to deny another belief q involves the self-deceiver falsely believing ~q. Further, I think that the self-deception critically takes place in falsely believing ~q, not in believing p. In the case of the writer, I would argue that they deceive themself in falsely believing that it is not the case that “most people think that my writing is bad,” and that focusing on the insignificant truth that “[some] people think that my writing is good” is rather part of the process of the self-deception.

But are there instances of self-deception that do not involve false belief and also do not forsake some other truth? Alfred Mele thinks that this is the case when someone “acquires a true belief that p on the basis of evidence, e, which does not warrant the belief that p.” A self-deceiver, therefore, need not believe something false; they might believe something true on the basis of unwarranted evidence. On this view, Jack’s case is accommodated: he believes that his health is improving based on dubious intuitions that are not supported by medical tests.

Mele’s suggestion that self-deceivers can produce true beliefs coheres with his overall account of self-deception. He denies both that self-deceivers need to be aware that their belief is false (and hence hold contradictory beliefs) and that they must intend to deceive themselves. Instead, “what generates the self-deceived person's belief that p,” on his account, “is a desire-influenced manipulation of data which are, or seem to be, relevant to the truth value of p.” 

Bertrand Russell, in a section from The Analysis of Mind, proposes a comparable interpretation of self-deception that gives conceptual authority to the self-deceiver’s desires. He claims that we can distinguish self-deception as a species of motivated belief, not by looking to the falsity of the belief or the amount of evidence against it, but to the desire that motivated it. A self-deceiver, according to Russell, has a desire for a certain belief rather than a desire for fact. (That is not to say that the evidence is entirely irrelevant. Russell contends that what differentiates self-deception from wishful thinking is that self-deception takes place in the face of contrary evidence, while wishful thinking occurs when the evidence is inconclusive.) 

I think that Mele and Russell’s interpretations are apt. Whatever distinguishes self-deception as an epistemic defect, it must have something to do with the self-deceiver’s desires: their particular contribution to the deception. If it were merely contingent on the falsity of the belief or its lack of evidence, it would not be considered cognitively insidious, but rather a product of ignorance or bad epistemic practices. Note also that this analysis of self-deception falls in the non-intentionalist’s camp. The self-deceiver need not intend to deceive themselves, knowing that their belief is false; they must merely desire a certain belief and, intentionally or not, exploit whatever limited evidence there is to maintain it. 

Interestingly, Mele and Russell’s characterization of self-deception sounds a lot like Harry Frankfurt’s “bullshit.” In his timely and entertaining essay On Bullshit, Frankfurt differentiates bullshitting from lying. While the liar seeks to deceive their audience by telling a falsehood, the bullshitter solely intends to persuade their audience to suit their personal purposes, without regard for truth or falsity. The liar, therefore, must have some grip on the truth to disavow it. The bullshitter, however, does not, and as a result tells both truths and falsehoods in accommodating their motives. 

Analogously, on Mele and Russell’s picture, the self-deceiver does not necessarily intend to believe something that they know is false. They might deceive themself into believing a truth. What is crucial is that the self-deceiver believes something simply because they want to believe it. 

If, then, self-deception is like reflexive bullshitting, perhaps its pervasiveness—assuming it is pervasive—can be similarly accounted for. While Frankfurt’s diagnosis of the prevalence of bullshit is a little far-fetched (he thinks that it is closely related to the postmodern rejection of objective truth), it offers some perceptive insights that relate to self-deception. His claim that “bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about,” for example, is comparably applicable: self-deception seems inevitable whenever circumstances compel someone to believe something without affirmative evidence. 

Though bullshit may be widespread, why does Frankfurt think it is any more harmful than lying? The danger of bullshitting is, in part, related to its potential to produce true statements: it is hard to challenge the credibility of a bullshitter who tells many truths. And something similar can be said of self-deception. If we deceive ourselves into believing something that turns out to be true, we are less likely to question our epistemic practices and more likely to develop malign ones. If Jack, for example, finds out that his health is genuinely improving, then he might attribute reliability to his biased “gut feeling.” He might be more inclined to use it to guide his beliefs in future and less inclined to trust medical opinion or other more authoritative evidence. 

While Mele and Russell’s self-deception might not appear epistemically threatening at first glance, in that it requires neither that the self-deceiver intend to deceive themself nor that they produce a false belief, it turns out that it could be surprisingly harmful. When it yields true beliefs, self-deception is more difficult to recognize and more likely to foster unhealthy cognitive habits. Of course, our old friend Jack does no great harm in believing that his health is improving based on intuition, given that his belief is true. But if he continues to privilege his instincts as a result, he is unlikely to be so lucky—especially if his instincts tell him that sausages and whiskey are back on the menu.
