On Different Ways of Being Wrong
I suspect a major reason why political discourse is the way it is has to do with how we perceive what it means to be – and how we explain being – wrong. I don't mean merely in our 'polarized times' (the extent to which we are more polarized today than in earlier eras is a matter of dispute among political psychologists) but in the way people think about politics in general,
and the way it seems they always have. There are a couple of persistent and remarkable phenomena about politics that should make us a priori suspicious of how political beliefs are formed: 1) people tend to cluster into a few (mostly two) political ideologies with remarkably little intra-cluster heterogeneity; and 2) there is little transit between clusters. The first is remarkable because what one believes about abortion should not really predict what one believes about gun control, capital gains taxes, foreign policy, or the macroeconomics of business cycles. And yet, one's opinion on any one of these issues predicts one's opinion on the others remarkably well. One
popular explanation for these phenomena goes something like: well, my side is
right about everything, so that’s what all these opinions have in common. But
how does that explain why half the population is precisely
wrong about everything? Once we leave off the unserious but common explanation
that that half is just dumb, evil, or has been indoctrinated by a malicious
conspiracy, we must acknowledge that the distribution of political opinions is
highly irregular. Most intelligent people, I suspect, will further acknowledge, if pressed, that many, if not most, sincerely held political beliefs, even among those they agree with, are not held for irrefutable reasons. Few people read the empirical
literature (and fewer still read the literature from researchers of differing
opinions) on all these matters, few spend many hours thinking critically and analytically
about their views and their opponents’ views, and few aspire toward robust
internal consistency in their ideology. Even if half the population is right,
most of that half is right for the wrong reasons.
Much more likely than the simple, partisan view of the world is the view that the world – including in ways that relate to politics – is in fact extremely complicated, and that any one of us has only snippets – heavily biased (in the statistical sense of the word) snippets – of information about it, from which we are expected to make grand inferences. But in such a world, we would probably not expect the tight bimodal distribution of political opinions we observe. This distribution is not what we should see from a bunch of rational agents reaching different conclusions because each has access to a different subset of the whole data set.
Moreover, in a social/political world as complex and largely unknowable as the one we live in, we should probably expect people to have weaker priors, to change their opinions as they gain access to different subsets of data, and to move around the ideological parameter space more than they do. And I don't
merely mean we should see more dramatic conversions from conservative to
communist or vice versa; in fact, to me it seems dramatic conversions make up
an unusually large proportion of ideological shifts. I mean that we should see
more conservatives going from being anti-gun control to pro-gun control; more progressives
turning against the estate tax; more libertarians finding themselves supporting
intellectual property rights. Instead, most people remain almost completely
fixed in their views for their entire lives; and when they change, all of their
opinions change at once. To put it simply, this is not a pattern suggestive of
rational thinking.
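To make this concrete, here is a minimal toy simulation in Python. Everything in it is an illustrative assumption – the number of agents, issues, and evidence signals, and the noise level are all made up – but it captures the thought experiment: rational agents each see a different noisy subset of the evidence on several unrelated issues and form their opinions issue by issue.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_issues, n_signals = 1000, 5, 5  # illustrative assumptions

# The unknown facts of the matter on each (unrelated) issue.
true_effects = rng.normal(0, 1, n_issues)

# Each agent's evidence: a private noisy sample per issue, summarized
# by its mean -- the rational estimate given what the agent has seen.
noise = rng.normal(0, 3, (n_agents, n_issues, n_signals))
estimates = true_effects + noise.mean(axis=2)

# Opinions formed issue by issue: pro (+1) or anti (-1).
opinions = np.sign(estimates)

# Correlation of opinions across issues: the off-diagonal entries come
# out near zero, i.e., knowing an agent's view on one issue tells you
# almost nothing about their view on another.
print(np.corrcoef(opinions.T).round(2))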
What seems clear to me (much empirical
evidence seems to back this up) is that there is a great deal of
path-dependency in political ideology. I think that I have something of an
explanation, albeit a rather informal and speculative one, for why this is. The
first premise of my explanation is that politics matters deeply to one’s
self-image. This is so more for some than for others, but it is true for
most people. One’s ‘political self-image’ can take many forms. People associate
moral virtues or desirable personality traits with broad ideological moods and,
in turn, with the set of opinions that make them up. “Caring about the environment”
is intuitively associated with banning plastic bags; “tough on crime” with
stiff sentences for criminals; “individual autonomy” with low taxes, and so on.
Thinking of oneself as compassionate, or just, or self-reliant, or productive,
etc. will lead to intuitive, emotional associations with particular beliefs or
policies well before one gets the chance to analyze their merits in terms of
one’s values. A second premise of my
explanation of the peculiar nature of how we think about politics is that our
identities are socially reinforced. People who believe a particular belief
reflects compassion will regard others who hold that belief as more
compassionate, and people who have similar self-images will tend to cluster.
E.g., individuals who think of each other as compassionate will tend to
congregate, and also reward each other socially by acknowledging their
compassion – or punish each other by doubting their compassion – for sharing
the opinions they associate with the virtue they aspire to possess (or at least
imagine they possess). So too for intelligence, individualism, religious faith,
etc. This helps us explain both why people tend to converge toward highly internally
homogeneous clusters, and why there’s so little ideological movement. But
beyond identification with particular virtues, people like to think of
themselves as possessing broader virtues: being smart and being moral.
But people don’t just identify with
virtues in a temporal vacuum. They identify with the totality of their
political views up to the present, with the weight increasing with recency. This is just like morality in general. One does not, after all, merely think of oneself as a good person because one is currently trying to behave morally. If one murdered someone yesterday, one will tend to have difficulty reconciling that fact with a self-image as a 'good person' until considerable time has passed. And of course it is not just time,
but also the opportunity to behave the way a good person would so as to
convince both others and oneself that one is indeed currently a good person. I
think that people judge themselves and others much the same way when it comes
to politics. Of course, people give moral weight to who you vote for (far too
much in my opinion), who you donate money to, who or what you campaign or
advocate for. But we do so even on an
intellectual level. Obviously, if you knew someone who admitted to only just
reaching the conclusion that murder is wrong, you might not regard them with as
much reprehension as if they had been murdering people up to that point, but
you wouldn’t regard it as merely an honest intellectual mistake. So, when we
ask people to change their political opinions, we are often asking them to
admit the emotional equivalent of having committed egregious intellectual and
moral mistakes for their entire life up to that point. We are asking them to
accept that they have not been compassionate or intelligent up to this point,
but ruthless and idiotic.
But this, I contend, is probably
the wrong way to think about what it means – and what it implies – to discover
that you were wrong about something. Admittedly, mistakes made from irrationality, bias, or other intellectual or moral vices do happen, but
right now I’m questioning how much of the ‘wrongness’ we see in politics (usually
in our opponents’ views of course) is this kind of wrongness. I’m going to
posit that there are two basic ways one can be wrong (I’ll complicate matters
further later on but let’s start with two). I’ll illustrate them with
blackjack:
Type 1: The first two cards you're dealt are a 9 and a 2 (a total of 11), and you stay, and lose.
Type 2: The first two cards you're dealt are a 10 and an 8 (a total of 18), and you stay, and lose the round.
Scenario 1 is a dumb mistake – with a total of 11 you cannot bust, so taking another card costs you nothing – and you should feel ashamed. In scenario 2, you were just as wrong, but for rational reasons: staying on 18 is the sound play even though it sometimes loses. You shouldn't feel ashamed. In politics, I think there's
an argument that most mistakes – in our own thinking as well as in that of
others - should be treated as type 2 mistakes. However, we habitually treat
them as type 1 mistakes. This is somewhat intuitive, of course, even if it’s
wrong. This, I would opine, is because we attribute moral relevance to the
stakes themselves. So, if you were told that losing a game of blackjack meant someone would be killed, and you lost, many people would feel a natural inclination to blame you even if you made the rational choice, and you might feel blameworthy yourself. In politics, the stakes seem very high, so even when one reaches the logical conclusion given the limited available evidence, a mistaken conclusion still feels – to others and to oneself – like a moral failing.
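To see why the type 2 player shouldn't feel ashamed, here is a minimal Monte Carlo sketch in Python. It is a deliberately simplified model of my own – infinite deck, no dealer upcard, dealer draws to 17 – not the full rules of blackjack, but it is enough to show the asymmetry between the two plays.

```python
import random

# Simplified assumptions (mine, not full blackjack rules): infinite deck,
# no dealer upcard, dealer draws until reaching at least 17 or busting.
CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # 11 = ace

def dealer_total():
    """Draw cards for the dealer until the total reaches 17 or busts."""
    total, soft_aces = 0, 0
    while total < 17:
        card = random.choice(CARDS)
        if card == 11:
            soft_aces += 1
        total += card
        while total > 21 and soft_aces:  # demote an ace from 11 to 1
            total -= 10
            soft_aces -= 1
    return total

def win_rate_if_standing(player_total, trials=100_000):
    """Estimate how often the player wins outright by standing on a total."""
    wins = 0
    for _ in range(trials):
        dealer = dealer_total()
        if dealer > 21 or dealer < player_total:
            wins += 1
    return wins / trials

print("stand on 11:", win_rate_if_standing(11))  # wins only when the dealer busts
print("stand on 18:", win_rate_if_standing(18))  # the sound play -- and it still loses often
```

The exact numbers don't matter; the point is that standing on 18 wins far more often than standing on 11, yet both lose plenty of the time. The type 2 player made the best available choice and lost anyway.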
Of course, some people will argue
that, when someone is mistaken (especially if they still believe their
ostensibly mistaken beliefs), they should know better. The world isn't that complicated (in reality, it probably is), and even so, you can point to errors in their reasoning, to empirical studies they relied on that, if read thoroughly, are clearly flawed, to important studies they overlooked entirely, to angles they completely failed to consider. I argue, however, that even if all such criticisms are correct, the ones making them are still probably wrong. At this point, I will introduce a third type of mistake:
Type 3: You are dealt a 10 and a 7 (a total of 17), which would usually warrant staying, but you've been counting cards and, given
what you’ve seen in the deck thus far, you compute that you’re most likely to
win if you take a third card; you do so, and you still lose.
I do so to make a distinction
between this type of mistake – which I’m inclined to call a perfectly rational
mistake – and type 2 mistakes. If someone got the same first two cards, stayed,
then lost, this would be a type 2 mistake. Should you be ashamed of such a mistake? Is it an irrational mistake? I would argue that it really isn't if the
player doesn’t know how to count cards. The evidence that would’ve been
necessary to make a more informed decision was not accessible. You could point out that the evidence would’ve
been accessible if he'd learned how to use it, but then there's the question of whether it would even have been worth it to learn how to count cards. If the person
making the type 2 mistake will not gamble enough to reap the returns necessary to
make it worth it to invest the time and effort to learn to count cards, then it
is rational for him to follow a simpler, less granular set of rules than the
one followed by the card counter.
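The underlying arithmetic is worth spelling out. All the numbers below are hypothetical placeholders – the edge gained per hand, the stake, and the value of the time spent learning are assumptions for illustration – but the structure of the decision is the point: the same rule rationally tells the casual player and the professional different things.

```python
# All numbers are hypothetical placeholders, chosen only for illustration.
EDGE_GAIN_PER_HAND = 0.01   # assumed extra expected return per hand from counting
STAKE_PER_HAND = 25         # assumed dollars wagered per hand
LEARNING_COST = 2_000       # assumed dollar value of the time spent learning

def worth_learning_to_count(hands_to_be_played):
    """Learning pays off only if lifetime expected gains exceed the upfront cost."""
    lifetime_gain = EDGE_GAIN_PER_HAND * STAKE_PER_HAND * hands_to_be_played
    return lifetime_gain > LEARNING_COST

print(worth_learning_to_count(500))     # casual player: False -> simple rules are rational
print(worth_learning_to_count(50_000))  # professional: True -> counting is rational
```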
To bring the analogy back to
politics: confidently discerning the right policy position on a typical issue
requires a good deal of research and thought. On almost any major issue, there
are a good number of intelligent people who study the relevant topic
professionally who reach widely different conclusions. It is unfair even to expect people to make only type 3 mistakes. If you work a full-time job, have a
family, friends, hobbies, etc., all of which should rightly matter far more
than politics, how many studies can you read per day on estimating the social
cost of carbon? How much time can you devote to thinking about what discount rate
should be used in said estimations? Even if you believe that it’s possible to
reach a confident conclusion about the rate of technological progress over the
next 200 years, or to understand the statistical minutiae on which the soundness of teacher value-added measures depends, one cannot reasonably expect people to invest the time and effort to learn what is necessary to reach said conclusions. But to form a fully informed opinion on almost any policy issue, it is
necessary to obtain a level of knowledge and understanding of something that it
is simply not reasonable to expect almost any person to obtain. And what’s
more, to expect them to reach that level on all
major political debates! Inevitably, a normal, rational, well-intentioned
person is going to make a great many type 2 mistakes.
This probably sounds a lot like
Bryan Caplan’s concept of rational irrationality. And it is fairly similar, but
I prefer to think of it in terms of orders of rationality. Political and social
reality is governed by rules (or patterns, but we’ll call them rules for now).
But those rules are themselves governed by second order rules, which determine
when each first order rule applies or doesn’t apply; and the second order rules
may be governed by third order rules, and so on. We might describe first order
rationality as understanding which rules apply in this particular situation –
by extension understanding the whole system, or at least the rules relevant to
the situation. Second order rationality might entail knowing that, say, a certain
rule applies in most cases, and basing one’s conclusion on that alone, without
delving into figuring out whether it applies in this particular case. An
example that comes to mind: if asked to support or oppose raising the minimum
wage in your city, you might study all of the research on your city’s labor
market and minimum wage hikes in similar cities to decide if it will have a net
positive effect in your city. That’s first order rationality. Second order
rationality would entail making a decision based on what one believes the effect of a minimum wage increase to be, on balance, in general. Third order
rationality might entail reaching a conclusion based on one's assessment of the effects of price floors in general, minimum wages being a subtype of
price floors. And so on. I would argue that many of the mistakes people are inclined to make are due to intellectual habits that are rational at a higher order, because the return would not justify assembling the evidence necessary to reach a lower order of rationality.
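As a toy illustration of this hierarchy – the function, its parameters, and the evidence values below are all hypothetical, not claims about the actual minimum wage literature – one can picture the reasoner answering with the most specific evidence actually in hand, and falling back to more general rules when gathering the specific evidence isn't worth the cost:

```python
def stance_on_minimum_wage(local_effect=None, average_effect=None,
                           price_floor_prior=-1.0):
    """Answer at the lowest order of rationality the available evidence allows.

    local_effect:      estimated net effect in *this* city (first order)
    average_effect:    estimated effect of hikes in general (second order)
    price_floor_prior: general prior about price floors (third order)
    """
    if local_effect is not None:        # first order: case-specific evidence
        signal = local_effect
    elif average_effect is not None:    # second order: the typical effect
        signal = average_effect
    else:                               # third order: reason from price floors
        signal = price_floor_prior
    return "support" if signal > 0 else "oppose"

# A voter with no time for the local literature answers at a higher order:
print(stance_on_minimum_wage())                    # third order -> "oppose"
print(stance_on_minimum_wage(average_effect=0.2))  # second order -> "support"
print(stance_on_minimum_wage(local_effect=-0.5))   # first order -> "oppose"
```

Higher-order answers are coarser, and will sometimes be wrong where the local evidence would have pointed the other way; but when that evidence costs more to gather than the decision is worth, relying on the coarser rule is the rational move.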
But doesn’t all of this contradict
everything I wrote earlier? I argued above that, clearly, many people are
irrational in their political thinking, given the distribution of political
opinions. And this is sort of true. I do think many people habitually reach (or
stick with) their conclusions irrationally. But I think a major reason they do so is that, even when they are rational and moral in their process, they
are punished socially for reaching what turns out to be (or more likely, what
their peers regard as) the wrong conclusion. Moreover, people either
internalize or just habitually think of mistaken conclusions as type 1
mistakes. There is thus little social or psychological incentive to think
rationally about politics. It is common for people to reject rational but
emotionally straining conclusions. The general tendency to treat mistaken
conclusions as, well, mistakes in an intellectual and moral sense of the word is a
self-fulfilling prophecy. Ordinarily, socially punishing incorrect conclusions
might work out and ultimately discipline people into thinking rationally, but
in politics, there is almost never a definitive conclusion. There is
essentially always enough room for psychological maneuvering that people can
usually justify deferring to emotionally driven priors. As a result, socially punishing
what one believes to be ‘rationally ignorant’ conclusions will tend to
disincentivize people from using what evidence is available to them to be as
rational as is worth it for them to be, since it will almost always be easier
to reify one's erroneous conclusions to salvage one's self-image than to admit error.
To
conclude, I contend that, even if people were perfectly rational, in the limited sense of the word, differences in what evidence is available to them and the scarcity of time and effort would lead most of them to be wrong about a great many things. Though perhaps many wrong opinions in the world we live in are reached irrationally, the inevitability of wrongness should influence how we deal with it, in ourselves and in others. We should thus treat mistaken conclusions – in others and in ourselves – not as ipso facto evidence of moral or intellectual defects, but as reasonable bets that were nonetheless lost. In politics this is difficult to do and probably runs counter to our
moral intuition, given the seemingly limitless stakes. But the complexity of
the ‘game’ is so high that we can be fairly sure it is irrational and counterproductive
to attribute moral fault to one’s failure to fully understand it at any given
point in time, even if one is – likely mistakenly – quite certain one
understands it oneself.