Wednesday, February 08, 2012

The Half-life of Leaders and the Half-life of Regimes


Thinking back on the last couple of posts, two questions arise naturally. First, there is the question of the survival of regimes in general, not just democracy: if most democracies die within 15 years or so, what is the median duration (the “half-life,” if you will: the time it takes for half of them to be gone) of other regimes? And second, there is the question of the relationship between the half-life of regimes and the half-life of leaders: do regimes whose leaders tend to have longer half-lives also have longer half-lives themselves? My interest in these questions stems from my current research on legitimacy: my sense is that legitimacy matters much less to the survival of large-scale patterns of political power and authority than people usually think, so I’m interested in trying to figure out whether there are systematic differences in survival between more and less “legitimate” regimes and other political structures. So this is another exploratory post, with lots of graphs.

How do we measure the duration of non-democratic regimes relative to democratic regimes? Though democratic regimes are not always straightforward to identify, non-democratic regimes come in a much wider variety of forms – from hereditary, absolute monarchies to single party regimes and multiparty hybrids, and some of these forms shade gradually into one another over the course of many years. (For a sense of this variety, consider the differences between Mexico before the 1990s under the PRI, where presidents succeeded each other with clockwork regularity every six years and a lively opposition existed but could never win the presidency; North Korea today, where opposition is non-existent and succession is controlled by a tiny clique; and Mubarak’s Egypt.) To get a handle on this question, I’m going to use the Polity IV dataset, which codes “authority characteristics” in all independent countries (with population greater than 500,000 people) from 1800 to 2010. (I’ve been convinced by Jay Ulfelder’s work that the DD dataset I used in my earlier post is not appropriate for studying comparative regime survival, because it codes as dictatorships certain democracies where alternation in power has not occurred, which systematically biases the survival estimates of democracies upwards.)

The Polity dataset is fairly rich. Most researchers seem to use only the composite indexes of democracy and dictatorship it offers, but these indexes, while useful, do not have a strong theoretical motivation, as Cheibub, Gandhi, and Vreeland argue here. For my purposes, it is best to use the dataset to extract those authority characteristics of political regimes it purports to measure: the mechanisms of executive recruitment, the type of political competition, and the degree of executive constraint. Mechanisms of executive recruitment include hereditary selection, hybrid forms combining hereditary and electoral mechanisms, selection by small elites, rigged elections, irregular forms of seizing power, and competitive elections; types of political competition range from the repressed (all opposition banned, as in North Korea) to the open (typical of thriving democracies); and executive constraints range from unlimited to “parity” with the legislature. (See the Polity IV codebook for a full discussion). In theory, the dataset distinguishes eight kinds of executive recruitment mechanisms, ten types of political competition, and seven degrees of executive constraint, plus three different kinds of “interruption” (including breakdowns of state authority, loss of independence, and foreign invasion and occupation), leading to 563 possible patterns of political authority (560 combinations plus the three interruption codes), but these dimensions are all highly correlated (over .99); indeed, only 212 combinations of executive recruitment, political competition, and executive constraint actually appear in the data, most of them only once and for short periods of time, and it is obvious that some combinations do not even make sense. (And those that do make sense do not always capture all the information we would normally want about a political regime: Polity has no good measure for the extent of suffrage in competitive regimes, for example). But the dataset helpfully indicates how long each of these patterns lasts, so we can attempt a first cut at the question of the half-life of regimes using a Kaplan-Meier graph:
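
(For anyone who wants to see the mechanics: below is a minimal sketch of how such a curve can be estimated with R’s survival package. This is not the replication code linked at the end of the post, and the data frame and column names – a spell-level extract of Polity with a duration and an end indicator – are just placeholders.)

    library(survival)

    # Sketch only: assumes a spell-level extract of Polity IV ('patterns') with
    # one row per authority pattern, its length in years ('duration'), and an
    # indicator for whether it ended by 2010 ('ended'; 0 = right-censored).
    fit_patterns <- survfit(Surv(duration, ended) ~ 1, data = patterns)

    print(fit_patterns)   # the reported median survival time is the "half-life"
    plot(fit_patterns, xlab = "Years",
         ylab = "Proportion of authority patterns surviving")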

The half-life of an “authority pattern” – a combination of an executive recruitment mechanism, a type of political competition, and a specific form of executive constraint – is 6.6 years, though the tail of the distribution is very long: some of them have lasted for upwards of a century. Switzerland, for example, has had the same authority pattern for 162 years, and Afghanistan retained the same authority pattern (a hereditary monarchy) from 1800 to 1935. As it happens, social and political life in most places comes to be structured mainly by the long-lasting patterns, but most patterns of authority do not last that long. Incidentally, at this level of abstraction there are no great regional differences in the half-lives of authority patterns, though it does seem as if authority patterns last slightly longer in Europe and the Americas than in Africa and Asia:


Yet an “authority pattern” is too amorphous a unit of analysis. We might get a better handle on the question of comparative regime survival by looking specifically at the mechanism of executive selection, since the manner in which the chief power in the state is selected is normally thought to have far-reaching consequences: whether supreme power is attainable only by hereditary succession, through designation within a closed elite, via competitive elections, or by some other means seems to make an important difference.

Of all the mechanisms of executive selection identified in the Polity IV dataset, only one, “Competitive Elections,” is unambiguously democratic by most people’s lights. Though within the dataset the fact that a regime has competitive elections is no guarantee that it will also have universal suffrage, for the most part “competitive elections” picks out the countries that most people think of as democratic. We can thus calculate the duration of all periods of “competitive elections” and compare them to the duration of all “non-democratic” periods – those periods where executive selection happened through some other means. The details are somewhat tricky (see the code), but here are the results:
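
(Again, a sketch rather than the actual replication code: with a spell-level extract in hand, the comparison is just a stratified Kaplan-Meier fit. The column names below, including an 'exrec_label' factor for the recruitment category, are assumptions about how such an extract might be organized.)

    library(survival)

    # Sketch only: 'polity_spells' is assumed to have one row per executive-
    # recruitment spell, with 'duration' (years), 'ended' (1 = ended, 0 = censored),
    # and 'exrec_label' (e.g. "Hereditary monarchy", "Competitive elections", ...).
    fit_regimes <- survfit(Surv(duration, ended) ~ exrec_label, data = polity_spells)

    summary(fit_regimes)$table[, "median"]   # median survival ("half-life") by category
    plot(fit_regimes, col = seq_along(fit_regimes$strata),
         xlab = "Years", ylab = "Proportion of spells surviving")
    legend("topright", legend = names(fit_regimes$strata),
           col = seq_along(fit_regimes$strata), lty = 1, cex = 0.7)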


Some notes. As we might have expected from the discussion in the previous post, full hereditary monarchies (Russia under the Tsars, Saudi Arabia, Iran under the Shah, Portugal and Romania in the 19th century, Nepal in the 19th century, among others; there are 65 episodes in 40 countries in the dataset) have the longest half-lives (nearly 32 years, and more if we collapse the two hereditary monarchy categories; note these are not “constitutional” monarchies like the British one). But competitive electoral regimes are no slouches, with a half-life of about 17 years (in keeping with Jay’s numbers in this post, though he uses a different dataset), and as time goes on their survival rates seem to converge with those of monarchies. Similarly, “limited elite selection regimes” (e.g., single-party communist regimes, where a narrow clique selects the leader without open competition) have a half-life comparable to that of democracies, but as time goes on they tend to break down more; their survival rates seem to diverge from those of competitive electoral and monarchical regimes. Low survival rates are found especially among political forms that appear to have internal tensions, such as competitive authoritarian regimes, where elections exist and are contested by an opposition, but it is very hard for the opposition to attain real power (e.g., Zimbabwe today). I confess I don’t really understand Polity’s “Executive-guided transition” category, but it’s obviously a regime that is turning into something else (the Pinochet regime in Chile after the 1980 referendum but before the return of competitive elections counts, for example), and “ascription plus election” includes regimes where the monarch retains some real power but the legislature and other executive offices are no longer under its thumb (only a few are recorded in the data, including Belgium in the late 19th century and Nepal in the 1980s and 90s); it makes sense that such regimes, halfway between “real” monarchies and purely constitutional monarchies like the British, should have short half-lives, turning as the conflict plays out into either competitive electoral regimes or more absolute monarchies.

It is also interesting to compare the relative survival rates of competitive electoral patterns of authority vis-à-vis periods where selection happens by means other than competitive elections (regardless of whether those means stay the same):

Though the difference seems to narrow as time passes, the half-life of non-democracy since the 19th century has been a bit longer than the half-life of competitive electoral regimes (23 vs. 17 years). In sum, political regimes do not last much more than a generation.
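
(The dichotomous comparison requires one extra step, which is where the “tricky” details come in: consecutive non-democratic spells within a country have to be collapsed into single periods even when the selection mechanism changes. A rough sketch of one way to do this, building on the assumed spell-level frame above with an added 'country' and 'start' year column, follows; the actual code linked at the end of the post handles the details properly.)

    library(survival)

    # Sketch only: collapse consecutive spells with the same competitive /
    # non-competitive status within a country into single periods, carrying
    # over the censoring status of the last spell in each run.
    polity_spells$competitive <- polity_spells$exrec_label == "Competitive elections"
    polity_spells <- polity_spells[order(polity_spells$country, polity_spells$start), ]

    n <- nrow(polity_spells)
    same_run <- c(FALSE, polity_spells$country[-1] == polity_spells$country[-n] &
                         polity_spells$competitive[-1] == polity_spells$competitive[-n])
    polity_spells$period_id <- cumsum(!same_run)

    periods <- do.call(rbind, lapply(split(polity_spells, polity_spells$period_id), function(p) {
      data.frame(competitive = p$competitive[1],
                 duration    = sum(p$duration),
                 ended       = p$ended[nrow(p)])
    }))

    fit_periods <- survfit(Surv(duration, ended) ~ competitive, data = periods)
    summary(fit_periods)$table[, "median"]   # should recover half-lives like the 23 vs. 17 years above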

(For those still following, the regional breakdown indicates that competitive electoral periods have had the longest half-lives in Europe and the Americas, whereas non-democracy has had the longest half-lives in Africa and Asia; no special surprises there, though I am not sure about the reason).  

How does this relate to the half-life of leaders? For that, we turn to the ARCHIGOS dataset by Goemans, Gleditsch, and Chiozza, which contains information about the entry and exit dates of almost all political leaders of independent countries in the period 1840-2010. It’s a fantastic resource – more than 3,000 leader episodes, with information on their manner of entry and exit. And the conclusion one must draw from examining it is that power is extremely hard to hold on to; a ruler’s hold on power seems to decay in an exponential manner (note I haven’t checked that the decay really is exponential in the technical sense, though I'm thinking of doing that; one way of checking is sketched below). Over this vast span of time, covering all kinds of political regimes, the half-life of leaders is only about 2 years, or roughly a third of the median duration of an authority pattern, as we might have expected from the previous post (though the half-life of leaders is even smaller here):
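
(The exponential hunch is easy to eyeball, for what it’s worth: under a constant hazard the cumulative hazard is a straight line. A sketch of that check – not something this post actually reports – might look like the following, with placeholder column names for an ARCHIGOS-based leader file.)

    library(survival)

    # Sketch only: 'leaders' is assumed to have one row per leader episode, with
    # 'tenure' (years in power, assumed > 0) and 'out' (1 = left power, 0 = censored).
    km <- survfit(Surv(tenure, out) ~ 1, data = leaders)

    # Constant-hazard benchmark from an exponential fit: rate = exp(-intercept).
    exp_fit <- survreg(Surv(tenure, out) ~ 1, data = leaders, dist = "exponential")
    lambda  <- exp(-coef(exp_fit))

    # If decay really were exponential, the cumulative hazard would hug the dashed line.
    plot(km, fun = "cumhaz", conf.int = FALSE,
         xlab = "Years in power", ylab = "Cumulative hazard")
    curve(lambda * x, add = TRUE, lty = 2)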



Yet of course it is the people who beat the odds – those who last much longer than the average leader – who shape social and political life. (There’s an endless parade of mediocrities in the dataset, two-bit prime ministers gone after a few months of ineffectual dabbling and the like).

(But don’t some leaders come back to power after losing it? In fact, the vast majority of leaders attain power only once and never return, though about 100 did manage the feat three or more times. Practice does not seem to help; survival in power only appears to decrease with the number of previous times a leader has been in power, though note that the uncertainty of the estimates also increases, and one might expect age to take its toll too).

We are now in a position to extend the analysis in the post below by merging the ARCHIGOS and Polity datasets to calculate survival curves for leaders conditional on the pattern of executive recruitment. Though I would take these curves with a grain of salt, here are the results:
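
(The merge itself is conceptually simple, though the real work is in reconciling country codes and transition years; a rough sketch, with assumed column names, is below.)

    library(survival)

    # Sketch only: match each leader episode to the Polity authority
    # characteristics in force in the year the leader entered office.
    # Assumes COW-style country codes ('ccode') in both files, plus
    # 'startyear', 'tenure', and 'out' in 'leaders', and 'year' and
    # 'exrec_label' in a country-year version of Polity ('polity').
    leaders_polity <- merge(leaders, polity,
                            by.x = c("ccode", "startyear"),
                            by.y = c("ccode", "year"),
                            all.x = TRUE)

    fit_leaders <- survfit(Surv(tenure, out) ~ exrec_label, data = leaders_polity)
    summary(fit_leaders)$table[, "median"]   # leader half-lives by recruitment pattern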


As expected from the previous post, it’s good to be king – the half-life of absolute kings is about 12 years (and it’s almost always king: there are only 41 female leaders in a 3,000-case dataset). Interestingly, a similar result for the half-lives of Chinese emperors is reported here (10 years: Khmaladze, Brownrigg, and Haywood 2010, ungated), as well as for the half-lives of Roman emperors (11 years: Khmaladze, Brownrigg, and Haywood 2007, ungated). There is something about the deep structure of monarchies in many different periods and societies, it seems, that points to a half-life in power of about 10-13 years for monarchs.

More generally, authoritarianism pays in terms of leader tenure, despite the fact that non-competitive regimes do not always last longer than competitive ones. The longest leader half-lives after monarchs are found in limited elite selection regimes, executive-guided transitions (where non-democratic leaders are changing the rules), and competitive authoritarian regimes; but as regimes, democracies last longer than most of these (monarchies excepted; see above).

Another way of looking at this is to calculate what we might call the “personalization quotient” of a regime: divide the half-life of the leader (for a given regime) by the half-life of the regime to get an idea of the share of the regime’s half-life that a leader is expected to last. So a monarch is expected to last about 37% of the half-life of his regime (roughly 12 / 31.86); monarchy is the most intensely personalized of regimes, as one might have expected given that it is devoted to the maintenance of a family line. The next most personalized regimes are competitive authoritarian regimes (28%), limited elite selection regimes (16%), and “self-selection” regimes (15%); “executive-guided transitions” come out even higher (40%), but that is pretty much true by definition, so I don't make much of it. A competitive electoral regime has a personalization quotient of 8% – an expected leader half-life of about 2 years, divided by an expected regime duration of about 17 years. From the point of view of such a leader, it pays to try to move towards a competitive authoritarian regime, and it pays for the leader of a limited elite selection regime to move towards a formal hereditary monarchy (as is happening, in a sense, in North Korea right now, and almost happened in Egypt and Libya).
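
(The quotient itself is just a ratio of medians; with the two sets of hypothetical fits from the sketches above at hand, something like the following would do, provided the category labels line up.)

    # Sketch only: ratio of leader half-life to regime half-life by category,
    # using the (hypothetical) fits from the earlier sketches.
    leader_medians <- summary(fit_leaders)$table[, "median"]
    regime_medians <- summary(fit_regimes)$table[, "median"]

    personalization <- leader_medians / regime_medians[names(leader_medians)]
    round(100 * personalization)   # roughly 12 / 31.86, i.e. ~37-38%, for absolute monarchies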

But are authoritarian regimes riskier for their leaders, giving them extra reason to hang on to power? We can also look at that using the ARCHIGOS dataset. Though leaders in non-democratic regimes have a slightly higher risk of leaving office with their heads on pitchforks or hanging from lampposts, the vast majority leave by "regular" procedures.

More, perhaps, could be said. I’ve been wondering, for example, whether there is a relationship between the breakdown of particular regimes and the tenure of leaders, though I’m not sure how to go about tackling that question. From the point of view of the study of legitimacy, however, what strikes me is the general fragility of patterns of authority and rule: few patterns of authority are expected to have half-lives that exceed a single generation, and most don’t last nearly as long, regardless of their “legitimation formula” – heredity, competitive elections, ideology, whatever. Of course, some beat the odds, especially some competitive regimes and some monarchies, and these shape history. But the historical evidence suggests that they are in a sense the exception rather than the rule.

Code necessary for replicating the graphs in this post, plus further ideas for analysis, here and here. You will need to download the Polity IV and ARCHIGOS datasets directly, and this file of codes from my repository.

[Update: fixed some typos,  9 Feb 2012]

Tuesday, January 31, 2012

Comparative Political Leader Survival, 1946-2008

After playing around with Jay Ulfelder's data on the survival of democracy in the previous post, it occurred to me that I have not seen survival estimates for leaders in different kinds of regimes like the ones he discusses for democracies. So, in the spirit of exploratory data analysis, here are some graphs using data from the DD dataset of political regimes by Cheibub, Gandhi, and Vreeland, which provides information about regime type, effective heads of government, and leadership tenure for most countries in the world for the period 1946-2008. (Fuller data and methods note at the end of the post).

First, let's look at a simple estimate of leader survival for all (effective) political leaders in all regimes in the post WWII era:


The figure shows an estimate of the proportion of leaders who are expected to still be in power after n years. So, for example, after four years in power, less than half of all leaders are expected to still be in power, and after 20 years less than 10% of all leaders are expected to still be in power; the majority of all leaders last less than 4 years in power, and the vast majority less than 5. [Update: of course, some of these leaders come back to power after a shorter or longer period out of power.] This may be easier to see if we draw the plot on a logarithmic scale:
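
(A sketch of both plots, assuming the leader-spell file described in the methods note at the end of the post has a column for years in power and an exit indicator; the actual variable names in that file differ.)

    library(survival)

    # Sketch only: 'dd_leaders' stands in for the leader-spell file built from DD
    # (ddsurvival.csv), with assumed columns 'tenure' (years in power) and
    # 'exit' (1 = left power, 0 = censored).
    dd_leaders <- read.csv("ddsurvival.csv")

    fit_all <- survfit(Surv(tenure, exit) ~ 1, data = dd_leaders)
    plot(fit_all, xlab = "Years in power", ylab = "Proportion still in power")

    # The same curve with a logarithmic y-axis makes the long tail easier to see.
    plot(fit_all, log = TRUE, conf.int = FALSE,
         xlab = "Years in power", ylab = "Proportion still in power (log scale)")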
This looks like a classic "long tail" distribution of a kind often produced by "rich get richer" processes: most leaders don't last in power very long, but those who beat the odds can do very well indeed, as power feeds on itself and leaders become increasingly difficult to dislodge. (I won't say anything about power laws for fear of attracting the ire of the statistical gods). 

Nevertheless, democratic leaders and non-democratic leaders aren't equally successful at hanging on to power:
While the median democratic leader can expect less than 3 years in power, the median autocrat can expect a bit less than 7. And the gap widens with time: less than 8% of all democratic leaders can expect to hang on to power for more than 10 years, but more than 40% of autocrats do, and no democratic leader in the sample has lasted more than 25 years in power (the longest-serving are Lynden Pindling of the Bahamas and Eric Williams of Trinidad and Tobago; your mileage may vary as to how democratic you think they were, but that's how they are coded), whereas nearly 20% of autocrats last that long. This may seem obvious (after all, autocrats typically impose larger barriers to political competition than democratic leaders, and ordinary people face larger obstacles in trying to get rid of them) but it also presents a bit of a puzzle, for democracies are supposed to be more responsive to popular wishes and more legitimate, and dictators are always at risk of being overthrown by their close associates. (For one influential explanation of the observed pattern of survival by Bueno de Mesquita, Smith, Siverson, and Morrow, see here and here). The greater legitimacy of democratic leaders, and their closer connection to popular opinion (to whatever degree: let's not exaggerate, either), does not seem to translate into a surer hold on power.
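
(A sketch of the comparison, with a log-rank test thrown in to check that the gap is not just noise; 'democracy' is an assumed 0/1 dummy in the same leader file used in the sketch above.)

    library(survival)

    # Sketch only: compare democratic and non-democratic leaders.
    fit_dem <- survfit(Surv(tenure, exit) ~ democracy, data = dd_leaders)

    summary(fit_dem)$table[, "median"]                            # medians like the ~3 vs. ~7 years above
    survdiff(Surv(tenure, exit) ~ democracy, data = dd_leaders)   # log-rank test of the gap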

Not all autocrats do equally well; absolute monarchs are especially successful at holding on to power:

Though the uncertainty of the survival estimate is larger for monarchs than for other regimes (there are just fewer monarchs in the sample) their advantage is large enough to be noticeable above the noise: nearly 60% of all monarchs can expect to last 20 or more years in power, while only 20% of other autocratic rulers can expect to survive that long, and less than 1% of democratic leaders can hope for such a career. This is another reason to think the Middle Eastern monarchs are probably safer from being overthrown than the leaders of the "republican" regimes, as Victor Menaldo has recently argued. His argument points to specific features of the political culture of these monarchies that enable elites to better monitor and discipline leaders; but other things may be going on as well (monarchs elsewhere in the world also appear to have done well, so whatever enables monarchs in the Middle East to survive appears to also work elsewhere, though admittedly most of the world's absolute monarchs since 1946 have been concentrated in the Middle East). It is also interesting to note that military and civilian dictators do not differ (much) in terms of their survival expectations (the estimates fall within each other's 95% confidence intervals), despite theoretical and empirical work that suggests that military regimes are less stable than civilian dictatorships. (Of course, this could be due to any number of things, including problems with the coding of the data and the fact that the stability of regimes is a different thing from the stability of any given leader's grip on power).

I was also curious to see whether the survival of leaders differs across regions of the world. And at least for non-democratic leaders, that seems to be the case:
There's a lot of uncertainty in these estimates (and I could have made a mistake), but in general it seems that autocratic leaders have had less success hanging on to power in Latin America, despite the USA's not always benevolent influence in the region. That was surprising to me, so perhaps someone will tell me why this is wrong. By contrast, democratic leaders have very similar survival expectations all over the world; there is no evidence of "regional" effects:
Now that I've mentioned the USA's influence, we might as well look into whether autocrats (or democratic leaders) have had more trouble hanging on to power during or after the cold war. Surprisingly, it seems they have not: leaders of both regime types had the same survival expectations in both periods. But this was tricky to calculate, and it is the most likely spot where I might have made a mistake (see the sources and methods note below):
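
(A rough sketch of how such a comparison might be set up – not the calculation behind ddcoldwar.csv. Spells that straddle the end of the cold war are the awkward cases; here they are simply assigned to the period in which they began, and 1991 is an assumed cutpoint.)

    library(survival)

    # Sketch only: assumes an 'entryyear' column in addition to 'tenure', 'exit',
    # and 'democracy'; straddling spells are assigned to their starting period.
    dd_leaders$period <- ifelse(dd_leaders$entryyear <= 1991, "Cold War", "Post-Cold War")

    fit_cw <- survfit(Surv(tenure, exit) ~ democracy + period, data = dd_leaders)
    plot(fit_cw, col = 1:4, xlab = "Years in power", ylab = "Proportion still in power")
    legend("topright", legend = names(fit_cw$strata), col = 1:4, lty = 1, cex = 0.7)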

Sources and methods. A full description of the DD dataset can be found here, including the criteria it uses for categorizing regimes as democratic or non-democratic and a general defense of its methodological approach. (It used to be possible to download it as well from that page, but the form no longer seems to be working. I've animated the dataset here.) These criteria have been criticized for a variety of reasons, but in general DD does not suffer from worse problems than many of the other common datasets of political regimes (like Polity IV or Freedom House). It is possible that some of the coding decisions they make might influence the estimates of survival presented above, e.g. because they err on the side of classifying some regimes as dictatorships that could have been considered democratic (when there has been no alternation in power). This would tend to bias downward the survival estimates of democratic leaders. At any rate, DD includes information about leaders and their tenure, which is missing in other datasets and makes the data-wrangling easier, though this information is not always complete (there is sometimes more than one leader in a year for a given country, a fact that the dataset must omit, given its country-year resolution) and is not quite in the right format for survival analysis. I thus had to reshape it (R code and a general description of the process; rank amateurism on display). I created three data files: one for the plots of survival for all leaders and leaders by democracy/non-democracy (ddsurvival.csv); one for the plots of survival by autocratic regime type (ddsurvival2.csv); and one for the plots of survival during and after the cold war (ddcoldwar.csv). (R code for generating all plots is here). These files treat the leader spell as a case; "right censoring" occurs when the leader dies or if the leader is still in power by 2008 (see the DD codebook; the files use a variable called ecens2). Since DD does not distinguish between deaths by natural causes and political assassinations or death in revolution, this introduces a certain amount of bias; in theory, "political" deaths should not result in "censoring" of the data. I should note that the plot of survival by autocratic regime type does not take into account some cases where "left censoring" occurs (i.e., when a regime starts before 1946), though the number of cases where that is a problem is very small. Finally, there are a small number of repeated cases in ddsurvival.csv and ddcoldwar.csv due to problems guessing the right "entry date" for the leader; these must introduce some small amount of error, though I couldn't possibly say how much or in what direction the bias would work.
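
(For the curious, the reshaping step looks roughly like this – a sketch, not the linked code, and with placeholder column names, since the DD variable names differ. The idea is to collapse consecutive country-years under the same leader into one spell, starting a new spell whenever the leader changes, the series has a gap, or a former leader returns.)

    # Sketch only: 'dd' is country-year data with assumed columns 'country',
    # 'year', and 'leader'. Consecutive years with the same leader become one spell.
    dd <- dd[order(dd$country, dd$year), ]
    n  <- nrow(dd)

    new_spell <- c(TRUE, dd$country[-1] != dd$country[-n] |
                         dd$leader[-1]  != dd$leader[-n]  |
                         dd$year[-1]    != dd$year[-n] + 1)
    dd$spell_id <- cumsum(new_spell)

    spells <- do.call(rbind, lapply(split(dd, dd$spell_id), function(s) {
      data.frame(country = s$country[1],
                 leader  = s$leader[1],
                 start   = min(s$year),
                 end     = max(s$year),
                 tenure  = max(s$year) - min(s$year) + 1)
    }))

An exit/censoring indicator (death in office, still in power by 2008) would then have to be attached to each spell, which is roughly what the ecens2 variable mentioned above does in the actual files.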

[Update, 1/31/2012: Fixed minor typos]
[Update, 1/02/2012: Changed location of code and data files]

Wednesday, January 25, 2012

How Fragile is Democracy? A Footnote on Jay Ulfelder’s Dilemmas of Democratic Consolidation

Stories about what makes democracy stable tend to take one of two forms. The first stresses socialization, learning, and the gradual development of norms of tolerance for political opposition and alternation in power. On this view, stability usually comes with time, and the breakdown of democracy is to be explained primarily by reference to cultural and normative deficits: corrupt socialization, the emergence or widespread acceptance of anti-democratic ideologies, the weakness of democratic norms of tolerance, and the like. In its simplest form, this view suggests that democracy fails due to a lack of a proper “democratic culture” and/or “inexperience” with democratic institutions. The second kind of story, by contrast, stresses the strategic relationships among major collective actors, like political parties, the military, the government, and foreign powers. On this view, stability emerges only when democracy is a self-enforcing equilibrium (ungated), that is, when all actors find that supporting democratic institutions is their “best response” to the actions of everyone else in light of their own interests, and its breakdown is to be explained primarily by reference to the changing interests and expectations of these actors.

These two types of stories are not wholly incompatible, to be sure: norms can be strategically undermined and manipulated, and an organization’s view of its interests is typically shaped, even constituted, by whatever shared values its members have been socialized into. Strategic equilibria may only be achievable after a period of learning, and the resulting strategic configurations may crystallize into norms. Yet scholars of democratization typically fall on one or the other side of this divide, and the differing stresses they place on normative vs. strategic considerations when explaining the stability or fragility of democracy have important practical implications. Jay Ulfelder’s Dilemmas of Democratic Consolidation: A Game-Theory Approach falls squarely on the “strategic” side of the divide. (Full disclosure: I don’t know Jay personally, but he’s a virtual acquaintance. I enjoy reading his blog and interacting with him online; I have linked to his work before, and he has returned the favour. And while I was writing this post, he cheerfully answered my ignorant questions about his data and models).

Jay identifies three actors as especially relevant to the stability of democracy: the incumbent government (and its key constituencies), opposition forces, and the military. (One might have included foreign powers as well, as some people with similar models of democratic stability do, but simplicity is a virtue in game-theoretic approaches, and anyway the basic framework can easily be extended in that way). The importance of the first two actors is clear: democracy (in a minimal sense, at least; more on this in another post) survives when the government is willing to yield control of the state apparatus to the opposition when it loses an election (it prefers to yield power rather than stage an autogolpe or engage in fraudulent electoral practices), and when the opposition is not tempted to overthrow the government by extra-legal means if it is unable to win an election. The military has independent importance in this framework because, if the government faces substantial opposition mobilization, the military is likely to be the only actor with sufficient organizational capacity to repress it. But this capability, of course, introduces a familiar problem: if the military can repress the opposition, why wouldn’t it overthrow the government as well? A cultural or normative explanation of military self-restraint would stress the development of a particular military culture (professionalism, a strict norm of civilian supremacy); and while such an explanation is certainly part of a sociologically complete answer (even though I do not in general find it convincing), a strategic understanding of motivation suggests that given sufficient incentives, all norms break. One must explain the self-restraint of not only the military but of all relevant actors – their willingness to respect whatever norms of political competition structure the political system – at least partly in terms of how respecting these norms is strategically appropriate for them given their interests (regardless of how these interests come to be constituted in the first place, a question which we leave aside for the moment). If the “best response” (in terms of the protection of their interests) of at least some of these actors to the actions of others is to defect from the norms of political competition, then democracy will not survive. Democratization is not like growing up: democracy can fail even after long experience, if the key actors involved find that in order to protect their important interests their best response to the actions of others is to undermine it.

Given this basic framework, two important questions emerge. First, we would want to know how different regimes affect the interests of particular actors, and how these actors in turn come to understand the ways in which different regimes affect these interests. An answer to this question tells us how the “preferences” of different groups for different kinds of democratic and non-democratic regimes come to be structured, and indicates what factors can change their evaluation of the available possibilities. Jay glosses over this question perhaps too quickly, speaking simply of the “material” interests of the various actors under different regimes, even though his own case narratives later in the book suggest that other kinds of interests – status interests, for example – are also quite important. (Militaries seem to understand their interests in terms of status, in particular). At any rate, the model he develops assumes that the preferences for particular regimes of the key collective actors will vary depending on whether there is open political competition for power, and on whether or not they or their allies are in control of the state. Whatever the regime type, the benefits that come from controlling the state (or having one’s allies control the state) will be partly offset by the expected costs of losing power, though these prospects will differ between regime types; at least in theory, the average costs of losing power in a democracy will tend to be smaller than the costs of losing power in an autocracy, but the probability of losing power might seem to be larger in a democracy (and conversely, for opposition forces, the probability of attaining power might seem to be smaller in an autocratic regime). Changes in the expected benefits of holding power (or having one’s allies hold power) and/or the expected costs of losing it (or having one’s allies lose it) will thus affect the evaluation of different regime types, and hence the preferences of actors for different regime types. Big oil booms or busts in petrostates, changes in the ethnic composition of a population or in the prevalence of identity voting, the availability of rents to allies of the opposition and the government, expected cuts to military budgets or threats to territorial integrity, are all among the sorts of things that should have an impact on whether relevant collective actors prefer democracy to autocracy (and on whether they prefer the opposition or the military to be in control in their disfavoured options). Incidentally, here we have a potential explanation for the fact (documented in the book) that democracy tends to last longer in richer countries: to the extent that poorer countries have more rent-driven economies (economies where wealth depends primarily on control of political power), the costs of losing power will be proportionately larger for the government, and the probability of attaining power proportionately smaller for the opposition, leading to greater incentives to “defect” on all sides - to undermine democracy if you are the government, or to attempt extra-legal seizures of power if you are not.

Second, we want to know what parameters determine whether an actor thinks their best response to the actions of others in light of their interests should be to support democracy or to attempt to undermine it (“stage a coup,” for simplicity), given a set of preferences over regime types. Using a “reduced form” game theoretical model, Jay suggests that the relevant parameters are the degree of uncertainty about the preferences of other actors for different regime types, and their capacities to pull off a coup. (Jay actually distinguishes between the coordination costs of different actors and their capacity for pulling off a coup; but these seem to be entangled with one another, since a collective actor’s capabilities to pull off a coup are greatly influenced by its coordination costs, and it might have simplified his model to have a single parameter summarizing the combined effect of both technical capabilities and costs of coordination.) Generally speaking, the government and the military should have greater capacities to undermine democracy than opposition forces, since the latter are (by definition) shut out from power; and indeed of the 195 episodes of democracy in the 1955-2007 period that Jay identifies, fully 27% ended via  “executive coup” (a shorthand for all the ways in which sitting incumbents can undermine political competition, from the quick autogolpe of a Fujimori to the more drawn-out dismantling of the independence of all institutions of a Chavez or Putin), 21% via classic military coup, and only 2.6% via opposition-led rebellion (44% survived beyond 2007, and about 5% ended in other ways, such as via foreign intervention or the splitting of the country). Nevertheless, militaries and governments are not always capable of pulling off coups due to organizational disarray or other divisions (as in Ukraine in the early 2000s, a case discussed in the book, when the bits of the Soviet army that became the Ukrainian army were in no position to stage a coup despite consistent government attempts to cut its budget and privileges), and opposition forces sometimes have access to considerable resources (e.g., during the coup attempt against Chavez in 2002 the main television stations were in the hands of opposition forces); and capacities can change, sometimes abruptly. (In general, while technical capacities are relatively stable, costs of coordination are not constant or wholly under the control of the relevant forces; relatively insignificant events can suddenly lower or raise them, as when a man setting himself on fire in Tunisia catalyzed protests and collective action that continues to this day. To speak of a “costs of coordination” parameter is merely to summarize a wide array of factors that affect the capacities of groups to engage successfully in risky collective action, a point that I don’t think is sufficiently stressed in the book).

The question of uncertainty is more complicated. Jay argues that even if every relevant collective force prefers democracy, given enough uncertainty about the intentions and capacities of other actors, particular groups may still wish to “strike first” to avoid being dominated by others, regardless of their capacities. If the opposition thinks the government is trying to set up a dictatorship (even if this is not the intention) it may feel that its interests would be better protected by staging a preemptive coup, even if its first preference would be for a democratic regime (this was arguably the case in the 2002 coup against Chavez, as the book notes, though I should note that some of the leaders of the coup might in fact have preferred an autocratic regime). This naturally raises the question of how different actors can credibly signal commitment to a democratic regime, something that is a bit underexplored in the book. For one thing, signals of commitment to democracy are partly tied to changes in capabilities to undermine it: saying that one supports democracy is less credible when one is busy building up ethnic armies, as Jay discusses in the case of the breakdown of democracy in Cyprus in the 1960s. (Oddly, this case does not appear in the dataset in the appendix). Indeed, uncertainty can be induced by well-meaning “democracy aid” that appears to change the relative capacities of actors. For example, aid to democracy-promoting NGOs may make the government feel more threatened by opposition forces, and hence more likely to undermine electoral institutions and harass such organizations in order to prevent a later loss of power (as appears to have been the case, in part, in Russia after the “color revolutions” of the 2000s).

Perhaps the most interesting section of the book is its critique of democracy promotion as a way of "consolidating" democracy. Drawing on work by Carothers, Jay argues that much democracy promotion is insufficiently sensitive to the strategic implications of particular interventions, and too wedded to functionalist assumptions about the proper “components” of democracy. (Some of the arguments here echo Bill Easterly’s critique of development assistance, especially the points about the difficulties of evaluating a lot of “democracy promotion” activities, our lack of knowledge about important aspects of the democratization process, the bureaucratic incentives to measure the effectiveness of democracy promotion by inputs rather than outputs, and the tendency to disregard the wider strategic effects of these interventions on the domestic politics of the target country). It is too easy to give money for “civil society” (in fact, the availability of such money will usually stimulate the creation of organizations with high-sounding names but dubious democratic credentials); it is much more difficult to manage a delicate strategic situation so that all actors end up signalling credible commitments to democratic norms. The book’s framework suggests that investments in institutions that guarantee credibility (like election observation missions, the judiciary, and electoral commissions) are thus better than direct aid to opposition forces or “civil society” promotion in fragile democracies.

Now for some (small) critique. Aside from some quantitative evidence, Jay uses various case studies of democratic breakdown – in Ukraine, Fiji, Cyprus, Venezuela, and Thailand – and survival – in Spain – to illustrate the explanatory power of the basic model. Basically, what he does is to show how if we look at the parameters that the model indicates are “interesting” – the apparent costs and benefits of different regimes for the relevant actors, their signals of commitment to democracy, and their (changing) capacities for staging coups – we can construct narratives that shed light on the process of breakdown (or survival). Despite the fact that I am basically inclined to believe the strategic model of democratization, I confess I was not always entirely convinced by the interpretation of events in these narratives. For example, in the case of Venezuela it appears that the undermining of electoral competition occurred during a boom in oil prices, which increased the government’s capacity for subverting democracy, whereas Jay stresses the previous period of stagnation (which did, to be sure, loosen the commitment of various actors to democracy, and led to a number of unsuccessful coup attempts). And while the narrative Jay constructs can certainly be adjusted to account for this, it may be too easily adjustable, and one may need a clearer picture of how interests are affected by different kinds of regimes to make it fully convincing.  (Including status considerations: the demand for respect was an important ingredient in the rise of Chavez, for example).

Moreover, the evidence presented does not always sufficiently establish how important strategic considerations are relative to alternative explanations stressing norms, learning, and the like. Consider, for example, figure 3.4 from the book:


The figure represents the estimated proportion of democracies that survive after n years since their founding elections (the "Kaplan-Meier" estimate); the dotted red lines indicate that about half of all democracies last 15 years or less. (The dotted black lines represent the 95% confidence interval). Clearly, most democracies don’t last long; but the curve does not seem to indicate that longer-lasting democracies have a relatively constant risk of breakdown (which is what one would expect from a purely strategic model). If learning and other “cultural” considerations were really important, should one expect a slower decay process, or not? (I’m not actually sure). What one would like to know, among other things, is how likely it is that a democracy breaks down given that it has lasted n years, a quantity I would not know how to estimate (and perhaps cannot be cleanly estimated). Similarly, Jay shows that democracies seem to last longer in Eurasia and Latin America than in Africa – by quite substantial margins – but it is unclear that the parameters identified by the model can account for this difference, and the basic survival curves seem at least consistent with various “cultural” stories. Or consider the following figure, which I’ve produced using the data in the appendix (it's not actually in the book):


If I haven’t made any obvious mistakes, the figure suggests that democratic episodes tend to last longer on a country’s later attempts than on its first attempt. (So Egypt and Tunisia are not likely to still be democratic 10 years from now, even if they manage a relatively successful transition now, but might have better luck in a later attempt). But given the book’s model, this is puzzling: why is there a “learning” effect at all? Why should the strategic situation “improve” on the second attempt at democracy?
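
(A sketch of how the figure’s comparison can be set up from the appendix data; 'episodes', 'duration', 'ended', and 'attempt' are placeholder names for the episode list.)

    library(survival)

    # Sketch only: one row per democratic episode, with its length in years
    # ('duration'), whether it ended by 2007 ('ended'), and which attempt it
    # was for that country ('attempt').
    episodes$first <- ifelse(episodes$attempt == 1, "First attempt", "Later attempt")

    fit_att <- survfit(Surv(duration, ended) ~ first, data = episodes)
    summary(fit_att)$table[, "median"]
    plot(fit_att, col = 1:2, xlab = "Years since founding election",
         ylab = "Proportion of democracies surviving")
    legend("topright", legend = names(fit_att$strata), col = 1:2, lty = 1)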

It is also interesting to look at countries that have had lots of democratic spells (and hence lots of breakdowns). From the data in the book, 9 countries have had 4 or more “spells” of democracy, including Greece, Argentina, Syria, Peru, Turkey, Ghana, Sierra Leone, and Ecuador (Argentina is the world champion, with 7 spells and 6 breakdowns). Why have democratic regimes in these countries been so unstable? From the model in the book, one would expect that some features of these states impose major costs on some actors if they accept democracy; make their capacities to undermine it (or demand it) fluctuate widely; and/or make it difficult for credible commitments to democracy to be sustained (leading to a great degree of uncertainty about the intentions of major actors). Are these features especially pronounced in these countries, relative to others? Or is the degree of political conflict over regimes in these countries to be explained in other ways? Inquiring minds want to know.

Anyway, I enjoyed the book and recommend it to anyone interested in democratization. Also, if you have made it this far you should definitely read Jay’s blog.

[Update 1/26/2012: a few minor stylistic adjustments]

Wednesday, January 18, 2012

Belief and Action


 (Warning: somewhat abstruse and probably wrong philosophical argument. Follows up in a more philosophical vein some of the themes in the post on emotion in authoritarian states)

What does it mean to “believe” something? In one traditional model of belief, a person believes X if, on asking herself the question of whether X is true, or is the case, she answers affirmatively. Moreover, in assenting internally to this proposition, she also commits herself to the implications of this proposition for action; to the extent that she rejects some of these implications, she must revise her assent, or else she does not really believe in X. (Let’s put aside the complications introduced by the idea of believing something with some greater or lesser degree of confidence). This model of belief is as old as Plato (see Sophist 263e-264b): to have a doxa (a “seems to me” condition, usually translated as belief or opinion) is to assent internally to a proposition, and such assent implies commitment to the implications of that proposition, both theoretical and practical, at least to the extent that these implications can be made out. (Sometimes they cannot be made out very far). The key point in this model is that my subjective, first-personal answer to the question of whether X is true has special weight as evidence that I believe (or do not believe) X; and that my holding of that belief (in the sense of answering the question of whether X is true in some particular way, as if the belief were a kind of database record – if present, I believe it, if not, I don’t) is part of the causal pathways that determine my acting in particular ways rather than others. The model thus posits a separation between my private beliefs (which can be identified by asking myself questions, but are not observable to anyone else) and my public behaviour; and indicates that the latter is partly caused by the former (insofar as my beliefs and desires jointly determine my actions).

Yet though this model of belief is serviceable in most cases, I’ve come to think it misleads us when trying to understand beliefs in contexts where the consequences of not believing something are either dire or minimal. To see this, consider a slightly different model of belief, one that takes a third-personal rather than first-personal perspective. In this model, to say that A believes X is to say that A acts in ways consistent with X and its implications. If asked whether X is true, A will say yes; if A asks herself whether X is true, she will say yes; if X implies doing F, then A will try to do F. To say that A believes X (with some greater or lesser degree of conviction), then, is to summarize the degree to which A’s actions are consistent with X; and the greater the consistency of this behaviour, the greater the degree of “belief” we are justified in attributing to A. (If I say to myself that I do not believe the rope bridge will hold my weight, and that I do not want to fall down to the raging rapids below, but then nevertheless walk on the bridge, then others can justifiably say that I do not quite believe what I said to myself, even if I said it with conviction and sincerity). My first-personal answer to the question of whether X is true, in other words, has no special weight as evidence that I believe X; what matters is the consistency of a pattern of thought and behaviour (of which my first-personal answer to the question of whether X is true is a part, to be sure). Here belief and action are not causally connected; rather, to say that A believes X is simply to say that A is likely to act in ways consistent with X (including saying to herself that X is true).

Now let’s consider a context in which not acting in (some) ways that are consistent with X produces bad consequences. Consider Vaclav Havel’s famous example of the greengrocer who puts up a sign saying “Workers of the World, Unite.” Not putting up a sign did not necessarily mean horrid punishment in the Czechoslovakia of the 1980s, but it certainly prevented “a quiet life.” Did the greengrocer believe what the sign said? Havel was doubtful; the sign, he thought, reasonably enough, was merely a way of declaring his loyalty to the regime, and had little to do with the greengrocer’s private beliefs (my views exactly). But perhaps when the greengrocer asked himself the question of the unity of the workers in the dead of night, he thought yes, that is a good idea; and he certainly acted in some ways that were consistent with the slogan. Does this mean that he believed in the slogan? In the first model, this is the most important piece of evidence; to the extent that the greengrocer said to himself yes, the workers of the world should unite, then he believed in the slogan (even if he was also cowardly and incapable of committing himself to the full implications of his belief). In the second model, however, whether or not the greengrocer answered that question to himself in the affirmative is much less important, for it is clear that he will not really act in ways consistent with the slogan absent the constraint. The attribution of belief here implies a prediction as to how the greengrocer would act if the constraint changed (if collective action became possible, for example, or if he had the opportunity to emigrate), but the prediction does not depend on the greengrocer's private answer under constrained conditions to the question of whether the workers of the world should unite. In fact, the attribution of belief here does not say anything about the subjective state of the greengrocer’s mind. In this model, we would only say that the greengrocer really believes in the slogan if, absent the constraint, he continued acting in the same ways as before, or even started proselytizing for the coming revolution, remained in the communist party, etc.

A similar problem arises in contexts where the bad consequences of assenting to some proposition and of behaving in accordance with its implications are minimal. In such circumstances, people often express “beliefs” (i.e., assent to particular statements) that are not particularly predictive of what they will do, or especially well-integrated with their actions. Consider people who, in anonymous telephone polls, give assent to the proposition that “Obama was not born in the USA.” This assent is part of a pattern of action and behaviour that is clearly hostile to Obama (it shows disapproval), but it need not indicate that these people are willing to engage in civil disobedience, or otherwise act in ways that are fully consistent with that proposition. The question of whether they truly believe that Obama was not born in the USA can only be answered by observing the extent to which these people are willing to attain consistency in their pragmatic orientation towards that particular proposition; the true believers are those who make fools of themselves on TV or who bankrupt themselves attempting to “prove” the proposition even in the face of a hostile social reaction.

A lot of public opinion surveys basically measure assent to propositions under conditions where the consequences of assent (or lack of assent) are minimal. This is fine when we are talking about the relationship between such assent and acts with small costs like voting (there’s very little cost to me in harmonizing a stated preference for a candidate and voting for that candidate), but far more problematic when talking about the relationship between stated assent and acts with large costs, or complex actions involving many changes in habitual patterns of behaviour. Consider, for example, the fact that vast majorities of people in countries with all sorts of political systems assent to the proposition that “democracy is the best system of government.” Here’s table one of Inglehart 2003 (ungated earlier version):


Does this mean that “democracy” could be easily sustained in all of these countries, since so many people seem to think it a very good system of government? Does it mean that authoritarian systems in all these countries are in danger? This seems unlikely: measured assent to propositions when such assent has no particular consequences is not predictive of people’s actions under different constraints. I would go so far as to say that most of the people here do not really believe democracy is the best form of government in the third-personal sense explicated above; pressures towards pragmatic consistency with respect to such a vague statement are minimal in the survey context. To be sure, Inglehart finds a correlation between aggregate answers to a number of related questions about the goodness of democracy and the history of democracy in a country, as measured by the Freedom House index - see table 3 – but this correlation says nothing about the consistency of individual answers to the questions Inglehart uses to measure support for democracy, which may in fact be quite inconsistent among themselves and with respect to implied behaviours. Assent to propositions, in other words, is not sufficient evidence of particular present or future commitments to action. But to say that belief in the sense of the second model (as the consistency of a pattern of thought and behaviour) drives behaviour is merely to state a tautology, since belief in the third-personal sense simply is a certain consistency of behaviour.

Sunday, January 01, 2012

The Complexity of Emotion in Authoritarian States


Seeing the videos of crying North Koreans after the death of Kim Jong-il, many people gravitate to the question of whether the emotion on display there is “genuine.” As I’ve written before, I think this question misses the point: to the extent that cults of personality matter politically (that is, secure ongoing commitments to a regime and its institutions), the genuineness of emotion hardly matters (though it doesn’t hurt). Cults of personality work precisely by making it very hard for people not to provide credible signals of commitment to a political leader (including, if necessary, proper public mourning when they die, complete with sufficient displays of crying and rending of garments). And North Korea is not a place where people who do not feel the requisite emotions can safely stay home, much less display unapproved emotions in unapproved ways. If nothing else, the inminban (neighborhood committee: like your nosy neighbors, only superempowered to snoop on you) will note your uncooperative and recalcitrant disposition, and then you may be passed over for job opportunities or promotions (especially important in relatively prosperous Pyongyang, where most of the videos are coming from); your family may encounter difficulties in securing educational opportunities and various material goods (the state, after all, controls most of these opportunities); and of course you (and your family) may be punished in a variety of ways, depending on how severe your “lack of respect” for the late and dear leader is judged to be. [Update 15 January 2012: via Doug Mataconis, I learn that people are in fact being punished for insufficient mourning, as expected].

Under the circumstances, a bout of competitive crying (helpfully encouraged here and there by zealous supporters or genuinely distressed people) is a relatively low price to pay to be left alone; and there is some evidence that at least some people engaged in this sort of strategic mourning the last time North Korea had a leadership transition, when Kim Il-Sung died (as I discussed at the end of this post, on the basis of some anecdotes presented in Barbara Demick’s fantastic Nothing to Envy). But of course by participating in the official ritual of mourning regardless of your “sincere” feelings you confuse everyone around you, including, it must be noted, supposedly “well-informed” North Korea watchers. How could you possibly tell who might not feel genuinely sad (outside a very small circle of close family members, perhaps), when everyone around you seems to be crying so hard about the death of the leader, and the state broadcasts carefully chosen images that suggest that the entire nation is in shock and mourning? (Note how few images have been shown from cities like Chongjin, where people are far less privileged than the residents of Pyongyang, have more access to news and information coming across the border from China, and where anti-Kim feeling is not entirely unknown). Natural cognitive biases (the “availability heuristic,” for example) and social cues all conspire to tell the disaffected that they are alone in their indifference or hatred for the recently departed; in fact, they tell them that their very feelings must be mistaken, and that they better get the right kind of feelings, pronto. Could you, dear reader, remain sulkily at home in these circumstances, with no certainty of receiving any support from anybody should you get in trouble with the authorities, just to make a statement? If so, you are probably made of sterner stuff than most.

Incidentally, it is worth noting that crying convincingly is not that hard to do, especially in groups, though it seems as if only genuinely distressed people could manage it. Like yawning or laughing, crying is often contagious, and just as groups of people often laugh hard and genuinely at unfunny jokes, groups of people can cry hard and genuinely for reasons that have little to do with “real” grief. Funeral practices in many nations often include or have included groups of mourners who are expected, sometimes even paid, to engage in ostentatious displays of grief that may be far out of proportion to the sentiments of those present, and that at any rate amplify whatever actual feelings of grief others may be experiencing. As some have noted, funeral attendance (accompanied by appropriate displays of emotion) is an important part of Korean cultural norms, indicating respect for the dead; flattery inflation can take care of the rest. And even if you are not directly ordered to cry, “spontaneous” sorrow is a useful signal to express in these circumstances, and the appropriate language for expressing such sorrow is known to all in North Korea, and helpfully reinforced by state propaganda. (This includes the knowledge of where to congregate, what to bring, what to wear, etc.)

Nevertheless, the question of whether the people shown in those videos are actually feeling distress and sadness is understandable. In many social situations, the genuineness of emotion really matters to us, and the possibility that North Koreans genuinely cared for Kim Jong-il makes us uneasy. It suggests that people can be easily “brainwashed,” in this case into caring for a man who, by almost any objective measure, made their lives much worse than they would otherwise have been, and in fact actively harmed them by his rule.

North Koreans are certainly exposed to much propaganda claiming that their leader has godlike powers, and often have great difficulty accessing alternative sources of information. (It is not, however, impossible for them to access such information, especially since the 1990s, and many people, especially in places close to the Chinese border, appear to have done so). The North Korean propaganda agencies have long experience in creating narratives of national resentment that deflect responsibility for outcomes from leaders onto outsiders, and these narratives appear to resonate at some level with many people in the DPRK. Indeed, sometimes their claims are even minimally plausible: the US and other powers do bear some responsibility for North Korea’s current state, and the atrocities of the Korean War were not all (or even mostly) committed by communist forces. It is but a short step from here to the thought that in this sort of information environment most people are likely to believe special claims about the Kim family, and hence are likely to have felt genuine grief at Kim Jong-il’s passing. Our folk-psychological ideas postulate a simple connection between information, belief, and emotion, and hence suggest a quick “fix” for this situation: change the information environment and you change the emotion; change the emotion and you change the regime (eventually). Yet I think emotion in highly authoritarian contexts is a much more complex matter. It is not even clear what “genuine” emotion could possibly mean here.

Consider, to fix ideas, a context where belief, emotion, and action are all aligned. Here, reports of belief (saying “I love so and so” if asked whether you love so and so), displays and signs of emotion (including the appropriate physiological reactions at the mention of so and so’s name), and actions (voting for so and so, giving them gifts, etc.) are all consistent with one another: we do not observe discrepancies between what people say (even to themselves) and what they feel or do. We might say that people in such contexts exhibit pragmatic consistency.

Pragmatic consistency is not always achievable even in settings where the costs of exit are low. We are not necessarily consistent in everything we say, feel, and do, for reasons having to do with everything from fears of social exclusion to an inability to figure out which actions are actually consistent with our beliefs (consider the epistemic difficulties involved in identifying what counts as the “environmentally friendly” thing to do in particular circumstances), or which of our beliefs are actually consistent (if nothing else, computational complexity considerations prevent us from always identifying such inconsistencies). We sometimes even speak of “integrity” when we sense that the achievement of pragmatic consistency is uncommon in some context: the person of integrity is the person who can achieve consistency in belief, emotion, and action, even when such achievement is difficult. Yet the ideal of pragmatic consistency makes it possible to speak meaningfully of “genuine” emotion – emotion that aligns with our beliefs and actions. (By contrast, we tend to understand signs of emotion that do not align sufficiently with beliefs and actions as indicating ersatz emotion).

We constantly strive for pragmatic consistency, sometimes by dubious means: we manage cognitive dissonance by discarding inconvenient beliefs, avoid information that might threaten cherished values or that increases our anxiety, rationalize our choices in various ways, regret actions that are too obviously inconsistent with what we tell ourselves or our loved ones, etc. This is complicated by the fact that we appear to have deeply rooted biases towards interpreting the status quo as just, and that these “system justification” motivations may conflict with “ego justification” (self-image) and “group justification” (group identity) motivations. In any case, the greater the dissonances to be managed, and the greater the costs of exiting a context, the harder the achievement of pragmatic consistency, and the less meaningful talk of genuine emotion becomes.

States like North Korea induce enormous cognitive and emotional dissonances, despite their large degree of control over the information environment: they claim that there is “nothing to envy” and that the nation is “most prosperous” while offering hunger and decaying infrastructure; they claim that the leader loves you while threatening the most horrendous punishment if you fail to obey the slightest arbitrary rule; they tell you to be proud of the nation while constantly discouraging all real comparisons; they blame all bad outcomes on outsiders, and all good outcomes on insiders; they proclaim freedom while restricting it in myriad ways, and so on. (In fairness, such claims are not only made in authoritarian states; but the dissonances are more obvious there). Achieving pragmatic consistency under circumstances that involve high exit costs and credible threats of punishment for failing to say, feel, or do particular things is very hard; it is hardly surprising that those who merely say what they think in such contexts often appear as heroes of integrity – the Havels and Solzhenitsyns of Soviet times, for example.

Managing these cognitive and emotional dissonances sometimes requires ignoring or reinterpreting inconvenient information (e.g., most people in the GDR were able to watch West German TV, but did not necessarily change their behavior in response to it); blaming the Tsar's ministers rather than the Tsar for bad outcomes; rationalizing the status quo in various ways; and so on. But just as cognitive dissonance can induce belief adjustment in either direction (and hence "providing" North Koreans with more information will not necessarily lead them to revolt), emotional dissonance can induce emotional adjustment in either direction: one can learn to feel the required emotions in order to avoid the anxiety of not feeling the right emotions. (One should not underestimate the human capacity for self-deception). Imagine what not feeling the approved emotions might entail in the North Korean case: negatively evaluating one's own country; feeling ashamed of it; feeling duped; feeling betrayed; feeling despair at the magnitude of the errors committed in the past; feeling unable to have pride in the achievements of one's community. Some people are capable of living with such feelings without falling into deep depression; most people, I suspect, compensate with aggressively chauvinistic nationalism and other strategies. (“Sour grapes,” for example).

But, precisely because such emotions are formed under a distinct kind of pressure, they cannot be easily interpreted as a guide to what might happen when conditions change – when exit costs are lowered, or collective action suddenly becomes possible, and so on. Those who cried the loudest and most “genuinely” at the death of the leader are not necessarily those who are most likely to defend the regime if conditions were to change; there is in fact surprisingly little evidence that the people who are most “emotionally invested” are always the most likely to defend a regime in times of crisis. (Defenders are typically found among those who have obvious material stakes in the regime, or who clearly stand to lose status). In other words, the crying of thousands is not a meaningful guide to what the people of North Korea would say, feel, or do under conditions more conducive to pragmatic consistency. 

(Happy new year everyone!)

[Update 2 January 2012: added "in times of crisis" to the last paragraph, the bit about the good Tsar to the next to last paragraph, and fixed some grammatical problems]

Tuesday, December 20, 2011

Endnotes: Solstice Edition

It's the summer solstice here in New Zealand, a day which always seems full of meaning: a more suitable end for the year than the astronomically meaningless 31st. Perhaps because I grew up in Venezuela, where every day is about the same length, I always enjoy the idea of getting to the longest day (or the shortest day, in the winter solstice), and like to mark the occasion; among other things, it feels like a fitting time to take stock and look back on the year.

I (re)started blogging a year and a half ago, mostly as a way to force myself to write while I was on research leave, and I'm grateful for and astonished by the fact that I seem to have acquired a bit of a readership. Some 200 people seem to read this blog regularly via RSS feed, and perhaps 100-200 more read it through various other means. Several of the posts on cults of personality and related phenomena have been picked up by very high-traffic sites, garnering thousands of pageviews, and the unexpected attention pushed me into starting an actual research project on the topic, which will probably consume me for years :). Thanks to the people who have linked to or shared my posts, and thank you, readers!

In the spirit of celebrating the holidays, I give you some links for your holiday reading (or viewing) pleasure:
Some biologically-themed links:
And finally, some beautiful holiday extremophiles for you: haloarchaea turn Lake Eyre in Australia pink:

Wednesday, December 14, 2011

Flattery Inflation

Reading Aloys Winterling’s entertaining revisionist biography of Caligula (which combines my interests in crazy dictatorships and the classical Greco-Roman world – two great tastes that go even better together!), I came across the useful concept of “flattery inflation” (cf. p. 188). Though Winterling is talking about the relationships between the emperors and the senatorial aristocracy in the early Roman Empire, the idea seems more broadly useful to anyone interested in understanding the development of cults of personality and other forms of status recognition gone haywire.

First, the context. From Augustus (Caligula’s great-grandfather, the first emperor) onward, the emperor was the most powerful person in Rome, partly due to his control of the Praetorian Guard, and partly due to the economic resources the imperial household had come to control. At the same time, the emperor depended (at least early on) on the senatorial aristocracy to rule the empire. In more technical terms, the roughly 600-member senate constituted the emperor’s selectorate, the group from which the emperor needed to draw the people who could command the legions, coordinate the taxation of the provinces, and in general govern the empire and keep him in power. The emperor could differentially favour members of the senatorial aristocracy (by promoting them to various high-status positions), but segments of the aristocracy could also conspire against him and potentially overthrow him, selecting a different emperor, especially since principles of hereditary succession were never clearly institutionalized (though emperors early on had wide latitude in selecting their own successors). Nevertheless, though senators as a group might dislike a particular emperor, they did not necessarily agree on any given alternative (much less on any alternative acceptable to the Praetorian Guard, which also had some say in the matter), and at any rate individual senators could always benefit from convincing the emperor that some other senators were conspiring to unseat him (via maiestas [treason] trials, in which the convicted were executed and their property confiscated – something which incidentally provided an incentive for accused senators to commit suicide before their trial, so that their families could keep their property). Senators thus faced some coordination costs in acting against even a hated emperor. These obstacles were not insurmountable (conspiracies did take place, and sometimes succeeded), but they were not insignificant either.

So far, so good: nothing too different here from any number of autocracies in the ancient world (and many modern ones as well). Yet there is one thing that makes this strategic situation interesting: despite the huge disparity in military and economic resources between the emperor and the members of the aristocracy, emperors and senators did not at first have widely different social statuses, and the senate remained the central locus for the distribution of honours in Roman society. Senators jockeyed over relative status (marked by such things as the seating order in the circus or the theatre, the order of voting in the senate, the lavishness of their hospitality in their private parties, the achievement of political office, the number of their clients, etc.) while recognizing the primacy of the emperor, but they remained notional social equals. Augustus was known as the princeps, literally the “first citizen” (hence the early Roman Empire is normally called the “principate”); the standard republican offices were filled more or less normally and retained their meaning as markers of status (though elections were basically rigged, when they were held at all, to produce the results decided in advance by the emperor); the senate voted triumphs and special festivals in honour of particular people and events, and technically confirmed the emperor’s own position; even the title imperator originally meant nothing more than military commander (though it came to be applied exclusively to the princeps or certain members of his family). Most importantly for our purposes, the first two emperors (and many later ones as well) did not (and could not, for reasons that should become clear shortly) compel the sorts of marks of obeisance typical of Hellenistic monarchies, where the “status distance” between the rulers and the members of the traditional elite had been much larger than in Rome: proskynesis (prostration), kissing the feet or the robe, worship as a god, elaborate forms of address, clear hereditary succession, etc. (Incidentally, in these monarchies, as in the later Roman empire, the immediate “key supporters” of the ruler tended to be assimilated to or incorporated into the ruler’s “household,” which limited the extent to which they could gain status at his expense: though the ruler might treat you like family, you could always be seen as his “slave”).

In fact, Augustus in particular went out of his way not to signal any sort of intention to become a “king,” that is, a ruler like the Hellenistic monarchs of an earlier time (including, most famously, Alexander the Great), despite the fact that the Roman polity had obviously become a “monarchy” in all but name, something that was common knowledge among all members of the elite. He lived in a relatively small house on the Palatine hill; stood for office in the normal way, and sometimes resigned it; and let the senate appear to conduct the business of the republic, cleverly signalling his intentions so that senators could reach the “right” result (i.e., the result Augustus wanted). Why?

Part of the answer has to do with the fact that signalling any intention to become a king was thought to invite near-certain conspiracy. This was, after all, what happened to Julius Caesar (Augustus’ adoptive father). By behaving in ways that signalled an intention to become a king in the Hellenistic sense (whether or not he actually wanted to do so), he threatened to destroy the foundations of senatorial status in the Republic, i.e., to drastically humiliate the senators vis-à-vis the ruler. The Republic was built on norms that rejected kingship and competitively allocated relatively “equal” high social status among the senatorial class, so that any credible signal of an intention to re-establish kingship seems to have greatly lowered the coordination costs for dissatisfied senators conspiring against the ruler.

So how do we get from Augustus to Caligula, who attempted (among other things) to widen enormously the social distance between himself and the senatorial elite, especially in the last year of his reign, when a full-blown emperor cult – a cult of personality – was instituted? More generally, how do we get to the empire of 100-150 years later, which was not too different from the hereditary Hellenistic monarchies that had seemed abhorrent to the senatorial aristocracy of a few generations earlier, and which included proskynesis, emperor cults, etc.?

Here is where the idea of flattery inflation comes in. The process is grounded in the “disequilibrium” between material resources (military and economic, in particular) and social status noted above. The emperor controlled more material resources than any given senator, but his social status was not fully commensurate with his resources. Senators as a group liked this situation. But individual senators could benefit (both materially and in status terms) from credibly signalling special loyalty to the emperor. Such signalling could take two forms, which I’ll call “negative” and “positive.” The negative form consisted of informing on each other. The disadvantage of such negative signalling (for the emperor), however, was that denunciations also increased the risk of actual conspiracies and devastated the elite on which he relied. The positive form consisted of what we normally call “flattery.” The problem here was that any particular form of flattery quickly became devalued, and the emperor lost the ability to distinguish genuine supporters from non-supporters. Moreover, flattery inflation tended to diminish the collective social status of the senatorial aristocracy: the more the emperor was praised, the more the senators were abased. For example, in Roman elite society the morning salutatio was an important indicator of status: friends and clients visited their friends and patrons in the mornings, and the more visitors a senator had, the higher his status. But nobody could afford not to visit the emperor every morning, or to signal that they weren’t really “friends” with the emperor. So the morning salutatio at the emperor’s residence turned into a crush of hundreds of senators, all of them jostling to get a little bit of the emperor’s attention, and all of them pretending to be the emperor’s “friends,” regardless of their private feelings. Similarly with senate votes granting honours to the emperor. In principle, the senate retained some discretion in the matter, but individual senators could always sponsor extraordinarily sycophantic resolutions in the hopes of gaining something from the emperor (offices, marriages, etc.), and other senators could not afford not to vote for such resolutions.
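
A minimal toy sketch may help fix the incentive structure just described (this is my own illustration, not Winterling's model or anything from the book; the payoff function and numbers are invented). The assumed payoff rewards a senator only for flattering a bit more than the prevailing norm, while the cost of an inflated norm, the loss of collective senatorial status, is shared by everyone, so unilateral restraint never pays and the norm ratchets upward:

```python
# Toy sketch (purely illustrative, not from the sources discussed here) of
# the flattery dynamic: relative imperial favour rewards exceeding the
# prevailing norm of praise, while the cost of an inflated norm (lost
# collective senatorial status) is shared equally by all senators.

STEP = 0.1  # how much a senator outbids the prevailing norm (arbitrary)

def payoff(own_flattery, norm):
    favour = own_flattery - norm   # relative gain from exceeding the norm
    shared_cost = -norm            # collective status loss, borne by all
    return favour + shared_cost

norm = 1.0  # arbitrary starting level of praise
for round_number in range(1, 11):
    # Unilateral restraint is dominated: flattering below the norm forfeits
    # favour while the shared cost stays the same...
    assert payoff(norm + STEP, norm) > payoff(norm, norm) > payoff(norm - STEP, norm)
    # ...so each senator outbids the norm, and because everyone reasons the
    # same way, the norm itself ratchets up while collective status falls.
    norm += STEP
    print(f"round {round_number:2d}: flattery norm {norm:.1f}, "
          f"collective senatorial status {-norm:.1f}")
```

Note that because everyone ends up at the same inflated norm, the level of flattery carries no information about who is actually loyal, which is exactly the emperor's problem with positive signalling.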

In sum, flattery inflation was, from the point of view of the senators, a kind of tragedy of the commons: as each senator tried to further his own relative social status within the aristocracy, the senators as a group tended to devalue their collective status. And it was not necessarily a good thing from the point of view of the emperors either, who could not easily distinguish sycophantic liars and schemers from genuine supporters, and who often disliked the flattery. So the emperors tried to dampen it or manage it to their advantage. Winterling distinguishes three different responses.

First, as noted earlier, Augustus managed flattery inflation through ostentatious humility. Everybody could then pretend that things remained the same even though they all knew that Augustus was ultimately in charge. But this required indirectly signalling his intentions so that senators had enough guidance to know what to vote for and whom to denounce without being ordered to do anything (which would have resulted in a catastrophic loss of status for the senators, potentially risking a conspiracy). Such indirection could lead to confusion when practiced by a less able political operator, like Tiberius. Tiberius apparently detested flattery, but, unlike Augustus, he was unable to communicate his intentions clearly to the senate. His inability to master the complex signalling language that Augustus had used prevented him from containing flattery inflation very well, leading him to use increasingly blunt instruments to tame it (like moving to Capri permanently and banning the senate from declaring certain honours: roughly the equivalent of price controls in "economic" inflation, and just about as effective). This provided endless opportunities for denunciations, since senators were constantly making “mistakes” about what Tiberius really wanted. The more denunciations, moreover, the less actual conspirators had to lose, leading to a poisoned and dangerous atmosphere, especially as factions of Tiberius’ family schemed over the succession. Most potential heirs didn't live long; Caligula was the last man standing.

At first, Caligula tried the Augustan policy, and was reasonably good at it. But for a number of reasons that Winterling describes, he seems to have changed tack in the third year of his reign to deliberately encourage flattery hyperinflation. He did this, in part, by taking the senators literally: when they said that he was like a god, he basically demanded proof of this, thus forcing them to worship him as a god. Or when he was invited to dinner, he forced senators to ruin themselves to please him. And he demonstrated contempt for their status by the way he behaved in the circus and elsewhere. (The famous story of how he planned to make his horse a consul can be understood as one such insult). Yet the senators could not retaliate by revealing their true feelings; their coordination costs had increased insofar as their individual incentives were always to flatter Caligula.

Strategically speaking, the point of this seems to have been to lessen his dependence on the senatorial aristocracy and to move the regime towards a Hellenistic model. (Winterling discusses some suggestive evidence that Caligula might have been planning to move to Alexandria, an obviously symbolic move to the historic capital of Hellenistic dynasts). Runaway flattery inflation not only makes it exceedingly difficult for conspirators to succeed (even the most innocuous comment can be used against you when flattery inflation is in full swing) but also completely humiliates the flatterers (in this case the senatorial aristocracy) and lowers their collective social status vis-à-vis the ruler. If flattery hyperinflation is not stopped, the end result is that the ruler no longer has to use "ambiguous" language to manage his relationship to the selectorate. He can just order them to do things, without worrying about slighting their status. One might also speculate that it helps to institutionalize the principle of hereditary succession, which was not clearly established in the early empire, and which would contribute to a shift in the selectorate from the aristocracy to the imperial household. (It does not seem to be coincidental that cults of personality in the modern world appear to be associated with forms of hereditary succession even in regimes that are not in principle hereditary, like North Korea or Syria). But of course flattery hyperinflation doesn't always work for the ruler: the humiliation of the aristocracy eventually led to the downfall of Caligula, and (according to Winterling) contributed to his characterization by later writers as the "mad emperor."

Anyway, I think one can extract a more general model of flattery inflation from all this. When material resources are much more unequally distributed than status, and status is competitively allocated, flattery inflation can result. But rulers (or those who control material resources) will usually try to dampen or manage this kind of inflation, since flattery has obvious disadvantages from their perspective. Yet there seem to be circumstances under which they will try to encourage flattery hyperinflation, e.g., when the costs of coordination for challengers are relatively low and the maintenance of "low inflation" requires extensive communication management. One could also imagine other ways in which this process might play out. For example, if status is more unequally allocated than material resources, high-status rulers may encourage flattery (hyper)inflation (e.g., cults of personality) in order to accumulate these resources. (This seems to have happened in the Soviet system under Stalin and in North Korea). And if material resources become more equally distributed, or more diverse in their effects, as in many modern economies, one might see flattery deflation.
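
For what it's worth, the comparative statics of that paragraph can be written down as a rough sketch. Everything here is my own stylization: the 0-1 "inequality" scores, the thresholds, and the example values are invented for illustration, not measured or taken from Winterling.

```python
# Rough sketch of the qualitative predictions above. The scores and
# thresholds are invented; they only encode the stated comparative statics.

def flattery_tendency(resource_inequality, status_inequality,
                      status_is_competitive=True):
    """Stylized inputs: how concentrated material resources and social
    status are around the ruler, each on a 0-1 scale."""
    if not status_is_competitive:
        return "little flattery pressure (status not competitively allocated)"
    gap = resource_inequality - status_inequality
    if gap > 0.3:
        return "flattery inflation (the ruler's resources outrun his status)"
    if gap < -0.3:
        return "ruler may push (hyper)inflation to convert status into resources"
    return "roughly stable; expect flattery deflation if resources equalize further"

# Stylized examples echoing the cases mentioned in the post:
print("early principate:", flattery_tendency(0.9, 0.4))
print("Stalin-era Soviet Union:", flattery_tendency(0.5, 0.9))
print("diversified modern economy:", flattery_tendency(0.3, 0.3))
```

The point is only to make the direction of the predicted effects explicit, not to suggest that these quantities are actually measurable in this form.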

[Update 15/12/2011 - added the bit about Tiberius moving to Capri, clarified a transition]

[Update 17/12/2011 - fixed some minor typos.]