Small Year-to-Year Changes in CPI Scores Are Meaningless. Small Year-to-Year Changes in CPI Scores Are Meaningless. Small Year-to-Year Changes in CPI Scores Are Meaningless

Last month, Transparency International (TI) released the latest version of its Corruption Perceptions Index (CPI)–an index that I continue to believe is useful and important, and that I regularly defend against the blunderbuss critiques sometimes leveled by a few of my colleagues in the academy. Yet every year when the CPI comes out, we see a spate of articles and press releases that focus on individual countries’ score changes from one year to the next. (For some examples from this year, see here, here, here, here, and here.) TI contributes to this: Despite the qualifications and cautions one can find by searching TI’s web site diligently enough, TI’s lead press release and main CPI report inevitably play up these changes, connecting them to whatever larger narrative TI hopes to convey. This year was no exception. This time around, the press release emphasizes that “(f)our G7 countries score[d] lower than last year: Canada (-4), France (-3), the UK (-3) and the US (-2). Germany and Japan have seen no improvement, while Italy gained one point”–and TI treats this as evidence for the assertion, in the title of the press release, that the “2019 Corruption Perceptions Index shows anti-corruption efforts stagnating in G7 countries.”

Sigh. I feel like I have to do this every year, but I’ll keep doing it until the message sinks in. Repeat after me:

  • Small year-to-year changes in an individual country’s CPI score are meaningless.
  • Small year-to-year changes in an individual country’s CPI score are meaningless.
  • Small year-to-year changes in an individual country’s CPI score are meaningless.
  • Small year-to-year changes in an individual country’s CPI score are meaningless.
  • Small year-to-year changes in an individual country’s CPI score are meaningless.
  • Even big changes in an individual country’s CPI score may well be meaningless, given the fact that, in a collection of 180 countries, random noise will sometimes produce unusually large changes in a handful of countries (for the same reason that if you flip a set of five coins 180 times, odds are a few of those times you’ll get five heads or five tails).
  • Because year-to-year changes in an individual country’s CPI score are usually meaningless, they are not newsworthy, nor can they be invoked to make substantive claims about corruption’s causes or consequences, or the success or failure of different countries’ anticorruption policies.
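The coin-flip arithmetic above is easy to verify. A few lines of Python (purely illustrative; nothing here uses TI’s actual data or methodology) show why, among 180 countries, a handful of large random swings are to be expected rather than surprising:

```python
# Probability that 5 fair coins all land the same way (5 heads or 5 tails)
p_extreme = 2 * (0.5 ** 5)          # 2/32 = 1/16 = 0.0625

# Expected number of "all same" outcomes across 180 independent sets of flips
n_countries = 180
expected = n_countries * p_extreme   # about 11 of the 180 sets

# Probability of seeing at least one such extreme outcome among the 180 sets
p_at_least_one = 1 - (1 - p_extreme) ** n_countries  # essentially certain

print(f"P(all heads or all tails) per set: {p_extreme:.4f}")
print(f"Expected count across 180 sets:    {expected:.2f}")
print(f"P(at least one across 180 sets):   {p_at_least_one:.6f}")
```

In other words, even if every country’s “true” level of corruption were unchanged, pure noise would still hand journalists roughly a dozen eye-catching movers each year.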

I don’t want to repeat everything I’ve written before explaining why this is so; I explained this at length in my post last year, after the 2018 CPI came out. (That post, in turn, relied on my prior writing on this topic: See here, here, here, here, here, and here.) I’ve kind of given up hope that TI will actually modify the way it talks about within-country year-to-year CPI score changes in its press releases. I know enough people at TI (great people, I should add) who are aware of what I (and plenty of others) have had to say on this topic that I can only assume that the failure to change is a deliberate decision on the part of TI’s leadership and communications team. I strongly suspect that the serious researchers at TI who work on the CPI are slightly embarrassed by how the index is framed by the organization for public and media consumption, but there’s nothing they can do about it. Despite the apparent futility of my prior efforts, I’ll keep harping on this, in the vain hope that the message will gradually trickle out.

7 thoughts on “Small Year-to-Year Changes in CPI Scores Are Meaningless. Small Year-to-Year Changes in CPI Scores Are Meaningless. Small Year-to-Year Changes in CPI Scores Are Meaningless”

  1. Matthew, I see your perspective here. As an academic, you feel professionally obligated to point out the superficiality of these changes in the data, both on principle and because you and others may reasonably worry that people are putting too much stock in these changes. And you’re not wrong, at least in the technical sense, so it’s good you’re offering that critique to round out people’s perspective.

    But, speaking as someone whose work is steeped in advocacy efforts, I also feel like the critique either misses or underappreciates a critical reality in the advocacy space that, for better or for worse, has little to do with academic accuracy. That reality is, in a nutshell: nuance usually isn’t news either, and sometimes leading with nuance means your target audience misses your overall message, or worse, someone else co-opts your work and uses a better message against you.

    I can’t speak for TI directly; I appreciate their work and believe it largely has credibility from a technical expertise standpoint (though with highs and lows, just as our work has as well). But, I know they are an advocacy organization, pursuing advocacy objectives, using advocacy strategies. That’s the field I’ve worked in for 20 years.

    In that time, I’ve seen very little that fundamentally undermines Saul Alinsky’s (seen as a thought leader on social change organizing) maxim that (paraphrasing widely) “a leader may…weigh the merits and demerits of a situation which is 52 percent positive and 48 percent negative, but once the decision is reached he must assume that his cause is 100 percent positive and the opposition 100 percent negative.”

    An advocate/organizer’s fundamental job is to build a network of people committed to some form of social change. Alinsky’s point is that you don’t often inspire people by rallying them around a 52% solution; you don’t move a decision maker by asking for 52% of a policy change. You often need to magnify that small margin into something larger to galvanize people to act.

    You have to do this with integrity and with a strong strategic sense of when it’s needed and what the trade-offs are. You don’t lie or manipulate. But, I would argue you don’t apologize for using effective communication techniques that will move people from indecision to action, especially if the need for that action is firmly rooted in a deeper, consistent truth (or in this case, data and analysis). Why?

    First, because arguably people aren’t moved to act by data alone. They are moved to act by narratives that make the problem you are seeking to solve relatable, urgent, and solvable, and that somehow include their agency in the solution.

    Second, you don’t apologize because if you don’t do this, someone else will. For example, speaking in broad generalities, you could chalk up our failure to address the climate crisis as a partial result of a consistent over-reliance by academics, advocates, and others on facts, accuracy, and nuance – which sometimes came at the expense of other efforts to create a way for people to be emotionally invested in solving this problem, and in the solutions we’d use to do so. Meanwhile, the forces opposed to action on climate – in addition to their economic clout – mastered this latter strategy and used it well to block real action.

    Where would we be if, long before, scientists had decided, against their profession’s convention of hedging and qualifying to convey nuance, to raise the alarm much louder, much clearer, much sooner? That’s not to put the climate crisis on them (that blame lies squarely with the fossil fuel industry) but only to make the broader point that being ‘right’ is not on its own a winning strategy.

    So when I see TI (or another group doing something similar) magnifying small differences in their analysis, I don’t see a group ignorantly touting social math either to blow their own horn, or to seek superficial media content (though that certainly can and does happen in the advocacy space).

    I see an advocacy group grappling with the perennial dilemma that (stealing from McLuhan) ‘the medium is the message’ – doing their best to remain faithful to the underlying truth their whole body of work represents, while acknowledging that a deep and accurate report that never gets read isn’t much use from an advocacy perspective.

    Perhaps this is something you’ve grappled with and it’s not reflected here. Or, perhaps you believe that in this case the negatives of amplifying these small changes in the index greatly outweigh any positive benefit. I’d be interested in your thoughts on either point.

    • The general issue you raise is indeed one that I’ve grappled with in the past, and continue to struggle with. See here:

      More on the Tension between Analysis and Advocacy for Anticorruption Academics

      Assessing Corruption: Do We Need a Number?

      The Role of Academics in Anticorruption: Some Tensions

      On this particular issue, though, I do indeed believe that the negatives of amplifying these small changes in the index far outweigh any positive benefit, even when one takes into account the fact that, as you correctly point out, the advocacy context is quite different from an academic research context.

      For starters, I’m not at all convinced that, in terms of shaping the larger narrative, it’s all that helpful for advocates to fixate on these relatively small up-and-down changes. There are all sorts of ways to use this data–including cross-sectional variation and longer-term trends–to advance a compelling narrative.

      For another thing, one of the most pernicious effects of focusing on such changes (from the perspective of effective advocacy) is that it can get countries to articulate their measure of anticorruption success in terms of achieving particular CPI score/rank changes. But the stickiness of the index means big moves are unlikely, and sometimes effective anticorruption efforts can actually lower a country’s score in the short term (because exposing corruption can worsen perceptions of it). This can produce cynicism, fatalism, premature declarations of defeat, etc. And sometimes this fixation on small recent changes in CPI scores can lead to advocacy conclusions that are flat-out wrong or backwards.

      And finally, call me old-fashioned, but I tend to think that in the long term, rigorous evidence-based advocacy will be more effective than seizing on whatever superficial-but-meaningless blip can be used to support a narrative.

  2. Thank you, Matthew, for this reminder. The opinion piece I wrote for our local paper (El Norte/ Grupo Reforma: https://refor.ma/bbS9a) essentially warns against taking the statistically insignificant one-point increase in Mexico’s CPI as proof of success. I then presented three possible explanations for the lack of results: potential hypocrisy in this anti-corruption administration; ineffective sincerity in the face of the complexity of corruption; and the statistic itself. The last is a geeky exploration of the changes in the underlying sources, ending in a warning that the combination of lower bribery incidence for firms and deterioration in the (perception of) rule of law may signal a re-centralization of corruption in Mexico. This nuanced view was applauded–privately–by my anti-corruption friends and received no additional attention, which should not surprise Mark Hays.

    Our current president, known as AMLO, and his supporters seized on the fact that Mexico has stopped falling on the CPI and, thanks to that insignificant one-point improvement, passed those countries still tied at 28 to jump eight spots in the ranks. The message was repeated and will certainly be touted continuously until next year as “proof” of the sincerity of AMLO’s anti-corruption rhetoric. Their approach is much more effective.

    According to TI, only two countries’ scores experienced a statistically significant change from 2018 to 2019: Angola (up 7 points) and Nicaragua (down 3). To find a statistically significant change for Canada, it is necessary to compare 2019 to 2015 (down six points) or to 2012 (down seven). And despite falling below 70 this year, the CPI for the United States is not significantly different from any year going back to 2012, when the current methodology was introduced. (Is that why the Trump administration pays no attention to it?)

    As Mark Points out, the CPI is not meant to measure “corruption”–as if it were a single item–with perfect accuracy, but rather to promote anti-corruption efforts by shining a light on the problem. It takes into account a variety of sources that refer to some of the different ways that corruption manifests itself, although most of these sources are expert surveys. The current methodology does allow year-to-year comparisons and controls for some of the “noise” by keeping the same sources. Any remaining noise is most likely the result of corruption scandals or other short-term media coverage.

    Over the past several years, I’ve stretched my academic identity by working with civil society organizations. (Full disclosure: I’m also on the CPI Advisory Group.) They appreciate the subtlety of my insights, but the final public messages are always black or white. They understand, far better than I, how to make a point that will get the public’s attention.

  3. It does get frustrating, and you are not alone. (Applies to the way the WGIs are interpreted also.) I agree with your comment that it is not just a matter of insisting on perfection. And one can support the advocacy of the importance of the issue while questioning the unfounded statements about lack of progress here or there. But to say there is no progress when there is no evidence of that can undermine and demotivate sincere reform efforts.
    Just a few other thoughts:
    – At some point I subscribed to an email newsletter by Harvard’s Shorenstein Center on Media, Politics and Public Policy in which they make research findings known and accessible to journalists. Since it is usually journalists who repeat these messages, and they generally don’t have the time or inclination to look into the details, perhaps you could get a link to your blog into their newsletter…
    – Even the criticisms often, it seems to me, miss the main reason to downplay the big composite indices: there are so many holes in the data itself. When I wrote a paper on governance indicators in EAP, I showed that for the WGI-CC index, 70% of the values were missing and had to be imputed. (That was in 2009. I’m in the process of updating that now…)
    – When discussing the CPI, it is worth highlighting that TI’s Global Corruption Barometer is a bona fide survey and brings new information whenever it is released. And similarly for much of the other work they do: there is some interesting and helpful stuff…

  4. As a fellow transparency campaigner, I fully agree with Mark Hays. There is no point producing the CPI and then putting out a press release stating that “there hasn’t been any significant, meaningful change”. Arguably, there is no point putting out the CPI except to kick-start hundreds of discussions about corruption and anti-corruption efforts around the world. It’s just a conversation starter.

    I’ve done work for TI, and have done academic work. A lot of the research TI puts out is academically top notch. The CPI is in a different category – it’s a genius PR tool.

    If we want to constructively criticise the CPI, a more fruitful avenue is that taken by Oliver Bullough (“Moneyland”), who asks why countries where billions get stolen are rated as highly corrupt, while countries where those same billions are then laundered (think London and New York real estate markets) are rated as clean.
