The Debate on How to Measure “Openness”

9 November 2012 – In a public comment issued today, Helen Darbishire, Executive Director of Access Info Europe, and Toby Mendel, Executive Director of the Centre for Law and Democracy, respond to criticisms of the RTI Rating in the paper Measuring Openness: A Survey of Transparency Ratings and the Prospects for a Global Index by Sheila Coronel, Director of the Stabile Center for Investigative Journalism and Professor at the Graduate School of Journalism, Columbia University.

The authors of the comment also raise concerns about the various global indices of “transparency” and “openness” and call for a more rigorous discussion about how to measure levels of access to information in practice.

Update 12 November 2012 – A response from Professor Coronel is posted below the comment. Access Info welcomes this debate and invites other readers of our website to send us their comments.

The need for clarity on what we are measuring

Sheila Coronel’s paper, Measuring Openness: A Survey of Transparency Ratings and the Prospects for a Global Index, is the first serious piece of research about the systems for assessing government openness which have mushroomed in recent years, alongside a corresponding growth in overall interest in openness. It is useful inasmuch as it provides an overview of what is being done around the world, and also in its analysis of the challenging question of the practicality and utility of a global or super index. It largely misses the mark, however, in its assessment of existing systems, mostly because it compares them to the idea of a super index, rather than against their own objectives and the functional utility they actually provide.

A number of IGOs and NGOs have conducted or are developing systems for measuring the quality of government openness. Coronel’s paper divides these into five categories: rating right to information (RTI) laws; measuring transparency through a governance lens; evaluating RTI practice; assessing supply side interventions; and sector-specific initiatives. This mapping exercise is one of the strong points of the paper, providing those working in the field with an overview of what is going on.

This aspect of the paper could, however, be improved by providing a clearer overview of the various initiatives. As it is, the detail is somewhat buried in the text, leaving the reader to try to extract a broader picture on his or her own. A brief introductory statement at the beginning of each section (as in: “This section looks at the five main …”) or a table would help orient readers.

Another drawback with the paper is that it does not define what is meant by “openness” or “transparency”. As a result, it refers to initiatives which do not actually aspire to measure levels of access to information, such as governance indicators, without providing a clear critical analysis of what they assess or how extensive their assessment of the right to information is, in either law or practice.

The paper could also have made a more rigorous assessment of the content of the various indices which it cites, evaluating and comparing more precisely what they actually assess and how that data is obtained. For example, some of the governance indicators are based on very rough assessments of the legal framework for access to information and on a rather unscientific approach to assessing transparency in practice. The paper also qualifies some indicators as not being based on perceptions when a review of the questions and answers shows that they evidently are, at least in part.

Throwing governance indices into the bag without evaluating their access to information content in any depth adds a layer of confusion to the paper, because it is not clear which indicators might be included in a possible “super index”. The paper could usefully evaluate what national and international research exists in specific areas of advancing transparency: legal framework, infrastructure, proactive disclosure, and reactive release of information.

A key thrust of the paper is to explore what the value of a global or super transparency index might be, and to assess whether such an exercise would be practical (logistically, in terms of funding, in terms of bringing various actors together). As an initial exploration of this issue, the paper makes a useful contribution here, in particular by exploring the various challenges in preparing such an index. It notes, for example, the enormous likely costs of such a venture, the challenges of bringing together sufficient expertise to prepare it, the many different actors, with sometimes divergent or competing interests, which have been involved in existing initiatives, and the difficulty of identifying what exactly it should measure.

It also addresses the utility of such a tool, recognising that it would be useful to donors and also as an advocacy tool, but also noting that it might oversimplify what is ultimately too complex an issue to be reflected in one index. The paper also tackles the question of whether a set of national or regional indices would be more useful than a global index, again noting pros and cons on both sides.

The paper could, however, be criticised for an undue focus on the idea of a super index, which many might deem to be a straw man. This is reflected in one of the headings: “VIII. The Ratings Paradox: Many Measures But No ‘Super Index’”. While catchy, this heading only makes sense if one starts from the assumption that there should be a super index. This becomes clear by analogy (no one would say: “So many universities but no super university”). As a result, we have not so much a paradox as a begging of the question. We do not believe that the idea of a super index is viable and note that indices and ratings in other sectors do not take this approach. A good example is the UNDP Human Development Index, which relies on only three key indicators: life expectancy, education and wealth. Other measures, including macro-economic indicators such as GDP, are based on agreed indicators which are measured by multiple actors rather than as part of one mega monitoring exercise.

One consequence of the focus on a super index is that the piece fails to explore the issue of what gaps are left by the other initiatives, apart from a very brief section at the end, since it works on the assumption that any gaps would be filled by a super index. This is a hugely important subject, which it would have been natural for a paper along these lines to broach.

We submit that a better approach would have been to assess the research gaps and then consider the best way to fill them. In particular, it would be useful to assess whether more extensive and systematic indices along the lines of or extrapolating from those that have already been conducted might largely satisfy the research needs that exist in this area.

Another consequence is the constant theme running through the paper that there are too many indices, as reflected in the claim that the openness community “suffers from a surfeit, rather than a lack, of indices”. This is not supported by proper analysis, for example showing that there was no cost-benefit correlation to a particular body of work or that there is unnecessary overlap and repetition in the indices. In some cases, highly contestable claims are made. For example, the paper states in several places that there is no dearth of comparative assessments of RTI laws. While it is true that several indices do include a few indicators relating to RTI laws as part of a wider basket of openness measurements, so far only the Access Info Europe and Centre for Law and Democracy RTI Rating claims to be a rigorous assessment of legal quality in this area. Prior to the existence of the harmonised standards proposed by the RTI Rating, national groups had no proper framework for comparing their country’s law with that of other countries.

The focus on a super index leads to the most serious shortcoming in the paper, namely its assessment of the value of the existing indices. Thus, indices are repeatedly criticised for not being global in nature or for only measuring certain types of openness, while in other cases indices are praised simply because they measure more features. In very few cases is reference made to the underlying purpose of the index in question, whether this is a useful purpose, or the extent to which the index satisfies the purpose. This is a bit like criticising a Ferrari for not being able to carry a family of four, on the basis that it does not meet all of the needs one might wish for in an automobile.

This shortcoming is clearly evident in relation to the RTI Rating, developed by our two organisations, which comes in for strong criticism because it only assesses legal protection and not implementation. This fact and its implications have been acknowledged from the beginning. What the paper fails to recognise, however, is that as a tool for legal improvement – a matter of not inconsiderable importance to which enormous energies are being directed in countries around the world – the RTI Rating has consistently proven its value. This is reflected, among other things, in the very high demand for it. Ensuring a strong legal framework for access to information is an indispensable step towards increased government transparency in practice in countries around the world. Many civil society organisations are campaigning for a stronger legal framework and the RTI Rating provides valuable comparative arguments to support such campaigns. No doubt many of those responsible for the other indices covered in the paper would make similar claims to the effect that their own indices also serve useful purposes in the various different sectors they address.

Coronel’s paper will hopefully initiate more of a debate among advocates about existing initiatives to measure government openness, something which would undoubtedly prove useful. Despite the criticisms above, the paper provides us with a good starting point for this debate, which we should take advantage of. A natural next step in this process would be to host a workshop bringing together some key players – people who have been involved in developing existing indices, both on transparency and in other areas, academics and civil society groups who have used the indices – to identify gaps in the existing research and look at ways to address them.

Response by Sheila Coronel: The need for further debate

Helen Darbishire and Toby Mendel make interesting points but seem to misunderstand the intent of my paper, Measuring Openness: A Survey of Transparency Ratings and the Prospects for a Global Index.

A bit of background might help. The paper was commissioned by the Open Society Foundations in late 2011 with the specific task of addressing the question: Do we need a Global Right to Information Index? This was a question that OSF staff had been debating among themselves as well as with other right-to-information activists. The study was intended primarily to inform the discussion on that issue within OSF.

The paper’s ambitions were modest and stated clearly on the first page: To survey what’s out there and examine whether a new RTI Index made sense. The critique misreads the paper’s intent. So let me clarify. The study was not intended to:

1) provide a critique of existing indices;
2) compare the relative merits of the existing ratings and to privilege some over others; or
3) criticize the laudable efforts of many who have undertaken to measure transparency, however broadly and vaguely defined.

The paper took a democratic approach, recognizing the work that has been done in this field, including the governance ratings, which may not focus as squarely on the right to information and may therefore be dismissed by transparency advocates. I thought that despite the methodological issues associated with their indices, the governance community’s pioneering efforts to rate government openness deserve acknowledgment.

To accomplish the goal of the paper, I asked representatives of various RTI groups whether a new Global RTI Index would be useful, what constructing a hypothetical Index would entail, and what it should include. A careful reading of the paper would show that I was not comparing existing ratings to the “straw man” of a super index. My paper reflected the opinions of those I interviewed: Some of them saw the usefulness of an index with 600-800 indicators; others envisioned something more modest. I was surveying points of view, not endorsing them.

The last part of the paper suggests ways to move forward by having a conversation on how the gaps in the research can be addressed. We hope the conversation that the paper opened, and which Darbishire and Mendel have taken up, can continue and broaden to include other colleagues.