Editorial statement: Lessons from Goodhart’s law for the management of the journal

In this editorial statement we summarise some of the discussions we have had in recent months regarding the risks associated with the use of indicators for the measurement of research outputs, and how these risks should affect the management of the European Journal of Government and Economics. In particular, we focus on the consequences of the so-called Goodhart's law, which states that when a measure becomes a target, it ceases to be a good measure. We also explain the latest developments in the journal in the light of our previous editorial statements, and present our strategy for the upcoming years.


Introduction
In the past year, our journal has continued its expansion along the lines set out in our last editorial statement of December 2013. Since then, we have expanded our journal's base by including new authors from diverse disciplines and geographical locations. This year we have published papers from authors affiliated with institutions in Sweden, Italy, Turkey, Germany, New Caledonia, and Spain.
We have also been accepted by one new index in this period, namely Latindex, which is probably the most prominent index for journals based in Spain, Portugal and Latin America. Some Spanish universities place Latindex on a par with Scopus in their bureaucratic assessment methods for open competitions. Joining Latindex has also meant indexing by the ISOC database of the Spanish National Research Council (CSIC), which is the largest public research institution in Spain and the third largest in Europe. We have no news from Scopus yet, but we expect to make progress in the coming year.
Another index we expect to join soon is EconLit, an abstracting database service published by the American Economic Association and dating back to 1969. The service focuses on literature in the field of economics and uses JEL classification codes to classify papers by subject. We have been using this classification system since the start of the journal, and now that we are about to enter our fourth year it is time to apply for inclusion in this database.
In spite of the relative youth of the journal, many of our papers have started to be cited in other journals and working papers. Some papers are in fact attracting a considerable number of citations, such as a paper on corruption and growth with evidence from the Italian regions (Fiorino et al., 2012) or one on cyclical synchronisation in the EMU during the financial crisis (Cancelo, 2012). Many others show great potential impact.
Indexing by certain prestigious databases is a form of recognition for a journal, and it can even affect the number of citations received by the papers it publishes (Varela, 2012). However, neither joining a given index nor being cited should be seen as an end in itself, but rather as an indicator of quality in academic performance. We shall explain why.

The risks of Goodhart's law
The problem arises when an indicator is used by policy makers to try to influence the performance of their institutions or academics. Quite often, university funding authorities try to base their decisions on some objective measure of research productivity, and it is not uncommon to see that this productivity is measured by some rough indicator, such as the number of publications in journals belonging to a given index or with a certain impact factor. University authorities, in their selection and promotion decisions, also make use of such objective indicators.
If publishing in a given journal or having a lot of citations affects a department's funding or academic promotion opportunities, indicators become ends in themselves, thereby creating a set of incentives to "trick" the system, for example by concentrating publications in the easiest journal of a given index or boosting the number of self-citations. This may eventually break the original relationship between the indicator and the quality it was intended to measure, an instance of Goodhart's law. Although originally applied to monetary policy (see Goodhart, 1975), this is now a well-established regularity in many fields of economic policy that can be summarised by saying that when a measure becomes a target, it ceases to be a good measure. As a consequence, policy makers are forced to change their indicators periodically.
The problem of Goodhart's law for research assessment is especially serious in centralised bureaucratic systems that, on the one hand, rely heavily on such objective measures of performance and, on the other hand, are very slow to react to attempts to trick the system. It is much easier for an autonomous selection panel to detect that a candidate has concentrated his/her publications in lesser-quality journals belonging to a given index than for the minister of education who approves performance indicators every few years. But even in centralised bureaucratic systems indicators are updated from time to time, and it is not uncommon for some of those who play by their rules to feel that such updates are an unfair change of the rules in the middle of the game. That is not necessarily the case; others might argue that those who complain were simply too slow to trick the system.

Lessons for the management of the journal
The main implication of Goodhart's law for the management of the journal is that it is risky to rely on a set of politically sanctioned indicators such as being indexed by a given service or having a lot of citations. The key to reducing this risk is to treat such indicators as imperfect measures of quality and not as ends in themselves.
Does this mean that indicators are not important? Not at all. It just means that efforts should not be blindly directed to joining a given index or boosting a journal's impact factor, but to increasing the underlying quality of the journal, which will bring about indexing and citations as a side effect. Focusing on underlying quality implies an effort to improve what current journal quality indicators are really trying to measure, without trying to influence the indicators directly. This requires monitoring indicators regularly; but since they vary across institutions and countries, and over time, we should look at a wide array of them instead of focusing on the ones prevailing in a particular institutional and temporal setting.
We believe that such a policy is much safer in the long run: we cannot know for sure what the prevailing indicators will be in the future, but whatever they are, they should be positively correlated with the underlying quality they aim to measure. If we focus on underlying quality based on a wide array of different indicators (not just the ones prevailing in a particular institutional setting), we shall be better insured against changes in the indicators adopted by policy makers.
In order to continue improving the journal's quality, it is important to continue benefiting from the cooperation of the members of the editorial board, the authors and the reviewers, who represent our strongest asset. But if we are to continue growing, it is also important to strengthen the journal's infrastructure. This will require improving its institutional support and funding, which will make it possible to continue improving the journal's technological and human resources. One way to achieve this would be by partnering with a commercial publisher, as we have suggested before, but that is not the only option. An interesting avenue to explore would involve pooling resources from different institutions through some sort of formal cooperation agreement that would allow us to share the burdens (mostly in terms of human resources) and the benefits of the journal (mostly in terms of visibility). This will be one of the challenges for the upcoming years.