Marcus Nicol is Director, Research Analysis, in the Research Excellence Branch of the Australian Research Council (ARC). In this role, he is jointly responsible for the planning and delivery of ERA 2015 with Sarah Howard.
Leanne Harvey is Executive General Manager of the Australian Research Council and is responsible for the ARC's research policy, assessment and services.
Aidan Byrne is CEO of the Australian Research Council. He is a leader in establishing and implementing the ARC’s key priorities and deliverables.
The evolution of Australia’s national research assessment exercise
By Marcus Nicol, Leanne Harvey and Aidan Byrne, Australian Research Council
Assessment of the effectiveness of research is becoming increasingly important for a range of reasons, including monitoring the development of research activity, providing incentives for research improvement, and justifying research expenditure.
The Australian Research Council (ARC) is responsible for Excellence in Research for Australia (ERA). This exercise aims to identify and promote excellence across the full spectrum of research activity, including both discovery and applied research.
ERA evaluates research quality within Australia’s higher education institutions using a combination of indicators and expert review by committees comprising experienced, internationally recognized experts. It also provides a national appraisal, by research discipline, of Australia’s research strengths compared with international benchmarks.
Assessment differences between Australia and the UK
The ERA differs in a number of significant ways from the UK Research Excellence Framework (REF). While the REF assesses only the “best four” research outputs for each academic, ERA assesses all eligible research outputs for each eligible academic. In other words, ERA is a comprehensive collection and assessment; the REF is selective.
Both exercises are critically dependent on peer review processes. The REF, however, relies solely on traditional peer review of outputs in every discipline; that process is central to its assessment in all cases. By contrast, expert assessment in ERA is always informed by indicators of research quality, application and recognition. The metrics used to inform assessments depend on the discipline and include such variables as external funding, publications, competitive awards and patents.
The ERA uses additional, specific, peer review of outputs as a principal indicator of quality in some disciplines, whereas in others, particularly in the science disciplines, citation analysis of papers provides critical information. In other words, while the REF relies substantially on peer review processes, ERA relies on assessment by peer review informed by metrics.
ERA modifications over time
The ARC has conducted three full rounds of ERA, in 2010, 2012 and 2015. Between 2010 and 2012, it modified the ERA process, following consultation with members of the higher education research sector, the Research Evaluation Committee (REC), and the public.
In ERA 2010, institutions could assign a journal article only to Field of Research (FoR) codes specified by the ERA Journal List. Developed in consultation with disciplinary experts, this list maps each journal to no more than three FoR codes. For ERA 2012, the ARC introduced an exception to this rule, in response to feedback that researchers in enabling disciplines, such as statistics, do not necessarily publish work in journals specific to those disciplines. Instead, they may choose to publish in journals coded to other fields.
This dashboard from the SciVal Overview module reports on Australia’s Medical and Health Sciences discipline. The module provides an easy-to-use dashboard to analyze the research performance of any entity, from a researcher to a university to a country. Other modules within SciVal enable deeper reporting on specific metrics (e.g., number and proportion of articles in the world’s top 1%, 5%, 10%, etc.) and direct benchmarking of those metrics across entities (such as universities or countries). SciVal supports multiple subject classification systems, including Australia and New Zealand’s Fields of Research (FoR). Source: SciVal. Scopus data snapshot from October 5, 2015.
A holistic approach
ERA is a holistic evaluation of research quality. The ERA indicator suite for each FoR is presented to REC members as a “dashboard of indicators,” providing a full range of relevant information. In disciplines that use citation analysis, the dashboard displays a number of citation metrics, providing a summary of the citations attached to the journal articles submitted for a given Unit of Evaluation (UoE).
ERA uses the following five-point rating scale, based on the suite of indicators that constitute the UoE profile. To allow for international comparison, this scale is broadly consistent with the approach taken in research evaluation processes in other countries.
5 - Outstanding performance well above world standard
4 - Performance above world standard
3 - Average performance at world standard
2 - Performance below world standard
1 - Performance well below world standard
For disciplines that use citation analysis, the citation metrics represent the indexed journal articles submitted for a given UoE. These metrics include:
- Relative citation impact (RCI) of the articles
- Number and proportion of articles among the world’s most highly cited for that discipline (top 1%, 5%, 10%, 25% and 50%)
- Number and proportion of articles in seven citation classes: no citations; RCI of 0.01 to 0.79; 0.80 to 1.19; 1.20 to 1.99; 2.00 to 3.99; 4.00 to 7.99; and RCI above 8.00
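The binning above can be sketched in code. This is an illustrative sketch only, not the ARC's implementation: it assumes RCI is an article's citation count divided by a world benchmark (average citations for the discipline and publication year), and uses the class boundaries listed above.

```python
# Illustrative sketch (assumed definitions, not the ARC's methodology):
# compute a relative citation impact (RCI) and assign an article to one
# of the seven ERA citation classes.

def rci(citations: int, world_benchmark: float) -> float:
    """RCI assumed here as citations relative to the world average
    for the article's discipline and publication year."""
    return citations / world_benchmark

# Lower bound (inclusive) of each nonzero citation class, from the text.
CLASSES = [
    (0.01, "0.01 to 0.79 RCI"),
    (0.80, "0.80 to 1.19 RCI"),
    (1.20, "1.20 to 1.99 RCI"),
    (2.00, "2.00 to 3.99 RCI"),
    (4.00, "4.00 to 7.99 RCI"),
    (8.00, "RCI above 8.00"),
]

def citation_class(citations: int, world_benchmark: float) -> str:
    """Map an article to one of the seven citation classes."""
    if citations == 0:
        return "no citations"
    value = rci(citations, world_benchmark)
    # Walk the thresholds and keep the highest class whose lower
    # bound the RCI value meets; default to the lowest nonzero class.
    label = CLASSES[0][1]
    for lower, name in CLASSES:
        if value >= lower:
            label = name
    return label

# Example: 12 citations against a world benchmark of 5 gives RCI 2.4,
# which falls in the 2.00 to 3.99 class.
print(citation_class(12, 5))  # → "2.00 to 3.99 RCI"
```

In practice the dashboard would aggregate these per-article classes into counts and proportions for each Unit of Evaluation.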
Following ERA 2010 and ERA 2012, the ARC conducted internal analyses to investigate the relationship between the citation metrics and the ratings assigned to UoEs in the relevant citation disciplines. While higher citation metrics are clearly associated with higher ERA ratings, the relationship is not strong enough for metrics to substitute for expert judgement; peer review by evaluation committees remains an essential component of the ERA process.
Committee members are selected for disciplinary expertise and standing in their fields, and thereby provide the essential element of peer review on top of citation metrics. As with all metrics, subtleties in interpretation, and disciplinary behaviors and norms must be taken into account when viewing the suite of indicators. The ARC does not assign any weightings to the different metrics, which allows committee members the freedom to use their own expert judgement.
A measure of research quality
The ERA exercise provides a measure of research quality. It does not attempt to quantify other topical parameters such as research effectiveness, research benefits, or the academic community’s degree of engagement outside universities. It is the ARC’s view that quality is the single best indicator for driving overall enhancement of a research system.
The success of the ERA exercise should, however, not lead us to be complacent. Although Australia has a very high quality (and improving) university research system, it has been less successful in translating many research outcomes to other sectors. One reason for this is that our industrial/commercial systems are beyond the immediate control of universities. The challenge for Australia now is to find more direct indicators of research activity that can be used to provide positive feedback to institutions, thereby generating a greater degree of meaningful and effective translational activity.
The ARC is looking forward to continuing to work with the sector to develop evaluative instruments that provide universities with rich data and discourse that enable institutional advancement.