Academic indicators (h-index, impact factor, modified h-indexes, etc.) have drawn a long string of criticism from academics and non-academics alike (see here, here or here).
Instead of debating (endlessly) how "not representative" and flawed these numbers are (or rehashing generic rants…), I would be interested to know whether Academia.SE community members have objective facts and reports on how academic workplaces currently use these indicators. For instance, in the run-up to REF2014 (UK), academics at my institution are urged to submit, as their contributions, the papers accepted in the journals with the highest impact factors.
In summary: can you trade your h-index for a better-paying job?
Answer
On the European research market, various opinions have been offered on the use of the h-index by ERC grant evaluation committees, and on the broader relationship between h-index and ERC funding success. People who offer such opinions fall into three categories:
- affiliated with the ERC
- consultants whose business is to offer advice to (potential) candidates
- individuals involved in another way, whose advice is most probably only anecdotal
So, what do the first two categories have to say? The official word from the guidelines is, well… absent. But it is politically correct to assert that the h-index is not a good indicator of scientific quality and, as such, is not used. This gives quotes like:
Quality in science is not proved by accumulating quantitative points. The role of commercial impact factor and h-index is limited. Overemphasizing of publish or perish policy leads to a gradual perishing of all.
On the other hand, practical advice might be a bit more nuanced:
There will also be a new h-index study of the successful 2011 awardees in the PE and LS domains. The h-index is regarded as a background indicator rather than a determining factor, and the study of the 2010 Advanced Grant awardees showed that each panel made awards across a considerable range and that there were significant differences across the different ERC panels. There was a big variation between different disciplines within the main domains.
and this:
Every applicant had to choose his 10 best publications published in the recent decade and add how many times each of these papers was cited in the literature. The total number L of these citations describes well how the scientific community perceives their recent achievements. These numbers were provided by the ERC in the dossier of every applicant. However, during the evaluation process the panel did not put much emphasis on any bibliometric data. It was the opinions of the experts which mattered, not the bare numbers. Only after completing the evaluation process did I realise a correlation between these data and the final outcome.
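To make the two figures discussed in these quotes concrete, here is a minimal sketch of how an h-index and the top-ten citation total (the L above) can be computed. The citation counts and function names are hypothetical, chosen for illustration (this is not ERC code), and "best" publications are approximated here by "most-cited":

```python
# Illustrative only: toy citation counts, not real ERC data.
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this paper still supports a larger h
        else:
            break      # remaining papers are cited even less
    return h

def top_ten_citation_total(citations):
    """Sum of citations over the 10 most-cited papers -- a stand-in for the
    total L described above (real applicants picked their 10 'best')."""
    return sum(sorted(citations, reverse=True)[:10])

papers = [120, 85, 40, 22, 15, 9, 7, 4, 3, 2, 1, 0]  # hypothetical record
print(h_index(papers))                 # -> 7
print(top_ten_citation_total(papers))  # -> 307
```

Note how the two numbers emphasize different things: the h-index rewards a broad base of reasonably cited papers, while the total L can be dominated by one or two highly cited ones.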
To give another perspective: in France, a new evaluation system for higher education and research was put in place five years ago (the newly created agency performing the evaluation is called AERES). AERES evaluates each research group every 4 to 5 years and gives it an overall rating: A+, A, B or C. This had at least two very practical consequences that I know of:
- For the yearly financial negotiations between each university and the Ministry for Research, the ministry started to require a spreadsheet with the number of university teams rated A+ and the number rated A (B and C didn't seem to count). Financial support then depended on those numbers, at least as a starting point for the negotiations.
- It became customary to include this grade in French grant applications, because an A+ rating was considered a serious advantage. Nowhere was this written in the “rules”, however…
So, all in all, are bibliometric and, more broadly, academic indicators really adopted by institutions? Hell yeah! They make decision makers' jobs easier: quantifying research quality simplifies decisions. Smart decision makers do realize, however, that a single indicator does not make for good decisions.