The REF post mortem -- and hype



I apologise to those reading this who don't give a toss about the Research Excellence Framework (though you can click on the link to find out more). This is a process which attempts to judge the research coming out of UK universities and to distribute money on that basis. It involves each member of "research active" staff submitting four "outputs" (i.e. books and articles) to be rated by a panel, plus an evaluation of the whole research "environment" and "impact". It is an extremely costly process of peer review, and many of my friends and colleagues give up a lot of time to ensure that it is as fair as it can be. To be honest, I am grateful to them.


The results of this were just announced. I am not going to pretend that I am not pleased that my own faculty did very well. On most indicators, we at Cambridge were "top" of the Classics list. And, after you have put a whole load of work in, that is a relief -- at least until you reflect on how much research you could have done if you were not preparing the submission to the REF (for our chair I would estimate two articles' worth, for the rest of us one).


Maybe this (pseudo) transparency is all inevitable (though more vocal opposition to the "pseudo" might have been nice). What I can't stand is the bloody crowing once the results have come out.


There are, of course, all kinds of different ways of cutting the figures. Cambridge Classics comes out best if you count four-star research outputs (the very top ranking); St Andrews does better if you count four- and three-star outputs together. That's a tribute to the subject as a whole, I think (and to St Andrews).


But when you see the definition of four- and three-star outputs, your heart sinks. Four-star outputs are "world leading"; three-star outputs are "internationally excellent". I am sure that the panels debated that division long and hard. But I challenge anyone to give me a clear idea of the difference.



Meanwhile, university PR departments have gone into overdrive, cutting the figures so that their own institution comes out well and putting the results on their websites. St Andrews Classics is admirably realistic and modest in saying that they came "second" (even though they could have made a case for "top" -- RIGHT ON St Andrews). Other universities have not been so sensible. One London uni, for example, insists that its Classics department is third in the country according to the "power metric". This is actually a really excellent place to study Classics, but the "power metric" involves multiplying the average score by the number of people in the department -- so any large institution will do well just on the multiplier. It is hardly meaningful, except to insist that the institution is big. If you take a look at this uni's website more generally, you will find the same trick played time and again. And where that won't work, you find headline claims like "100% of impact rated in international categories" in a subject that came in towards the bottom of the teens in the national rankings.
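(For the numerically minded, here is a minimal sketch of why the "power metric" rewards sheer size. The departments and figures below are invented for illustration, not the real REF numbers: the metric multiplies a department's average quality score by its staff headcount, so a big department with a middling average beats a small one with a better average.)

# Illustrative only: made-up departments, not real REF data.
# "Power" = average quality score (GPA, on a 0-4 scale) x staff submitted.
departments = {
    "Small but excellent": {"gpa": 3.5, "staff": 10},
    "Large but middling": {"gpa": 2.8, "staff": 50},
}

for name, d in departments.items():
    power = d["gpa"] * d["staff"]
    print(f"{name}: GPA {d['gpa']}, staff {d['staff']}, power {power:.0f}")

# Small but excellent: GPA 3.5, staff 10, power 35
# Large but middling: GPA 2.8, staff 50, power 140  <- "wins" on headcount alone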


At this point, I look as if I am taking these figures seriously. I hope I am not. I just wish that we would all stop bragging when we seem to have done well, that we would rein our press offices in, that we would remember that UK research is better served by collaboration than by competition -- and that we would stop trying to turn everything into a PR headline. (On that score, well done St Andrews, and -- dare I say it -- Cambridge.)


 

Published on December 21, 2014 14:08