[GOAL] Fwd: [SIGMETRICS] Open Access Metrics: Use REF2014 to Validate Metrics for REF2020
Stevan Harnad
amsciforum at gmail.com
Thu Dec 18 19:24:36 GMT 2014
Apologies for the volume; I am cross-posting to GOAL, but the REF/OA connection
is potentially very important for OA, and today happens to be the day that
the REF
<http://www.timeshighereducation.co.uk/news/ref-2014-results-table-of-excellence/2017590.article>
facts hit the fan...
On Dec 18, 2014, at 12:58 PM, David Wojick <dwojick at CRAIGELLACHIE.US> wrote:
> Regarding the organization, I thought you were trying to match the REF
> rankings.
Yes, I am.
> Those were produced by a single organization, using a specific decision
> process, not by all the researchers and universities whose work was
> submitted.
The decision process consisted of a panel of peers for each discipline who
had to assess and rank the 4 papers per researcher from each university for
research quality. (There were other considerations too, but I think
everyone agrees that the bulk of the ranking was based on the 4 outputs per
researcher per discipline per university.)
> Also, the credibility I am referring to is that of the analysis, not of the
> metrics you choose to use. You seem to be giving this analysis more
> credence than it probably deserves. As I said, multiple regression analysis
> is a crude approach to decision modeling.
The analysis consists of measuring the correlation of a battery of metrics
with the REF rankings.
The analysis has not been done. I merely proposed the method. So I am not
sure what it is that is being given “more credence than it probably
deserves.” (It’s certainly not my proposal to do this analysis that is
getting “more credence than it probably deserves”: as I said, so far my
proposal has gone unheeded!)
Let’s reserve judgment on how crude an approach it will be until it is
tried, and we see how well it can predict the REF rankings. After that we
can talk about refining it further.
And it is not “decision modelling” that is being proposed, but the testing
of the power of a set of metrics to predict the REF rankings.
But maybe the “analysis” whose credibility you are questioning is the REF
peer ranking itself? *But that’s not what’s at issue here! What is being
proposed is to validate a metric battery so that if it proves to predict
the peer rankings* (such as they are, warts and all) *sufficiently well,
then it can replace (or at least supplement) them.*
But those candidate metrics, until they are validated against some
criterion, cannot have any credence at all: they are simply untested,
unvalidated metrics. (This has not hitherto discouraged people from using
them blindly as if they had been validated [e.g. the JIF], but we can’t do
anything about that here! REF2014 provides an excellent opportunity to test
and validate multiple metrics, at long last, weighing their independent
predictive power.)
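The proposed validation can be sketched as a regression problem: regress the peer rankings on a battery of candidate metrics and see how much of the variance the battery jointly explains. The following is a minimal illustration on purely synthetic data; the metric names, coefficients, and sample size are invented for the example and are not REF data.

```python
# Sketch of validating a metric battery against peer rankings.
# All data here are synthetic; metric names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical ranked units (discipline x university cells)

# Candidate metrics for each unit (standardised, synthetic).
citations = rng.normal(size=n)
downloads = rng.normal(size=n)
h_index = rng.normal(size=n)
X = np.column_stack([citations, downloads, h_index])

# Synthetic "peer ranking": partly driven by the metrics, partly noise.
peer_rank = 0.6 * citations + 0.3 * downloads + rng.normal(scale=0.5, size=n)

# Fit by ordinary least squares and report variance explained (R^2).
X1 = np.column_stack([np.ones(n), X])  # prepend an intercept column
beta, *_ = np.linalg.lstsq(X1, peer_rank, rcond=None)
pred = X1 @ beta
ss_res = np.sum((peer_rank - pred) ** 2)
ss_tot = np.sum((peer_rank - peer_rank.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 of metric battery vs. peer ranking: {r2:.2f}")
```

The fitted coefficients also give a first look at each metric's *independent* contribution once the others are controlled for, which is the "weighing their independent predictive power" step described above.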
You don’t think the REF2014 peer rankings for all disciplines in all
institutions in all of the UK are a sufficiently good criterion against
which to validate the metrics? Then please propose an alternative
criterion: not a hypothetical alternative even less available than the REF
rankings and the metric and OA data we have so far, but one as readily
doable as what we already have in hand with REF2014.
(This is where you can help out by backing the most effective OA policy for
the US federal agencies, based on the evidence, so that those policies can
then generate the OA that will maximize the predictive power of the metrics
that depend on OA.)
> I do not see what any of this has to do with OA policy, especially US
> policy, just because you want to do some computations based on the REF
> results. And it sounds like you cannot do them because the metrical data is
> not available. It is a possibly interesting experiment, but that is all as
> far as I can see, not a reason to make or change policies.
I stated exactly what it has to do with OA policy: Many of these potential
metrics are unavailable or only partially available because the research
publications are not OA. This means that the proposed analysis will
underestimate the power of metrics because the underlying data is only
partly available.
Effective OA policies will generate that missing OA, maximizing the
predictive power of the metrics.
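The attenuation argument can be made concrete in the same synthetic setting: if a metric is only measurable for the OA fraction of outputs, the fitted battery explains less of the variance in the peer rankings than it would with full coverage. This is a hypothetical illustration (the 40% OA rate and zero-imputation for unobserved values are assumptions of the sketch, not empirical figures).

```python
# Illustration: incomplete OA coverage attenuates a metric's predictive power.
# Synthetic data throughout; numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 500
downloads = rng.normal(size=n)  # a metric measurable only for OA outputs
peer_rank = 0.8 * downloads + rng.normal(scale=0.5, size=n)

def r_squared(x, y):
    """Variance explained by a one-predictor OLS fit (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

full = r_squared(downloads, peer_rank)

# Suppose only 40% of outputs are OA; elsewhere the metric is unobserved
# and crudely imputed as 0, which attenuates the estimated relationship.
oa = rng.random(n) < 0.4
partial = r_squared(np.where(oa, downloads, 0.0), peer_rank)

print(f"R^2 with full coverage: {full:.2f}; with 40% OA coverage: {partial:.2f}")
```

The gap between the two R^2 values is the sense in which the analysis "underestimates the power of metrics" when the underlying publications are not OA.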
(By the way, the analysis we have used to test and validate the metrics
that predict the effectiveness of OA policies is very similar to the
analysis I have proposed to test and validate the metrics that predict the
REF rankings.)
Harnad, S. (2009) Open Access Scientometrics and the UK Research Assessment
Exercise <http://eprints.ecs.soton.ac.uk/17142/>. *Scientometrics* 79 (1).
Also in Torres-Salinas, D. and Moed, H. F. (Eds.), *Proceedings of the 11th
Annual Meeting of the International Society for Scientometrics and
Informetrics* 11(1), pp. 27-33, Madrid, Spain (2007).
Gargouri, Y., Larivière, V., Gingras, Y., Brody, T., Carr, L. and Harnad, S.
(2012) Testing the Finch Hypothesis on Green OA Mandate Ineffectiveness.
Open Access Week 2012. http://eprints.soton.ac.uk/344687/
Vincent-Lamarre, P., Boivin, J., Gargouri, Y., Larivière, V. and Harnad, S.
(2014) Estimating Open Access Mandate Effectiveness: I. The MELIBEA Score
(under review). http://eprints.soton.ac.uk/370203/