[GOAL] Re: Open Access Metrics: Use REF2014 to Validate Metrics for REF2020
Stevan Harnad
amsciforum at gmail.com
Wed Dec 17 15:51:39 GMT 2014
On Dec 17, 2014, at 9:54 AM, [deleted] wrote on another list:
> *Those who advocate metrics have never, at least to my satisfaction,
> answered the argument that accuracy in the past does not mean effectiveness
> in the future, once the game has changed.*
I recommend Bradley on metaphysics and Hume on induction
<http://plato.stanford.edu/entries/induction-problem/>:
"*The man who is ready to prove that metaphysical knowledge is wholly
impossible… is a brother metaphysician with a rival theory
<https://www.goodreads.com/quotes/1369088-the-man-who-is-ready-to-prove-that-metaphysical-knowledge>*”
Bradley, F. H. (1893) *Appearance and Reality*
One could have asked the same question about apples continuing to fall down
in future, rather than up (as Hume notes).
Yes, single metrics can be abused. But not only can abuses be named and
shamed when detected; it also becomes harder to abuse metrics when they are
part of a multiple, inter-correlated vector, with disciplinary profiles of
their normal interactions: someone dispatching a robot to download his
papers would quickly be caught out when the usual correlation between
downloads and later citations failed to appear. Add more variables and it
gets even harder.
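The detection idea above can be sketched in a few lines. This is a minimal,
hypothetical illustration, not any REF procedure: `pearson` and
`flag_anomalies` are invented names, and the 0.3 cut-off is an arbitrary
assumption standing in for a real disciplinary profile.

```python
# Hypothetical sketch: flag researchers whose per-paper download counts fail
# to show the usual correlation with later citations (e.g. robot-inflated
# downloads). The 0.3 threshold is an invented placeholder.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_anomalies(records, threshold=0.3):
    """records: one list per researcher of (downloads, citations) pairs,
    one pair per paper. Returns indices of researchers whose own papers'
    download-citation correlation falls below the threshold."""
    return [i for i, papers in enumerate(records)
            if pearson([d for d, _ in papers],
                       [c for _, c in papers]) < threshold]
```

A researcher whose downloads track later citations passes; one with
uniformly huge downloads but few citations is flagged.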
> *Even if one were able to define a set of metrics that perfectly matched
> REF2014, the announcement that these metrics would be used in REF2020
> would immediately invalidate their use.*
In a weighted vector of multiple metrics like the sample I had listed, it is
of no use to a researcher to be told in advance that for REF2020 the metric
equation will be the following, with the following weights for their
particular discipline:
w1(pubcount) + w2(JIF) + w3(cites) + w4(art-age) + w5(art-growth) + w6(hits)
+ w7(cite-peak-latency) + w8(hit-peak-latency) + w9(cite-decay) + w10(hit-decay)
+ w11(hub-score) + w12(authority-score) + w13(h-index) + w14(prior-funding)
+ w15(bookcites) + w16(student-counts) + w17(co-cites) + w18(co-hits)
+ w19(co-authors) + w20(endogamy) + w21(exogamy) + w22(co-text) + w23(tweets)
+ w24(tags) + w25(comments) + w26(acad-likes), etc. etc.
The potential list could be much longer, and the weights can be positive or
negative, and varying by discipline.
"*The man who is ready to prove that metric knowledge is wholly impossible…
is a brother metrician with rival metrics…*"
<https://www.goodreads.com/quotes/1369088-the-man-who-is-ready-to-prove-that-metaphysical-knowledge>
On Wed, Dec 17, 2014 at 2:26 PM, Stevan Harnad <harnad at ecs.soton.ac.uk>
wrote:
>
> Steven Hill of HEFCE has posted “an overview of the work HEFCE are
> currently commissioning which they are hoping will build a robust evidence
> base for research assessment” in LSE Impact Blog 12(17) 2014 entitled Time
> for REFlection: HEFCE look ahead to provide rounded evaluation of the REF
> <http://blogs.lse.ac.uk/impactofsocialsciences/2014/12/17/time-for-reflection/>
>
> Let me add a suggestion, updated for REF2014, that I have made before
> (unheeded):
>
> Scientometric predictors of research performance need to be validated by
> showing that they have a high correlation with the external criterion they
> are trying to predict. The UK Research Excellence Framework (REF) --
> together with the growing movement toward making the full-texts of research
> articles freely available on the web -- offer a unique opportunity to test
> and validate a wealth of old and new scientometric predictors, through
> multiple regression analysis: Publications, journal impact factors,
> citations, co-citations, citation chronometrics (age, growth, latency to
> peak, decay rate), hub/authority scores, h-index, prior funding, student
> counts, co-authorship scores, endogamy/exogamy, textual proximity,
> download/co-downloads and their chronometrics, tweets, tags, etc. can all
> be tested and validated jointly, discipline by discipline, against their
> REF panel rankings in REF2014. The weights of each predictor can be
> calibrated to maximize the joint correlation with the rankings. Open Access
> Scientometrics will provide powerful new means of navigating, evaluating,
> predicting and analyzing the growing Open Access database, as well as
> powerful incentives for making it grow faster.
>
> Harnad, S. (2009) Open Access Scientometrics and the UK Research
> Assessment Exercise <http://eprints.ecs.soton.ac.uk/17142/>.
> *Scientometrics* 79(1). Also in *Proceedings of 11th Annual Meeting of
> the International Society for Scientometrics and Informetrics* 11(1),
> pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007)
>
> See also:
> The Only Substitute for Metrics is Better Metrics
> <http://openaccess.eprints.org/index.php?/archives/1136-The-Only-Substitute-for-Metrics-is-Better-Metrics.html>
> (2014)
> and
> On Metrics and Metaphysics
> <http://openaccess.eprints.org/index.php?/archives/479-On-Metrics-and-Metaphysics.html>
> (2008)
>
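The calibration step described in the quoted proposal — fitting the weights
so that the weighted combination of metrics best matches the REF panel
rankings — is ordinary least squares. A minimal sketch with just two
predictors follows; the function name and all data are invented, and a real
calibration would use the full metric vector and standard regression
software rather than hand-solved normal equations.

```python
# Minimal sketch of weight calibration: given each department's panel
# ranking y and two candidate metrics x1, x2 (the real vector would have
# many more), fit weights by least squares via the 2x2 normal equations.
def fit_two_weights(x1, x2, y):
    """Solve least squares y ~ w1*x1 + w2*x2 (no intercept)."""
    a11 = sum(a * a for a in x1)
    a12 = sum(a * b for a, b in zip(x1, x2))
    a22 = sum(b * b for b in x2)
    b1 = sum(a * t for a, t in zip(x1, y))
    b2 = sum(b * t for b, t in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det,
            (b2 * a11 - b1 * a12) / det)
```

If the panel rankings were in fact an exact weighted blend of the two
metrics, the fit recovers those weights; with real, noisy rankings it yields
the best-fitting weights, whose joint correlation with the rankings is what
validates (or invalidates) the metric battery discipline by discipline.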