<div dir="ltr"><div class="gmail_quote"><div style="word-wrap:break-word"><div>Apologies for the volume: I am cross-posting to GOAL, but the REF/OA connection is potentially very important for OA, and today happens to be the day that the <a href="http://www.timeshighereducation.co.uk/news/ref-2014-results-table-of-excellence/2017590.article">REF</a> facts hit the fan...<br><blockquote type="cite"><div><font face="Arial">On Dec 18, 2014, at 12:58 PM, David Wojick <<a href="mailto:dwojick@CRAIGELLACHIE.US" target="_blank">dwojick@CRAIGELLACHIE.US</a>> wrote:</font></div><div><font face="Arial"><br></font><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="Arial">Regarding the organization, I thought you were trying to match
the REF rankings. </font></blockquote></div></blockquote><div><font face="Arial"><br></font></div><font face="Arial">Yes, I am.</font></div><div><font face="Arial"><br></font><blockquote type="cite"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="Arial">Those were produced by a single organization, using a
specific decision process, not by all the researchers and universities
whose work was submitted.</font></blockquote></blockquote><div><font face="Arial"><br></font></div><font face="Arial">The decision process consisted of a panel of peers for each discipline who had to assess and rank the 4 papers per researcher from each university for research quality. (There were other considerations too, but I think everyone agrees that the bulk of the ranking was based on the 4 outputs per researcher per discipline per university.)</font></div><div><font face="Arial"><br></font><blockquote type="cite"><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="Arial"> Also, the credibility I am referring to is that
of the analysis, not of the metrics you choose to use. You seem to be
giving this analysis more credence than it probably deserves. As I said,
multiple regression analysis is a crude approach to decision
modeling.</font></blockquote></div></blockquote><div><font face="Arial"><br></font></div><font face="Arial">The analysis consists of measuring the correlation of a battery of metrics with the REF rankings.</font></div><div><font face="Arial"><br></font></div><div><font face="Arial">The analysis has not been done. I merely proposed the method. So I am not sure what it is that is being given “more credence than it probably deserves.” (It’s certainly not my proposal to do this analysis that is getting “more credence than it probably deserves”: As I said, so far my proposal has been unheeded!)</font></div><div><font face="Arial"><br></font></div><div><font face="Arial">Let’s reserve judgment on how crude an approach it will be until it is tried, and we see how well it can predict the REF rankings. </font><span style="font-family:Arial">After that we can talk about refining it further.</span></div><div><font face="Arial"><br></font></div><div><font face="Arial">And it is not “decision modelling” that is being proposed, but the testing of the power of a set of metrics to predict the REF rankings.</font></div><div><font face="Arial"><br></font></div><div><font face="Arial">But maybe the “analysis” whose credibility you are questioning is the REF peer ranking itself? <i>But that’s not what’s at issue here! What is being proposed is to validate a metric battery so that if it proves to predict the peer rankings</i> (such as they are, warts and all) <i>sufficiently well, then it can <u>replace</u> (or at least supplement) them.</i></font></div><div><font face="Arial"><br></font></div><div><font face="Arial">But those candidate metrics, until they are validated against some criterion, cannot have any credence at all: they are simply untested, unvalidated metrics. (This has not hitherto discouraged people from using them blindly as if they had been validated [e.g. the JIF], but we can’t do anything about that here! 
REF2014 provides an excellent opportunity to test and validate multiple metrics, at long last, weighing their independent predictive power.)</font></div><div><font face="Arial"><br></font></div><div><font face="Arial">You don’t think the REF2014 peer rankings for all disciplines in all institutions in all of the UK are a sufficiently good criterion against which to validate the metrics? Then please propose an alternative criterion. But not a hypothetical alternative that is not even as available as the REF rankings and the metric and OA data we have so far: propose an alternative that is as readily doable as what we have in hand with REF2014.</font></div><div><font face="Arial"><br></font></div><div><font face="Arial">(This is where you can help out by backing the most effective OA policy for the US federal agencies, based on the evidence, so that those policies can then generate the OA that will maximize the predictive power of the metrics that depend on OA.)</font></div><div><font face="Arial"><br></font><blockquote type="cite"><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="Arial">I do not see what any of this has to do with OA policy, especially US
policy, just because you want to do some computations based on the REF
results. And it sounds like you cannot do them because the metrical data
is not available. It is a possibly interesting experiment, but that is
all as far as I can see, not a reason to make or change
policies.</font></blockquote></div></blockquote><div><font face="Arial"><br></font></div><div><font face="Arial">I stated exactly what it has to do with OA policy: Many of these potential metrics are unavailable or only partially available because the research publications are not OA. This means that the proposed analysis will underestimate the power of metrics because the underlying data is only partly available.</font></div><div><font face="Arial"><br></font></div><div><font face="Arial">Effective OA policies will generate that missing OA, maximizing the predictive power of the metrics. </font></div><div><font face="Arial"><br></font></div><div><font face="Arial">(By the way, the analysis we have used to test and validate the metrics that predict the effectiveness of OA policies is very similar to the analysis I have proposed</font><span style="font-family:Arial"> to test and validate the metrics that predict the REF rankings.)</span></div><div><font face="Arial"><br></font></div></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><font face="Arial"><span style="color:rgb(51,51,51)">Harnad, S. (2009) <a href="http://eprints.ecs.soton.ac.uk/17142/" style="color:rgb(0,51,102)" target="_blank">Open Access Scientometrics and the UK Research Assessment Exercise</a>. <em>Scientometrics</em> 79 (1). Also in <em>Proceedings of 11th Annual Meeting of the International Society for Scientometrics and Informetrics</em> 11(1), pp. 27-33, Madrid, Spain. Torres-Salinas, D. and Moed, H. F., Eds. (2007) </span></font></div></div><div><div><span style="color:rgb(51,51,51)"><font face="Arial"><br></font></span></div></div><div><div><font face="Arial">Gargouri, Y, Lariviere, V, Gingras, Y, Brody, T, Carr, L and Harnad, S (2012) Testing the Finch Hypothesis on Green OA Mandate Ineffectiveness. 
Open Access Week 2012 <a href="http://eprints.soton.ac.uk/344687/" target="_blank">http://eprints.soton.ac.uk/344687/</a></font></div></div><div><div><p><span style="font-family:Arial">Vincent-Lamarre, Philippe, Boivin, Jade, Gargouri, Yassine, Larivière, Vincent and Harnad, Stevan (2014) </span><a href="http://eprints.soton.ac.uk/370203/" style="font-family:Arial" target="_blank">Estimating Open Access Mandate Effectiveness: I. The MELIBEA Score.</a><span style="font-family:Arial"> </span><span style="font-family:Arial">(under review)</span><span style="font-family:Arial"> </span><a href="http://eprints.soton.ac.uk/370203/" style="font-family:Arial" target="_blank">http://eprints.soton.ac.uk/370203/</a></p></div></div></blockquote><div><div><font face="Arial"><br></font></div></div><br></div></div></div>
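P.S. For concreteness, here is a minimal sketch (in Python, on purely synthetic data) of the kind of validation exercise proposed above: regress a battery of candidate metrics on the peer rankings and measure how well the battery jointly predicts them. The metric names, sample size, and weights below are all illustrative assumptions, not REF data or the actual analysis.

```python
# Hedged sketch (not the actual REF analysis): validating a battery of
# candidate metrics against peer rankings via multiple regression.
# All data are synthetic; the metric names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_departments = 200

# Hypothetical metric battery (columns): citations, downloads, h-index, JIF.
metrics = rng.normal(size=(n_departments, 4))

# Synthetic "peer ranking" criterion: partly driven by the metrics plus
# noise, standing in for the REF panel scores the metrics are tested against.
true_weights = np.array([0.6, 0.3, 0.2, 0.1])
peer_ranking = metrics @ true_weights + rng.normal(scale=0.5, size=n_departments)

# Ordinary least-squares fit of the ranking on the metric battery.
X = np.column_stack([np.ones(n_departments), metrics])  # add intercept column
coef, *_ = np.linalg.lstsq(X, peer_ranking, rcond=None)

# Multiple correlation R: how well the battery jointly predicts the ranking.
predicted = X @ coef
r = np.corrcoef(predicted, peer_ranking)[0, 1]
print(f"multiple correlation R = {r:.2f}")

# Zero-order correlations: each metric's individual predictive power.
for name, col in zip(["citations", "downloads", "h-index", "JIF"], metrics.T):
    print(f"{name}: r = {np.corrcoef(col, peer_ranking)[0, 1]:+.2f}")
```

If the battery's multiple correlation with the real rankings proved high enough, the metrics could supplement (or replace) the panel exercise; refinements such as cross-validation or regularized regression could follow, as the message suggests, once the crude version has been tried.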