[GOAL] Fwd: DORA: SAN FRANCISCO DECLARATION ON RESEARCH ASSESSMENT
Stevan Harnad
amsciforum at gmail.com
Thu May 23 19:22:59 BST 2013
On 2013-05-23, at 12:44 PM, http://am.ascb.org/dora/ wrote:
*"clearly highlight, especially for early-stage investigators, that the
scientific content of a paper is much more important than publication
metrics or the identity of the journal in which it was published."*
Of course it is absurd to use the journal impact factor as a proxy for (1)
journal, (2) article or (3) author quality -- although it has to be
admitted that in most fields there is a positive correlation (though a
correlation that is increasingly weak, moving from (1) to (3)).
Article and author citation counts are better, though still not enough.
There are ways in which OA can help research assessment (generating and
harvesting rich new metrics of importance, influence and impact), and, of
course, maximising access and impact.
But research first has to be made OA, before OA can generate the rich new
metrics.
I did not sign DORA, however, because of this piece of patent absurdity:
*"...clearly highlight, especially for early-stage investigators, that the
scientific content of a paper is much more important than publication
metrics or the identity of the journal in which it was published."*
Of course an article's quality is more important than metrics or journal
name. And what's in a student's head is more important than the marks he
gets.
But the marks (metrics) are the way to assess what is in a student's head,
when you do not have X-rays at your disposal!
And of course the journal name matters, because the journal's track-record
for peer review quality standards matters.
Ideally, every single paper written should be read and evaluated by
qualified experts (maybe even by God) every time its author's contributions
are assessed.
But in practice that simply cannot be done. And that evaluation is precisely
what peer review is supposed to have performed already.
*And the journal name has a track-record for peer review standards behind
it.*
I've become allergic to recommendations of the kind I quoted above, because
they have so often been used as special pleading for new Gold OA journals,
implying that they have an unfair handicap because they don't have an ISI
impact factor, or don't have a high enough one -- whereas the truth is that
*they simply have no track record for peer review quality standards*, hence
there is no way to know what their quality standards are.
(If every paper could in fact be read by a qualified expert every time it
enters into an assessment, that would be a solution, but there's no way on
earth that can be practically implemented, so we have to rely on metrics
and track records.)
In the Gold OA context, some people have also implied that authors should
choose journals on the basis of their economic model (Gold OA) instead of
their quality standards -- and that tenure committees should somehow give
special compensatory weight to journals without track records, simply
because they are OA. I think that is nonsense, and also does a disservice
to OA (along lines similar to the way the predatory Gold OA journals do OA
a disservice).
The only advice to give to early-stage (or any-stage) investigators is to *try
to publish in the journal with the highest quality standards their work can
meet* -- and their performance assessment committees should weight
their work accordingly (since they cannot assign a qualified peer to re-do the
peer review every time it is assessed!).
Stevan Harnad