Evaluate the Evaluations!
Bad practices seem to spread very rapidly and go to near fixation, making it very hard to replace them subsequently with good practices. Hence they linger long after their ‘badness’ has become widely recognized. Evaluating scientists and scientific papers by the Impact Factors (IF) of the journals in which they are published is one of the most pernicious of such lingering bad practices. In retrospect, it seems shocking that the use of IF has become so widely and uncritically accepted. The odds are so heavily stacked against the practice that I would have guessed it would never get off the ground. As has been pointed out in many forums, IF measures the impact of the journal and not of the paper, citation practices vary from discipline to discipline, and ‘IF pressure’ is sure to lead to bad publishing practices.

The bad practices associated with evaluation procedures go well beyond the use of Impact Factors. Since evaluations are best done by peer groups, the business of eliminating bad practices and ushering in good practices is best attempted as a self-organized process by academics themselves, with as wide a participation as possible. Science academies have a critical role to play, functioning as conscience keepers to usher in good practices and as gatekeepers to keep out bad practices. I believe that science academies around the world are not doing as good a job in this regard as they potentially can. Recently, three prominent academies, the Académie des sciences of France, the Leopoldina of Germany and the Royal Society, London, issued an excellent joint statement about what they consider good and bad practices.
Although the points they make have been repeated time and again, a joint statement from three of the world’s most prominent science academies carries a certain authority and credibility. Besides, some points in this statement are especially worthy of note. Apart from unambiguously pointing the finger at the excessive use of bibliometric data, the statement suggests reducing the number and frequency of evaluations in the first place and evaluating, training and nurturing the best evaluators; it also cautions that the new, so-called ‘Altmetrics’ may not be much better after all. I believe that this statement should be widely read by all scientists, and hence I am reproducing it below. Nevertheless, it is a statement by three ‘foreign’ academies. I would therefore urge our own three science academies to study the matter seriously in the Indian context, issue our own statement, and bring the pressure of their authority to bear on the conduct of evaluations in India. Clearly, we need an urgent evaluation of the evaluation process itself.