Friday, February 5, 2010

Moss's view of the "quality of science"

My View
Stephen Moss
Rothamsted Research
Harpenden, Hertfordshire AL5 2JQ, United Kingdom;
stephen.moss@bbsrc.ac.uk

“Quality of science” is a term increasingly cited as a key measure of the excellence of research projects. The question arises: should the “quality of science” be assessed in a different way for “pure” and “applied” science, such as weed research? “Why should it?” you might well ask. Surely, good science is good science, regardless of whether you label it “pure” or “applied”? So, where do you stand on this issue? What are the key criteria, in your opinion, for assessing scientific quality?
Consider the following scenario. To avoid appearing to single out a specific aspect of weed research for criticism (which is not my intention), I will use a medical research scenario. A government department is funding a research project on antibiotic resistance. The stated objective of the research is to produce recommendations for the best use of antibiotics in hospitals to minimize problems with resistance. Compare the approaches taken by two research groups, led by Drs. Laurel and Hardy, respectively.
Dr. Laurel’s research group took samples of resistant bacteria from a local hospital, studied them very intensively, and characterized them at the molecular level using novel and sophisticated techniques. Many of their approaches were highly original, and their research was featured in a series of publications in high-impact journals. The results were also presented to other research scientists at prestigious international meetings. The research showed that the best way to use antibiotics to contain their resistant strain was to rotate the three different types available, A, B, and C.
Dr. Hardy’s group took a different approach. They quickly recognised that numerous different strains of resistant bacteria exist, so they took samples from 50 hospitals, after developing an effective and representative sampling strategy. Their research was not as “cutting edge” as Dr. Laurel’s, and largely used established techniques. They published relatively little in scientific journals, but placed greater emphasis on ensuring that every doctor and nurse was aware of their findings. They achieved this by publishing in medical magazines and presenting results at meetings attended by medical practitioners. Their research showed that the best antiresistance strategy, overall, was to rotate use of antibiotics A and B, and only use C as a last resort. Indeed, they showed, convincingly, that rotating A, B, and C was likely to lead to the rapid evolution of resistance to all three groups in many bacterial strains, putting patients’ lives at risk.
So, if you were on a review panel, how would you assess the “quality of science” of Dr. Laurel and Dr. Hardy? Surely, on grounds of innovation, originality, and publication record (key criteria in most assessment exercises), Dr. Laurel would win hands down. Does this seem fair to Dr. Hardy, whose research has the potential to save lives and is far more focused on meeting the objective of the sponsors? I think not. However, what if the objective of the research had been purely to develop novel methods for studying antibiotic resistance, without any requirement for practical recommendations? In this case, Dr. Laurel’s research would certainly justify greater recognition for its superior level of innovation. Clearly, any assessment of the “quality of science” must take on board the aims and objectives behind the research, and these might well differ for “pure” and “applied” research projects.
So, what is the critical difference between “pure” and “applied” research? To me, pure science can be considered primarily an “end in itself,” whereas applied science is a “means to an end.” In relation to weed research, that “end” is better practical weed management. I should emphasise that I am not suggesting that pure research is irrelevant to weed management; that would be foolish indeed. The fact that about 63% of the estimated 114 million ha sown with GM crops worldwide in 2007 possess herbicide-resistance traits is a good example of a practical development in weed management that has its origin in fundamental studies on the genetic manipulation of plants.
Cousens (1999), in a very thought-provoking paper, raised the issue of whether one should accept a different set of values for judging weed science because it is an “applied science.” He also argued that much “weed science” is, in reality, “weed technology.” Those are pertinent issues, but I would argue that, when judging the value of any applied research project, the critical point is how the outputs relate to the scope of the work, regardless of whether it is labelled “science” or “technology.”
One might argue that “quality of science” is but one of many assessment criteria, and that other factors, such as “public good,” should be given equal, or greater, weighting. I agree, but believe that, at least in Europe, a disproportionate weighting is given to journal “impact factors” and citation indices. The “h” index (Hirsch 2005) is also growing in popularity in many “quality of science” exercises, as it provides a simple measure of the broad impact of an individual’s publication record. It is significant that Hirsch’s paper deals almost exclusively with the “impact” of publication in the scientific literature. That might be appropriate for assessing pure science, but is that really the critical factor for assessing the value of applied science, such as weed research, or indeed the relative merits of Dr. Laurel’s and Dr. Hardy’s research in the example above?
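To make that metric concrete: a researcher has an h-index of h if h of their papers have each been cited at least h times. Below is a minimal sketch in Python of the calculation, assuming nothing more than a list of per-paper citation counts; the counts in the example are hypothetical.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical citation counts for two publication records:
print(h_index([48, 33, 30, 12, 7, 5, 2]))  # 5
print(h_index([9, 4, 3, 1]))               # 3
```

Note what the calculation sees: journal citations and nothing else. Uptake of results by practitioners, of the kind Dr. Hardy’s group pursued, is invisible to it.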
In my opinion, applied research should not be judged by the same criteria as pure research. Certainly in the UK, it is easy to find examples of applied research that have been highly praised by sponsors, but severely criticized in research assessment exercises as a consequence of appraisal using criteria more appropriate for pure research. The motivational effects of being patted on the back one day, and proverbially stabbed in the back the next, are not good. In relation to weed research, it should never be forgotten that, however great the “impact” of a publication or the “quality of the science,” it achieves nothing in terms of improving our ability to manage weeds until the information is used in practice.

Weed Science, 56:337. 2008
DOI: 10.1614/WS-08-037.1
