A well-known diabetes expert has abused his role as a peer reviewer for the renowned New England Journal of Medicine. The reviewer broke confidentiality and leaked a damaging report – about a substantial increase in the risk of heart attack associated with the popular diabetes drug rosiglitazone, sold under the brand name Avandia – to the drug’s manufacturer weeks ahead of publication (see Nature or ScienceNews).
This leak violates the principles of independence and integrity of scientific journals and all codes of scientific conduct. But there seems to be more to the whole story than the blatant violation of rules by an individual. The NZZ views this incident as the “gateway to a yawning abyss”: it reveals a fatal entanglement between the medical industry and medical research.
The problems of intertwining economic interests and scientific research arise mainly from two related features – the first fairly obvious, the second commonly overlooked.
(i) Industry finances expensive surveys – some of which could not otherwise be carried out. This entails an obvious danger of corrupting scientists, and indeed the history of medical research in the second half of the 20th century offers many examples of corrupted researchers. But why is it so difficult to regulate such financing in order to avoid corruption? There seem to be more than contingent
limitations to such regulations: (ii) In many cases of applied research – and medical research seems to be only one example here – it is not, and cannot be, clear-cut what constitutes truth or even empirical adequacy (compare, for example, what J. Ravetz refers to as “post-normal science”).
Statistical data always leave room for interpretation: What constitutes an outlier? What is a suitable sample? The answers to all these questions hinge on the experimental paradigm, the accepted research practice in the field, and so on. Close
interrelations between scientists – who, in the ideal case, are in search of some kind of truth – and industry agents – who have completely different, though in themselves not condemnable, goals – can thus lead the scientist onto a slippery slope: even with the very best intentions, working with or for companies may blur his or her view.
Especially in the field of medical research, with its close interrelations to pharmaceutical companies and its enormous impact on society, a thorough discussion that does not ignore the lack of absolutely objective measures within certain types of applied research is vitally needed. The recent and similar incidents cannot simply be viewed as arising from conflicts
cannot simply be viewed as arising from conflicts
of interest – how the scientist has to act in view of incomplete knowledge is
not clear. Hopefully, the recent incident will not simply
yield a pawn sacrifice – as happened only recently within the physics research
community where the opportunity to rethink the research funding on the occasion
where one scientists published falsified and fabricated
data, was missed.
Rafaela: do you really mean “(ii) In many cases of applied research … it is not and it cannot be clear cut as to what constitutes truth or even empirical adequacy”? Or do you just mean that it’s harder to know what is true?
I meant that in many cases of statistical analysis, there are at least contingent limitations to determining what is empirically adequate.
Recently, Elliott Sober and other philosophers have argued that the curve-fitting problem is not as severe as commonly perceived within the philosophy of science. However, these authors overlook that principles like the Akaike information criterion are not applied to the “raw data”, but only after it has already been decided, for example, what counts as an outlier.
An example of this is the determination of the so-called inertial range in a turbulent signal: compare Homann et al. (http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=1423444) and Biferale et al. (http://prola.aps.org/abstract/PRL/v93/i6/e064502).
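The point about the Akaike information criterion can be illustrated with a small sketch. All data and names below are hypothetical, invented purely for illustration; the only assumption is the standard Gaussian-error form of AIC for least-squares fits, AIC = n·ln(RSS/n) + 2k. The sketch fits a line and a parabola to the same data twice – once with all points, once after declaring the last point an outlier – showing that the AIC comparison only begins *after* the outlier decision has been made.

```python
# Hypothetical sketch: AIC-based curve selection presupposes a prior
# decision about which points count as outliers.
import numpy as np

def aic(x, y, degree):
    """AIC for a least-squares polynomial fit with Gaussian errors:
    n * ln(RSS / n) + 2k, where k counts the polynomial coefficients
    plus the estimated noise variance."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n = len(x)
    k = degree + 2  # (degree + 1) coefficients, plus the variance
    return n * np.log(rss / n) + 2 * k

# Nearly linear invented data with one suspicious last point.
x = np.arange(10.0)
y = 2.0 * x + np.array([0.1, -0.2, 0.15, -0.1, 0.05,
                        -0.15, 0.1, -0.05, 0.2, 0.0])
y[-1] += 15.0  # an outlier -- or a real effect?

for label, xs, ys in [("all points", x, y),
                      ("last point dropped", x[:-1], y[:-1])]:
    a1, a2 = aic(xs, ys, 1), aic(xs, ys, 2)
    best = "linear" if a1 < a2 else "quadratic"
    print(f"{label}: AIC(linear)={a1:.1f}, AIC(quadratic)={a2:.1f} -> {best}")
```

Which model the criterion prefers can depend on the outlier decision, which is exactly the step the criterion itself cannot settle.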