
Arnold School of Public Health

John Ioannidis gets candid about his research…on research

April 27, 2015 | Erin Bluvas, bluvase@sc.edu 

[Editor’s note: please see the full video of his lecture for additional examples/details as well as more information on the studies he cites throughout the lecture. Photos of the event are also available.]

Renowned in scientific circles for challenging existing research practices and funding models, as well as for evaluating academic performance as a whole, Professor John Ioannidis of Stanford University opened his presentation at the Arnold School of Public Health’s Delta Omega Lecture Series by acknowledging the tension in the title of his talk. Indeed, the presentation title (“Objective Academic and Research Evaluation in the Modern University: Challenges and Opportunities for Quality Improvement”) contained seemingly contradictory terms in research language, such as “objective” and “quality,” as Ioannidis pointed out. “Quality is subjective,” he said, “so when we are trying to evaluate research, can we do that objectively?” His lecture also carried a constructive tension of its own: it did what Ioannidis does best, challenging audience members’ perceptions of how the scientific community currently assesses research and stimulating discussion about how and why it may be time for a different approach.

Throughout his presentation, Ioannidis shared research (much of it his own) about the science of research. He has looked at publishing trends (e.g., frequency of citations, publishing frequency/consistency by individuals, claims of statistical significance, replication of studies, and many other factors) in peer-reviewed journals across a wide variety of fields of scientific study. As he discussed these findings, he poked holes in many of the prevailing beliefs about what makes research valuable and how researchers’ success should be measured.

For example, the scientific community often assesses the value of research by whether it leads to statistically significant findings (96% of the 1.5 million studies pulled in a PubMed search found statistical significance) or by how many times the resulting paper is cited (yet the vast majority of NIH study section members have never had a highly cited paper). Similarly, key performance indicators for an individual researcher include the amount of funding they attain (what about innovative ideas that do not fit within the parameters of major funding?) and the number of publications they produce (only 1% of authors published a paper every single year in a sample of 15 million scientific papers over a 15-year period). Citations are another major marker of academic success and perceived prestige (94% of U.S.-based primary authors of papers with at least 1,000 citations are not currently serving on an NIH study section). [See video for more on this discussion.]

A single-minded focus on these current measures of productivity, Ioannidis warned, can lead to the exclusion of other factors that may be equally important or that could be assessed in different ways. “Right now we reward productivity, but maybe we should change this system,” he said. To ensure progress toward improving the system of evaluation, he calls for more evidence. Just as we use evidence, rather than committees, to evaluate the effects of public health interventions, Ioannidis said “we need empirical evidence—and not just committees—to decide how to change scientific practices.” He further suggested that these changes to evaluating individual academic performance should incorporate several characteristics, such as integrating the performance indicators into the rewards and incentives system. [See video for more on this discussion.]

Ioannidis himself is one of the most highly cited authors in the scientific world. His controversial paper, “Why Most Published Research Findings Are False,” is the most-accessed article in the history of the Public Library of Science (PLOS), with 1.2 million hits and counting. With a self-deprecating sense of humor, Ioannidis acknowledged the irony and offered some ways to make research findings “more true.” Rewarding large-scale collaboration and adopting a culture of replication are just two of his suggestions. “Research is the best thing that has happened to human beings, but it’s extremely difficult to evaluate,” Ioannidis concluded. Evaluating research, which by extension reflects on the researcher, the team, the department and the institution, is challenging but possible. Ioannidis advocates focusing on the quality of the broader practices a researcher applies rather than on singular projects.

“We need to find solutions that take into account a complex landscape…and enhance the science, not add bureaucracy,” he said. But first, “we need more empirical research…on research.”

 

