Workshop on Research Ecosystems and Peer-Review Practices (REPP)

Date, Time, and Location

Monday, January 13, 2025, 9:30 AM – 3:30 PM

MIT Bldg 46-3310

Overview

The REPP Workshop aims to deepen understanding of, and skills in, peer review within research ecosystems, with a primary focus on early-career research trainees (students, postdocs, technicians, and research scientists). The morning features talks by three invited professors, offering insights from their studies of peer review in scientific ecosystems. The afternoon includes an interactive discussion and hands-on training in publication peer review.

Program

Section I: Lectures on Peer Review in Research Ecosystems

9:30–9:35    Opening Remarks & Overview
9:35–10:25   Charles Yokoyama (Fujita Health University; Former Neuron Senior Editor)
             “The Future of Scientific Ecosystems”
10:30–11:25  Micah Altman (MIT CREOS)
             “Scholarly Peer Review: How is it used? How well does it work?”
11:30–12:25  Pierre Azoulay (MIT Sloan)
             “Risk & Return in Scientific Research”
12:30–1:30   Lunch break

Section II: Publication Peer-Review Practices – Human vs. AI (with Charles Yokoyama)

1:30–2:30    Discussion on Peer-Review Practices
2:40–3:25    Publication Peer-Review Training (laptops with Wi-Fi required)
3:25–3:30    Closing Remarks

Section I Invited Speakers

Charles Yokoyama

The Future of Scientific Ecosystems

Abstract: The current scientific ecosystem is the most powerful knowledge-generation machine in the history of humankind. However, due to sociological problems, it is not as efficient at producing knowledge as it could be, as exemplified by the reproducibility problem. Many efforts to improve the scientific ecosystem are in progress under the umbrella of open science and transdisciplinary collaboration. Here, I assert that these efforts will fail in the long term, because the major epistemology of science, material reductionism, and its linguistic methodology, English, can never achieve ontological coherence. Fortunately, artificial intelligence and holistic philosophical and scientific methods have the potential to build an accurate ontology of nature using mathematical and computational tools on large data. Thus, future scientific ecosystems will use AI for real-time computational peer review.

Biography: Charles Yokoyama is a Professor at Fujita Health University in Japan, with joint affiliations at the International Center for Brain Science and the Office for Research Administration. In addition to supporting research at the university, he maintains academic interests in the epistemology of neuroscience, scientific publishing and communication, and research on anomalous phenomena. He has served in executive management at The University of Tokyo (International Research Center for Neurointelligence) and at RIKEN (Brain Science Institute). He has also served as Senior Scientific Editor for Neuron, as a policy writer for the G7 Summit science academies, and as Associate Editor for BrainFacts.org. He completed a postdoc at Cornell University, a PhD at the University of Washington, and an MS at MIT.

Micah Altman

Scholarly Peer Review: How is it used? How well does it work?

Abstract: This talk provides an overview of scholarly peer review and discusses some open research questions about it. It begins by reviewing the purposes for which peer review is used and discussing how its implementation and role vary across different contexts. It then summarizes the state of the art, characterizes open research questions, and discusses the need to design peer-review policies and systems that enable quality improvement.

Biography: Dr. Micah Altman is a social and information scientist at MIT’s Center for Research on Equitable and Open Scholarship (CREOS). He conducts research, provides public commentary, and collaborates in initiatives related to how information technologies change politics, society, and science. He is the author of over one hundred scientific and scholarly articles, as well as a spectrum of books, opinion pieces, databases, and software packages.

Pierre Azoulay

Risk & Return in Scientific Research

Abstract: Most researchers would, upon just a bit of introspection, agree that not all the research projects they undertake display the same level of ambition. And most of us operate on the assumption that, just as in the world of finance, scientific risk and return are positively correlated, so that more ambitious projects are more likely to fail. But is this true? How should we think about scientific risk, and the trade-off between risk and return in science? How could we measure scientific risk? Is it true that scientific institutions, such as peer review, punish risk-taking? And if there is such a penalty for risk-taking, is it “excessive” by some benchmark? The talk will review empirical work that seeks to address these questions.

Biography: Pierre Azoulay is the International Programs Professor of Management at the MIT Sloan School of Management and a Research Associate at the National Bureau of Economic Research. His research focuses on the impact of different funding regimes on the rate and direction of scientific progress. He is also part of a large team surveying management practices and culture in scientific laboratories. His latest projects examine the complex relationship between risk and return in scientific research. At MIT Sloan, he teaches courses on competitive strategy, technology strategy, and platform strategy, as well as a PhD class on the economics of ideas, innovation, and entrepreneurship. He holds a Diplôme d’Études Supérieures de Gestion from the Institut National des Télécommunications, an MA from Michigan State University, and a PhD in Management from MIT.

Section II Outline

Peer review is fundamental to steering the scientific ecosystem. At scientific journals, editors manage the peer review of research manuscripts by recruiting reviewers with expertise relevant to the study under review. How would early-career researchers compare with such professional scientists in reviewing the same manuscript? And how would both compare with a ChatGPT-4o review of that manuscript? In this workshop, we will run this experiment. Participants will read and review an unpublished manuscript and compare their reviews to those written by reviewers for a high-profile journal. They will then compare both sets of reviews with a ChatGPT-4o review of the same manuscript and learn how to improve the AI-derived review with human scientific insight. Attendees will discuss the use of AI in science generally and in peer review specifically. Our goal is to help participants take a more active role in shaping the future of science.

Inquiry Contacts

Ray Lee (raylee@mit.edu; workshop design and content)

Jiafu Zeng (jiafuz@mit.edu; workshop administration and logistics)

Sponsors

MIT School of Science—SQoL Grant (AY24-25 Fall #004106)

MIT Picower Institute for Learning and Memory