Addressing the Growing Crisis in AI Conference Peer Review
The rapid expansion of artificial intelligence research has created an unprecedented challenge for the scientific community: a crisis in peer review. At this year's International Conference on Machine Learning (ICML 2025), Jaeho Kim, Yunseok Lee, and Seulki Lee received an outstanding position paper award for their work highlighting this issue and proposing solutions. Their paper, "Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards," examines the problems plaguing current systems and offers a path toward improvement. Understanding these challenges and the proposed remedies is vital for maintaining research integrity.
The Problem: An Overwhelming Surge in Submissions
The core of the problem stems from the exponential growth in paper submissions to leading AI conferences. NeurIPS received over 30,000 submissions this year, and ICLR saw a staggering 59.8% increase in a single year. This surge far outpaces the growth of the pool of qualified reviewers available to assess these submissions, and as a result, many papers are not receiving the rigorous scrutiny they deserve.
The Consequences of Inadequate Review
This reviewer shortage has serious repercussions. The fundamental function of peer review, acting as a gatekeeper for scientific knowledge, is compromised when reviews are rushed or superficial. Inadequate reviews can allow flawed research and inappropriate papers to enter the scientific record. Given AI's growing impact on society, this breakdown in quality control carries risks that extend well beyond academia: it can distort policy decisions and hinder genuine progress.
Proposed Solutions: Author Feedback & Reviewer Rewards
The position paper proposes two significant changes to address the peer review crisis: an author feedback mechanism and a reviewer reward system. These solutions aim to increase accountability and incentivize high-quality reviewing.
Author Feedback for Enhanced Accountability
The proposed author feedback system allows authors to formally evaluate the quality of the reviews they receive. For example, authors can rate how well a reviewer understood their work and flag suspected AI-generated content. The goal is not to punish reviewers but to establish a basic level of accountability, protecting authors from inadequate or biased assessments.
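The paper describes this mechanism at the policy level rather than prescribing a data format. As a minimal sketch of what a structured feedback record could look like, covering the two evaluation dimensions mentioned above, the following Python example uses entirely hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class ReviewFeedback:
    """One author's structured evaluation of a single review.

    Field names are illustrative only; the position paper does not
    specify a schema.
    """
    review_id: str                # identifier of the review being rated
    comprehension: int            # 1-5: how well did the reviewer understand the work?
    suspected_ai_generated: bool  # author flags possible AI-written content
    comments: str = ""            # optional free-text justification

# Example: an author rates a superficial review.
feedback = ReviewFeedback(
    review_id="r-1234",
    comprehension=2,
    suspected_ai_generated=True,
    comments="The review paraphrases the abstract without engaging the method.",
)
```

Aggregating such records per reviewer would give organizers a signal for spotting consistently poor reviewing without letting any single evaluation be decisive.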
Reviewer Incentives & Impact Metrics
To incentivize high-quality reviews, the paper proposes an incentive system with both immediate and long-term benefits. In the near term, author evaluations would determine eligibility for digital badges recognizing “Top 10% Reviewer” status on platforms such as OpenReview and Google Scholar. Over the longer term, a novel “reviewer impact score,” analogous to the h-index but computed from the subsequent citations of the papers a reviewer handled, would serve as a measure of lasting professional value. The aim is to raise the standing of reviewers who contribute meaningfully to the scientific process and, in doing so, improve the overall quality of peer review.
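The article describes the impact score only as h-index-like, so the exact formula is an assumption. As a rough sketch under that reading, the score below is the largest k such that the reviewer handled at least k papers that each went on to collect at least k citations:

```python
def reviewer_impact_score(citation_counts: list[int]) -> int:
    """h-index-style score over the citation counts of papers a
    reviewer handled: the largest k such that at least k of those
    papers each gathered at least k citations.

    This is a hypothetical reading of the metric, not the authors'
    exact definition.
    """
    score = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            score = rank
        else:
            break
    return score

# A reviewer whose reviewed papers earned [25, 8, 5, 3, 3, 0] citations
# scores 3: three of those papers have at least 3 citations each.
print(reviewer_impact_score([25, 8, 5, 3, 3, 0]))  # -> 3
```

Like the h-index, such a score rewards a sustained record of reviewing papers that prove influential, rather than a single lucky assignment.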
Conclusion: Reimagining Peer Review for the Future
The ongoing crisis in AI conference peer review demands immediate attention and innovative solutions. The proposed author feedback mechanism and reviewer reward system offer a promising framework for enhancing accountability, incentivizing quality, and ensuring that rigorous evaluation remains at the heart of scientific advancement. As AI research continues to evolve, adapting and strengthening our peer review processes is crucial for maintaining trust in the field and maximizing its positive impact on society.
Source: Read the original article here.