On July 20, 2022, a seemingly unremarkable document was released in Brussels. The Agreement on Reforming Research Assessment was jointly signed by dozens of European research funding agencies, research institutes, national evaluation bodies, and scholarly organizations. Although the document—23 pages in total—attracted little public attention on the day of its release, it quickly became the central text of a research-assessment reform movement sweeping across Europe and, increasingly, the world. Two years later, an international coalition named “CoARA” has taken shape, attracting more than 730 member institutions globally—a pace of expansion rarely seen in academic governance.
CoARA, formally known as the Coalition for Advancing Research Assessment, is not a top-down authority. It was jointly initiated by the European Science Foundation, All European Academies, Science Europe, and the European Commission. These organizations play pivotal roles in Europe’s research ecosystem: they represent research funders, national academy systems, and multinational research-policy coordination mechanisms. In other words, CoARA’s founders already possessed the institutional networks and policymaking influence needed to drive research-assessment reform—crucial conditions that enabled the coalition’s rapid expansion.
Shortly after the agreement’s publication, the four institutions established the CoARA Secretariat to coordinate membership applications, provide toolkits, organize training, and monitor reform progress. Unlike traditional bureaucratic structures, CoARA’s governance operates more like an “open network”: all member organizations—whether national academies, universities, research funding agencies, or evaluation institutions—join on equal footing and have the right to participate in shaping the rules. Lidia Borrell-Damián, Secretary General of Science Europe, emphasized repeatedly at public forums: “CoARA’s strength comes from its members, not its managers.” This horizontally structured organization has significantly increased the coalition’s appeal, making universities more willing to engage in reform discussions without political pressure.
The true reason CoARA was able to attract hundreds of institutions within two years, however, lies in the fact that the agreement directly addressed shared pain points in research systems. In the opening pages, the drafters pointed out bluntly that research assessment had become trapped in a culture defined by publication counts, journal impact factors, and rankings—a culture that undermines research integrity, encourages short-termism, restricts collaboration, and forces researchers to chase metrics instead of meaningful scientific questions. These problems are particularly acute in Europe, where cross-border researcher mobility is common and divergent evaluation systems impose continuous hidden pressure. The agreement’s call for “common principles” met a deep structural need. More importantly, it proposed a concrete, institutionalized, and actionable reform framework based on four core commitments—many of which challenge entrenched practices.
The most striking commitment requires research institutions to stop the inappropriate use of journal impact factors, the H-index, and journal rankings. The agreement states clearly that the impact factor is designed to evaluate journals—not individual papers—and should not be used to assess researchers; that H-indices and citation counts must not serve as proxies for research quality; and that global university rankings, whose algorithms lack transparency and bear no direct relationship to research quality, should not influence evaluation processes. In many European universities, this commitment represents a cultural shock: although researchers rarely admit it openly, journal prestige continues to shape hiring, promotion, and grant review in profound ways.
The second commitment expands the definition of research contributions. The agreement stresses that the value of research activities extends far beyond papers, and should include datasets, software, algorithms, models, research-infrastructure development, open-science practices, academic service, teamwork, student supervision, policy advice, and science communication. This clause has drawn close attention from research administrators, because it redefines the boundaries of “academic labor”—much of which has long been invisible within assessment systems but is now expected to receive formal recognition.
The third commitment restores qualitative assessment as the core evaluative method. The agreement states that peer review—not easily quantifiable numerical indicators—should be the primary mechanism for judging research quality; that peer review must be “transparent, verifiable, and contestable”; and that safeguards must be established to reduce bias, including multi-stage or cross-disciplinary reviews where appropriate. While quantitative indicators may be used as supplementary information, they must not determine outcomes. This “qualitative-first” philosophy has faced criticism for being difficult to operationalize, yet it directly responds to widespread dissatisfaction with metric-driven research cultures.
The fourth commitment institutionalizes the reform: every organization must establish public and visible reform processes, including drafting action plans, building internal training systems, developing transparent standards, publishing progress reports, sharing practices with other institutions, and periodically updating its assessment system. Reform is therefore not symbolic but a structured effort with timelines, responsibilities, and oversight.
Two years after the agreement’s release, implementation has unfolded unevenly across Europe. Nordic countries acted fastest. Aalto University in Finland and the Norwegian University of Science and Technology (NTNU) fully adopted “narrative CVs,” requiring applicants to describe contributions across dimensions such as research quality, open science, teamwork, and societal impact instead of listing publications. Several Dutch universities formally banned the use of impact factors, even specifying in job advertisements that “publication counts will not be used as evaluation criteria.” Meanwhile, the European Research Council (ERC) updated its application template to replace publication lists with contribution narratives.
Elsewhere, particularly in parts of Eastern and Southern Europe, progress has been slower. Some universities remain tied to traditional indicators—not out of resistance but because national evaluation and funding systems still depend heavily on publication counts, making independent institutional reform risky. At one CoARA workshop, a vice-rector from an Eastern European university remarked: “We are not opposed to reform, but we cannot change alone while national rules remain unchanged—we would be punished in competition.” This dilemma is explicitly acknowledged in the agreement’s annex: research organizations “are constrained by national legislation and international competition,” and without system-level policy alignment, individual institutions cannot bear the risk of reform.
Researchers themselves express mixed feelings. Early-career scholars generally welcome reforms that recognize datasets, code, collaboration, and supervision—“the work we actually do,” as one put it. Yet doubts persist about potential subjectivity in qualitative assessment. “If not impact factors, then who decides what counts as ‘high-quality’?” asked a young biologist. Some worry that reform may reinforce the authority of senior insiders, while others fear that narrative CVs will increase administrative burdens, forcing scientists to spend more time “writing stories.”
Despite these concerns, CoARA continues to expand. By late 2024, several European funding organizations had announced plans to complete the first round of internal assessment reforms by 2025. Research institutions in Asia-Pacific countries are also exploring membership or observer status. Even in post-Brexit Britain, several research bodies are independently pursuing similar reforms to maintain alignment with international research norms.
For the global research community, the significance of the agreement extends far beyond Europe. It signals a profound shift following years of metric-driven evaluation, rising commercial influence in scholarly publishing, reproducibility crises, and mounting academic pressure. Whether the reform succeeds remains to be seen, but the agreement has already changed the terms of debate: research assessment is no longer merely an administrative technique—it has become a central topic in the scientific community’s self-governance.
The next few years will be critical in determining the trajectory of this institutional experiment. Will research culture change? Will early-career paths become fairer? Will collaboration and open science be properly valued? The answers will shape not only the success of the reform but also the future landscape of scientific practice.
Reference: CoARA (2022). Agreement on Reforming Research Assessment. https://zenodo.org/records/13480728
