Model Reconciliation Council

The Model Reconciliation Council is an in-person, structured working group designed to help a scientific field move from fragmented, implicit disagreement toward explicit, evidence-based recommendations for model use. It is not primarily a technical benchmarking exercise. It is a coordinated process for surfacing, structuring, and resolving differences in modeling philosophy, assumptions, and interpretation in a way that produces durable guidance for the broader community.

Across many areas of science, models accumulate faster than shared understanding. Different research groups develop models under different assumptions, for different purposes, and evaluate them using different criteria. Over time, this produces not just technical divergence, but epistemic divergence: disagreement about what models are for, what counts as explanation, what constitutes validation, and how results should be interpreted. These differences are rarely addressed directly. Instead, they persist across papers, reviews, and informal debate, making it difficult for the field to converge on best practices.

The Council is designed to address this problem directly. It brings together model developers and users in a structured setting where differences are made explicit, treated as objects of analysis, and translated into a shared framework for understanding. The goal is not to eliminate disagreement, but to organize it—so that differences in modeling approach become legible, comparable, and ultimately useful.

The process begins by recognizing that disagreement in modeling is often rooted in differing purposes and assumptions rather than simple performance differences. Some models are designed for prediction, others for explanation, others for scenario exploration or theoretical insight. These distinctions are rarely formalized, yet they shape how models are built, evaluated, and defended. The Council therefore begins by eliciting and documenting these underlying commitments. Participants articulate, in a structured and comparable form, the intended purpose of their models, the assumptions they consider acceptable, the kinds of evidence they trust, and the criteria they use to judge success or failure. This step establishes a shared understanding of what is being compared and prevents later disagreements from being misinterpreted as purely technical.
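As a rough sketch, the structured, comparable form this elicitation step calls for might resemble the record below. The field names are illustrative assumptions, not a prescribed Council template:

```python
from dataclasses import dataclass

@dataclass
class ModelCommitments:
    """One participant's structured statement of modeling commitments.
    Field names are illustrative, not a prescribed Council format."""
    model_name: str
    intended_purpose: str        # e.g. "prediction", "explanation", "scenario exploration"
    key_assumptions: list[str]   # assumptions the developer considers acceptable
    trusted_evidence: list[str]  # kinds of evidence the developer trusts
    success_criteria: list[str]  # how success or failure is judged

def purposes_by_model(records: list[ModelCommitments]) -> dict[str, str]:
    """Place stated purposes side by side, so later disagreements can be
    read against intent rather than mistaken for purely technical ones."""
    return {r.model_name: r.intended_purpose for r in records}
```

Recording commitments in one shared shape is what makes them comparable: a disagreement between a model built for prediction and one built for explanation can then be recognized as a difference of purpose rather than of quality.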

Building on this foundation, the Council develops a shared evaluative framework that allows different modeling approaches to be examined within a common analytical space. Rather than asking which model is best in general, the process focuses on questions that all approaches can recognize as meaningful: under what conditions a model performs as intended; where its assumptions become limiting; what types of evidence support or challenge its use; and how different models relate to one another in behavior and applicability. This reframing shifts comparison from competition toward clarification.

The Council is implemented as a two-meeting process with a structured period of analysis between meetings. The first meeting is designed to surface and structure disagreement. Participants are presented with shared data, standardized outputs, and common diagnostic representations, and are asked to interpret points of agreement and divergence across models. Importantly, the process requires participants to articulate competing interpretations of observed differences. These interpretations are recorded explicitly, without immediate resolution, creating a structured inventory of contested claims. The outcome of this meeting is not consensus, but a clear map of where and why participants disagree, along with a set of questions that can be examined further.
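A minimal sketch of what one entry in this inventory of contested claims might look like, assuming a simple record per divergence (the structure and names are hypothetical, not the Council's actual instrument):

```python
from dataclasses import dataclass, field

@dataclass
class ContestedClaim:
    """One entry in the structured inventory from the first meeting.
    Competing interpretations are stored side by side, without resolution."""
    claim_id: str
    observation: str                 # the shared result being interpreted
    interpretations: dict[str, str]  # participant -> their competing interpretation
    follow_up_questions: list[str] = field(default_factory=list)

def questions_for_analysis(inventory: list[ContestedClaim]) -> list[str]:
    """Collect every open question the inter-meeting phase should examine."""
    return [q for claim in inventory for q in claim.follow_up_questions]
```

The point of the structure is that disagreement is captured as data: each contested claim carries its competing readings and the questions that could discriminate between them.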

The period between meetings functions as a coordinated analytical phase. Project personnel translate the structured disagreements from the first meeting into a set of targeted analyses, comparisons, and summaries designed to make competing claims testable and comparable. This work may include additional model evaluations, sensitivity analyses, comparative summaries of assumptions, and synthesis of results across cases. Participants remain engaged through focused checkpoints, where they review interpretations, confirm that their models are represented accurately, and refine the questions under consideration. The purpose of this phase is to transform disagreement into a form that can be evaluated systematically, reducing ambiguity and isolating the sources of difference.
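One way to keep this translation from disagreements to analyses systematic is a simple traceability check: every contested claim should be targeted by at least one planned analysis. A hedged sketch, with hypothetical names:

```python
def untraced_claims(claim_ids: list[str], planned_analyses: dict[str, list[str]]) -> list[str]:
    """Return contested claims that no planned analysis addresses, so nothing
    surfaced in the first meeting is silently dropped between meetings.
    `planned_analyses` maps an analysis name to the claim ids it targets."""
    covered = {cid for targets in planned_analyses.values() for cid in targets}
    return [cid for cid in claim_ids if cid not in covered]
```

A checkpoint review can then confirm both that each claim is covered and that the coverage represents each participant's position accurately.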

The second meeting is designed to convert this structured disagreement into a set of community recommendations. Participants revisit the contested claims identified in the first meeting, now informed by the intervening analyses. Competing interpretations are evaluated against the shared body of evidence, and participants work to clarify which differences can be resolved, which reflect distinct modeling purposes, and which remain open questions. The emphasis is on producing a framework that accurately represents the range of approaches while providing clear guidance for their use.

The resulting recommendations do not take the form of a ranked list of models. Instead, they provide a structured map of the modeling landscape: identifying major modeling approaches, their intended purposes, their key assumptions, the conditions under which they are most appropriate, and their known limitations. Where disagreement remains, it is documented explicitly rather than obscured. This produces guidance that is both actionable and transparent, allowing users to select and interpret models in a way that is aligned with their specific goals.
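To illustrate the contrast with a ranked list, one entry in such a landscape map might carry fields like the following; the field names mirror the dimensions named above but are assumptions of this sketch, not a fixed reporting format:

```python
from dataclasses import dataclass, field

@dataclass
class LandscapeEntry:
    """One entry in the modeling-landscape map: a row, not a rank.
    Field names are illustrative, not a prescribed reporting format."""
    approach: str
    intended_purpose: str
    key_assumptions: list[str]
    appropriate_conditions: list[str]  # when the approach is most appropriate
    known_limitations: list[str]
    open_disagreements: list[str] = field(default_factory=list)  # documented, not obscured

def approaches_for_purpose(landscape: list[LandscapeEntry], purpose: str) -> list[str]:
    """Let a user select approaches aligned with their own modeling goal."""
    return [e.approach for e in landscape if e.intended_purpose == purpose]
```

Because remaining disagreements are a field of the record rather than an omission, a user filtering the map by purpose sees both the guidance and its open questions.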

For this process to succeed, it must be both rigorous and fair. The Council is therefore designed to ensure that all participants are accurately represented, that interpretations are grounded in shared evidence, and that conclusions are traceable to explicit reasoning. By separating critique of models from critique of individuals, and by framing outcomes in terms of domains of validity rather than overall correctness, the process maintains the professional integrity of participants while still enabling strong evaluation.

The primary product of the Council is a citable report that synthesizes these outcomes into a set of best practices for the community. This report provides a common reference point for future work, enabling more consistent evaluation, clearer communication, and more cumulative progress. More broadly, the Council establishes a replicable model for how scientific fields can address persistent disagreement: by making assumptions explicit, structuring comparison, and grounding conclusions in shared evidence.

In this way, the Model Reconciliation Council extends beyond a single application. It represents a general approach to integrating diverse modeling traditions within a coherent framework, enabling scientific communities to move from fragmented debate toward shared understanding without requiring uniformity of approach.