Approach to developing the Matching Q-M tool and its uses

The Matching policy and practice Questions to Methodological approaches (Matching Q-M) tool is an effort to connect two critical parts of evidence support systems:

  1. Converting a decision-making need into a specific type of question
  2. Identifying the most suitable methodological approaches to address that type of question.

A few considerations to take into account while using the Matching Q-M tool:

  1. Each decision-making stage includes different goals that may need to be achieved, and each goal includes multiple types of question that may be addressed.
  2. While this tool is built following a logical path (i.e., from the problem, through the options, up to implementation and evaluation), this does not mean that a given issue will require actors to go through every single stage. Please select the stage, goal and question that are most relevant to your specific needs.
  3. Questions or decision-making needs can arise from issues created in other stages of the policy cycle (e.g., no feasible option is available, an implementation strategy is unable to address a barrier, or the option has not had the impact it should have had or its impact has failed to be sustained). In these cases, please consider the new problem that arises and identify a question in the corresponding stage that matches this issue.
  4. There might be cases in which a decision maker asks a series of different iterative questions that take the form of several of the types of questions presented here. In these cases, please select the stage, goal and questions in an iterative way.
  5. The Matching Q-M tool is organized around questions, not the results that research answering these questions could yield. Hence, questions such as “What are the evidence gaps or the methodological limitations of the existing evidence for a given topic?”, which are essentially assessments of the answers to a specific type of question, were considered out of scope.
  6. There are several types of question that are addressed by building on other complex frameworks (e.g., agenda setting of a policy issue; the chances of a policy being developed, looking at institutions, interests and ideas or the political economy; or the external validity of a given body of evidence). These questions are important, and several types of questions from the taxonomy could contribute to conducting an assessment within these complex frameworks.

Because different disciplines use different terms for the same study design, alternative terms are included in parentheses (e.g., what epidemiologists call a longitudinal study, economists may call a panel study).

The tool was developed in two steps. First, a taxonomy of mutually exclusive and collectively exhaustive types of questions was built by critically and iteratively analyzing existing frameworks and the questions asked of evidence-support units around the world. Second, a Delphi study was conducted among methodological experts to create a ranking of the most suitable methodological approaches to address each of the questions included in this taxonomy.

Step 1: Building a taxonomy of types of questions

A cross-sectional study was conducted among all global units providing some type of evidence support at the explicit request of decision makers. These units were asked to provide either the questions they had received or the evidence products they had produced in response to those questions.

Later, an iterative conceptual analysis was conducted to build a mutually exclusive and collectively exhaustive list of questions, structured around the four main stages of the policy cycle. Complementarily, other existing frameworks (GRADE EtD and CFIR) were consulted to check whether any questions had been missed.

Finally, the draft taxonomy was presented at the Global Symposium on Health Systems Research 2022, where the audience and panelists provided critical feedback to ensure the comprehensiveness of the taxonomy.

The taxonomy comprises four decision-making stages; each stage includes different goals that may need to be achieved (14 in total), and each goal includes multiple types of question that may be addressed (41 in total).
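The stage → goal → question hierarchy can be represented as a simple nested mapping. The sketch below is a minimal Python illustration; the stage, goal and question labels are hypothetical placeholders, not the actual entries of the published taxonomy.

```python
# Hypothetical sketch of the taxonomy's stage -> goal -> question hierarchy.
# All labels are illustrative placeholders, NOT the published taxonomy entries.
taxonomy = {
    "Understanding the problem": {
        "Characterize the problem": [
            "What is the size of the problem?",
            "Whom does the problem affect?",
        ],
    },
    "Selecting options": {
        "Assess potential options": [
            "What are the expected benefits and harms of each option?",
        ],
    },
    # ... the remaining stages (e.g., implementation, monitoring and
    # evaluation) would follow the same structure.
}

def count_questions(tax):
    """Count the types of question across all stages and goals."""
    return sum(len(questions) for goals in tax.values() for questions in goals.values())

print(count_questions(taxonomy))  # 3 placeholder questions in this sketch
```

In the full tool, the same traversal over four stages would yield the 14 goals and 41 types of question described above.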

The taxonomy has been published in a paper in Health Research Policy and Systems.

Step 2: Matching questions to study designs

An online Delphi study was conducted with a sample of methodological experts to reach consensus on which study designs would be most suitable to answer each of the questions included in the taxonomy described above. For each question, the methodological experts ranked study designs by their suitability to answer it.

Experts were sampled based on their expertise across the eight forms of evidence and their geographical region. More than 40 methodological experts participated in at least one of the two rounds of the Delphi study, in which consensus was reached for 28 types of question.
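Consensus in a ranking Delphi is typically operationalized as an agreement threshold. The sketch below uses a hypothetical rule (a design reaches consensus when at least 70% of experts rank it first); this rule and the rankings are illustrative assumptions, not the criteria or data of the study.

```python
from collections import Counter

# Hypothetical rankings for one Delphi question.
# Each inner list is one expert's ranking of study designs, best first.
rankings = [
    ["RCT", "cohort", "case-control"],
    ["RCT", "case-control", "cohort"],
    ["RCT", "cohort", "case-control"],
    ["cohort", "RCT", "case-control"],
]

def top_ranked_consensus(rankings, threshold=0.7):
    """Return the design ranked first by >= threshold of experts, or None.

    The 70% top-rank threshold is an illustrative assumption, not the
    consensus rule actually used in the Delphi study.
    """
    top_counts = Counter(ranking[0] for ranking in rankings)
    design, count = top_counts.most_common(1)[0]
    return design if count / len(rankings) >= threshold else None

print(top_ranked_consensus(rankings))  # "RCT" (3 of 4 experts, 75%)
```

Questions where no design clears the threshold (as happened for 13 of the 41 types of question in the actual study) would return `None` and be carried to a further consensus process.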

The results of this second step were published in a second paper: “Matching the right study design to decision-maker questions: Results from a Delphi study”.

New developments

After the publication of the first version of the Matching Q-M tool, we have added new content in two main areas:

  1. Adding methodological approaches for different forms of evidence

For each form of evidence, we are working with methodological experts to determine which types of approaches would be most relevant to address each of the questions included in the Matching Q-M tool.

We acknowledge and appreciate the invaluable contribution of our methodological experts:

  • Modelling: Dr Danielle Currie, Deputy Director – Simulation Modelling at The Sax Institute. 
  • Evaluation: Dr Hugh Sharma Waddington, Assistant Professor, London School of Hygiene and Tropical Medicine.
  • Behavioral/implementation research: Dr Jeremy Grimshaw, Senior Scientist Ottawa Hospital Research Institute. 
  • Evidence syntheses: Prof Zachary Munn, Director, JBI Adelaide GRADE Center. For the evidence syntheses part, we are matching the different types of questions to the types of reviews identified by the Evidence Synthesis Taxonomy Initiative.
  2. Reaching consensus for the questions on which the Delphi study (Step 2) did not reach agreement

We convened a series of structured meetings with methodological experts, in which consensus was achieved on which study design should be used to answer each question. We appreciate the work of Prof Gordon Guyatt and all the methodological experts who contributed to this part.

Applications of the Matching Q-M tool

The Matching Q-M tool has been used in multiple contexts to pursue different objectives. Here is a non-exhaustive list of its potential uses:

  • Clarifying and narrowing down a decision-maker request into a more manageable question that can be addressed by evidence.

    Some examples in this area have been:

      • Getting to a clear question to be answered by an evidence synthesis to support a World Bank project on strengthening the health authority in Chile.
      • Identifying key priority areas where evidence can be leveraged on mental health and addictions issues at the Government of Antioquia, Colombia.

  • Facilitating the process of producing research agendas on key priority topics.

    Some examples in this area have been:

      • Creating research questions out of priority topics for a national research agenda on human rights at the Federal Ministry of Human Rights in Brazil.

  • Choosing the right methodological approach to use when facing a decision-maker question. Once a potential question has been clarified, the tool provides clear guidance on which methodological approach to use to provide a strong answer to the question.

  • Organizing a body of evidence by the type of question it is addressing.

    Some examples in this area have been:

      • Structuring a body of knowledge to present findings of an evidence synthesis to make recommendations about primary health care resilience in Latin America and the Caribbean.
      • Accelerating achievement of the SDGs by converting critical needs into questions that can be taken up by evidence producers. A presentation on this topic was given at the Global Evidence Summit in September 2024, and the slides are available.