This session will introduce common ways in which systematic review and map results can be visualised. Visualisations will be described that help to display the methods used in the review, the nature of the evidence identified, the results of assessments of relevance and validity, and any synthesis of study findings. Specific examples will include flow diagrams, evidence atlases, heat maps, pivot tables/charts, and forest plots. The session will include a practical exercise trying out some recent developments in software for visualising systematic review and map outputs.
To begin, watch the following presentation:
You can find the lecture handouts here.
Next, read the results section of the following systematic review on grazing impacts in temperate and boreal protected forests, paying close attention to the visualisations:
Bernes, C., Macura, B., Jonsson, B.G., Junninen, K., Müller, J., Sandström, J., Lõhmus, A. and Macdonald, E., 2018. Manipulating ungulate herbivory in temperate and boreal forests: effects on vegetation and invertebrates. A systematic review. Environmental Evidence, 7(1):13
In addition, if you're interested in meta-analysis and quantitative synthesis, you may want to check out this paper on visualising meta-analyses:
In this practical, you will take a look at several different forms of data visualisation. This exercise involves a bit of reading and looking at examples.
One of the most informative ways of describing the methods used in your systematic review or map and the fate of the evidence you identified is a flow diagram. Review flow diagrams detail the volume of evidence proceeding through each stage of the review process so that the reader can easily understand the main methods used, how much research was identified, and how much was retained in the final synthesis. Here's an example of a flow diagram that conforms to the PRISMA reporting standard (click on the image to see the review it came from).
Various resources are available to help you to fill in a flow diagram for a systematic review or map. Another set of reporting standards alongside PRISMA is ROSES (RepOrting standards for Systematic Evidence Syntheses). The developers of ROSES have provided an editable flow diagram for systematic reviews and systematic maps. You can access it here.
Download the ROSES flow diagram for systematic reviews using the link above, and fill it in with the following information from a recent systematic review:
Once you've had a go at filling the flow diagram in, download the model answers here to compare your work. How did you do?
It's pretty likely that you've already produced countless figures and tables for data that you have collected or analysed, so we won't dwell on this part of the practical. It is worth looking through a few example systematic reviews, though, to get a feel for the kinds of plots that are often reported. Check out some of the following examples:
Cheng, S.H., MacLeod, K., Ahlroth, S., Onder, S., Perge, E., Shyamsundar, P., Rana, P., Garside, R., Kristjanson, P., McKinnon, M.C. and Miller, D.C., 2019. A systematic map of evidence on the contribution of forests to poverty alleviation. Environmental Evidence, 8(1):3
Haddaway, N.R., Hedlund, K., Jackson, L.E., Kätterer, T., Lugato, E., Thomsen, I.K., Jørgensen, H.B. and Isberg, P.E., 2017. How does tillage intensity affect soil organic carbon? A systematic review. Environmental Evidence, 6(1):30
Systematic mapping produces interactive databases of evidence as a key output, but databases of studies can be really useful for systematic reviews, too. Typically, these would be spreadsheets consisting of studies as different lines, and information relating to various aspects of the study design and PICO elements in different columns. Databases can be made interactive by ensuring some columns include consistently reported categorical and numerical data that can be easily 'filtered' and analysed/summarised quantitatively.
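To make the idea concrete, here is a minimal sketch of such a study database using pandas. The studies, column names, and codes are all invented for illustration; the point is that consistently coded categorical columns let a reader filter the table, just as they would in an interactive spreadsheet.

```python
import pandas as pd

# Hypothetical review database: one row per study, with consistently
# coded categorical columns (all values below are invented examples).
studies = pd.DataFrame({
    "study_id": ["S1", "S2", "S3", "S4", "S5"],
    "country": ["Sweden", "Finland", "Sweden", "Estonia", "Germany"],
    "intervention": ["fencing", "culling", "fencing", "fencing", "culling"],
    "outcome": ["vegetation", "invertebrates", "vegetation",
                "invertebrates", "vegetation"],
    "year": [2004, 2011, 2015, 2009, 2018],
})

# 'Filtering' the database as a reader might: fencing studies that
# measured vegetation outcomes.
subset = studies[(studies["intervention"] == "fencing")
                 & (studies["outcome"] == "vegetation")]
print(subset["study_id"].tolist())  # → ['S1', 'S3']
```

Because the intervention and outcome columns use a small, fixed vocabulary rather than free text, this kind of filtering (and the quantitative summaries that follow) stays reliable.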
Take a look at the following examples of review databases (including those from systematic maps) and consider what makes them user-friendly and interactive (and perhaps how you might make them even more interactive).
As you learnt in the presentation, heat maps are cross tabulations of categorical variables, showing the volume or strength of evidence included in the review (typically the number of studies belonging to each pair of codes). These visualisations can be really important in identifying knowledge gaps and clusters. Heat maps need categorical variables because we need a discrete (and manageable) number of groups across which we can sum the number of studies. Quantitative variables are better suited to other figure types, such as histograms or scatter plots.
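Under the hood, a heat map is just a two-way count table. The sketch below builds one with pandas from invented coding data: each row is one study, and the cross tabulation counts studies per pair of codes.

```python
import pandas as pd

# Invented coding data: one row per study, two categorical variables.
studies = pd.DataFrame({
    "intervention": ["fencing", "fencing", "culling", "culling",
                     "fencing", "culling", "fencing"],
    "outcome": ["vegetation", "invertebrates", "vegetation", "vegetation",
                "vegetation", "invertebrates", "invertebrates"],
})

# The cross tabulation IS the heat map's data: cells with high counts
# are knowledge clusters, cells with zero are knowledge gaps.
heat = pd.crosstab(studies["intervention"], studies["outcome"])
print(heat)
```

Colouring the cells by their counts (for example with a spreadsheet's conditional formatting, or a plotting library) turns this table into the familiar heat map graphic.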
Heat maps can also visualise a third variable by using shapes and colours, for example in this bubble plot that displays the type of study (impact evaluation, high/medium/low confidence systematic reviews, and systematic review protocols). Click on the image below to visit the evidence and gap map produced by 3ie to learn more.
Heat maps can be constructed manually, but this can be quite labour intensive. Instead, reviewers may want to use simple tools to design and construct heat maps. Perhaps the most familiar of these is the pivot table: an interactive tool that allows you to easily select and change which variables to display.
Take some time to examine the pivot table in this systematic map database. Which variables are easier to display in a heat map? Can you think of ways to improve the visualisation of some of the variables by changing how they are coded?
You can also produce heat maps easily with the free online tool EviAtlas.
Evidence atlases are visualisations of studies in a systematic review or map across geographical space. They are particularly useful for topics where the spatial context is believed to be important. Evidence atlases need all studies to have information on their national/regional location, or specific latitude and longitude: this information can then be used to place studies into regional blocks:
or exact locations:
The real advantage of evidence atlases becomes clear when they are made interactive. This allows the user to zoom in and out on a map, investigate details of individual studies, and even manipulate the visualisations themselves to tailor the way information is displayed (for example by colouring study icons by publication year). Click the image below to explore the interactive evidence atlas.
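As a small illustration of the two mapping approaches mentioned above, the sketch below (with invented coordinates) shows how per-study latitude/longitude can either be plotted as exact points or aggregated into coarse regional blocks by binning the coordinates.

```python
from collections import Counter

# Invented study coordinates, purely for illustration.
studies = [
    {"id": "S1", "lat": 60.2, "lon": 17.9},
    {"id": "S2", "lat": 62.9, "lon": 27.7},
    {"id": "S3", "lat": 58.4, "lon": 26.7},
]

# An interactive atlas plots each study at its exact (lat, lon).
points = [(s["lat"], s["lon"]) for s in studies]

# A regional-block map instead counts studies per coarse grid cell
# (here, 10-degree bins).
blocks = Counter((int(s["lat"] // 10 * 10), int(s["lon"] // 10 * 10))
                 for s in studies)
print(blocks)
```

Interactive atlases built with tools like EviAtlas handle the plotting for you; the key data requirement is simply a clean pair of coordinate columns in the study database.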
Conceptual models (also known as logic models and theories of change visualisations) are vital tools for trying to understand how complex systems function, particularly where reviewers are interested in how multiple contextual factors affect a cause-effect relationship. They help to identify important influential factors that should be included in assessments of all studies and to get an overview of a complex system. Click on the image below to visit the evidence and gap map it comes from.
Forest plots are the main means by which systematic reviewers visualise the results of meta-analyses. They display each study's effect size (usually a square whose size corresponds to the study's weight, see below) and a measure of variability (usually horizontal lines, see below), along with an overall summary effect and, typically, its 95% confidence interval (usually a diamond, see below):
Forest plots usually include a line of no effect (a full line in the example above) and a line showing the overall effect estimate for the analysis (a dotted line in the example above). Where a study's confidence interval overlaps the line of no effect, that study would find no significant effect on its own. Despite this (as demonstrated above), when analysed together, the body of evidence can show a significant relationship.
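The numbers behind a forest plot can be sketched in a few lines. Using invented effect sizes and standard errors (not data from any real review), the snippet below computes the fixed-effect inverse-variance summary: the weights set the sizes of the squares, and the summary estimate with its 95% confidence interval defines the diamond.

```python
import math

# Invented effect sizes (e.g. log response ratios) and standard errors
# for four hypothetical studies.
effects = [0.10, 0.35, -0.05, 0.25]
ses     = [0.20, 0.15,  0.30, 0.10]

# Fixed-effect inverse-variance weights: the square on a forest plot
# for each study is drawn proportional to its weight.
weights = [1 / se**2 for se in ses]
summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_summary = math.sqrt(1 / sum(weights))

# 95% confidence interval of the summary effect (the diamond's width).
lo, hi = summary - 1.96 * se_summary, summary + 1.96 * se_summary
print(f"summary = {summary:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Note that two of the individual confidence intervals here cross zero, yet the pooled interval does not, which is exactly the pattern described above: studies that are individually non-significant can still contribute to a significant overall effect.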
Forest plots can be supplemented with other visualisations for meta-analysis results, including meta-regressions that display effect sizes across a continuous independent variable:
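A meta-regression fits a (weighted) trend line through the study effect sizes against a continuous moderator. This is a minimal sketch with invented data, using a weighted least-squares fit where each study is weighted by the inverse of its standard error (as numpy's `polyfit` expects for Gaussian uncertainties); dedicated meta-analysis packages would add proper standard errors and heterogeneity statistics.

```python
import numpy as np

# Invented data: effect sizes regressed on a hypothetical continuous
# moderator (study duration in years).
duration = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
effects  = np.array([0.05, 0.12, 0.20, 0.31, 0.38])
ses      = np.array([0.10, 0.08, 0.12, 0.09, 0.11])

# Weighted linear fit; np.polyfit takes w = 1/sigma for Gaussian errors.
slope, intercept = np.polyfit(duration, effects, 1, w=1 / ses)
print(f"effect ~ {intercept:.3f} + {slope:.3f} * duration")
```

The fitted line would be drawn through a scatter of the study effect sizes, often with point sizes scaled by study weight, mirroring the squares of a forest plot.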
So you have now spent some time thinking about a variety of different ways to visualise different aspects of the systematic review and mapping process and findings. In the next session we'll spend some time on how we actually conduct the analyses in narrative, qualitative and quantitative synthesis.