Independent SAGE and their continued quest for transparency
A lack of transparency around the scientific evidence, with decisions happening behind closed doors, has been a notable concern about the UK’s response to Covid-19. The last of the BBC daily briefings was held on 24 June 2020. During these, a Minister regularly stood between two senior members of the Scientific Advisory Group for Emergencies (SAGE). The job of SAGE is to bring together scientific experts and provide scientific advice to the Cabinet Office Briefing Room (COBR). Although decisions are made by the Government’s Ministers, not SAGE, the phrase ‘we are following the science’ was regularly used to justify decisions during these briefings.
Former Government Chief Scientific Advisor Sir David King established a separate group of experts called Independent SAGE in reaction to, and as part of, this growing push for transparency. He warned that the secrecy of the response could cause a loss of public trust in the science. Independent SAGE’s first meeting was held on 4 May 2020 and live streamed on YouTube. Their website takes an obvious ‘dig’ at the Ministers’ rhetoric with its tagline ‘Independent SAGE. Following the Science’.
Sir David King, Chair of Independent SAGE. Image source: Climaterepair via Wikimedia
Independent SAGE’s quest for transparency has not stopped since their inception. The latest change is live weekly meetings with their own scientific experts on their social media channels, in response to the end of the Government’s daily briefings; the first of these was held on 26 June 2020. There are now over 10 meetings that the public can view on Independent SAGE’s YouTube channel. Several reports and open letters are also available on Independent SAGE’s website. Topics include (but are not limited to) school openings, test and trace options and the impact of Covid-19 on Black and Minority Ethnic populations.
This blog post uses media coverage to better understand these notions of transparency, the differences between SAGE and Independent SAGE, and how SAGE’s approach to transparency has changed over the course of the pandemic. The sections below consider whether a new standard of transparency has been set for SAGE, but also highlight the potential issues of having more than one scientific advisory group (albeit one not official) and the limitations of their outreach.
The origins of Independent SAGE and ongoing calls for transparency
As soon as the UK’s ministers claimed to follow the science, calls for transparency of the scientific evidence started gaining traction. As commented by James Wilsdon, Digital Science Professor of Research Policy at the University of Sheffield: ‘transparency must now become the default operating mode across the SAGE process…We’re in a situation of what some call ‘post-normal’ science, where the facts are uncertain, values are in dispute, stakes are high and decisions are urgent’ (4 May).
Although these calls for transparency had been circulating since the beginning of the pandemic, a notable push to know the experts’ names came after a Guardian article on 24 April revealed that two political figures, Dominic Cummings and Ben Warner, had attended SAGE meetings. Despite Downing Street reporting that they did not participate, there was uneasiness in the media that the scientific advice was being politicised before it went up to COBR. Consequently, ‘concern over the secrecy of SAGE’s membership reached a new pitch’. Shortly after this, Independent SAGE was established. However, in his recent interview with The Times, Sir David King indicates that ‘the seeds of Independent SAGE began to germinate’ in February, when he noticed the divergence of the UK’s approach from the rest of the world and from the WHO’s advice to ‘test, test, test’.
As part of their quest for transparency, Independent SAGE wanted to distinguish between the science and the politics. On 22 May, news broke in the Mirror and Guardian accusing Dominic Cummings of breaking lockdown rules. Despite a press conference in the rose garden of 10 Downing Street to explain his movements, there was an explosion of media coverage, including calls for his resignation. Whether or not he broke the lockdown rules, articles emerged critiquing scientists for standing alongside ministers as they justified his actions. One article in The Guardian claimed the relationship between scientists and ministers had become ‘dangerously collusive’.
In Independent SAGE’s meeting on 28 May the panel was asked ‘Why do you think the Government is ignoring its own advice and why [have] Whitty and Vallance in particular stopped appearing?’. Although not explicitly mentioning Dominic Cummings, it is highly likely the ongoing media coverage of him in the previous week underpinned the question. Susan Michie, Professor of Health Psychology at UCL, responded: “One thing I think that is very important going forward is that scientific trust isn’t dented at all… it would be extremely helpful if our chief medical advisor and chief scientific officer were to give direct press briefings and direct briefings to the public to report on the scientific thinking of SAGE.” (47:20–48:17)
Independent SAGE’s online discussions take the format that Susan Michie referred to: no ministers, just scientists. In some cases, a reporter may help facilitate the discussion by welcoming members of the public to ask a question or asking one on their behalf. The most appropriate expert is then chosen to answer, and others may raise their hand to indicate they also have something to add. In some cases, the public’s questions are included in the report Independent SAGE send to the Government. An example is Section 7 in their report about schools: ‘School reopening: some of your questions answered’.
If watching Independent SAGE’s live discussions on YouTube, you can see the public’s comments. Some offer opinions about the topic being discussed, but the ones that stood out for me (obviously due to my interest in transparency) are those thanking Independent SAGE for exactly that: their transparency, openness and honesty. On 27 June a crowdfunding page was set up for Independent SAGE to continue their work. The comments here also provide interesting reading. For example, one states: ‘I donated because ISAGE are working for open/transparent and effective policies to deal with COVID19. I thank them all for working on behalf of all’.
Since the setup of Independent SAGE, Sir David King and other members of Independent SAGE have become increasingly visible scrutinising the decisions that the Government has made. In one of his latest interviews (published 27 June), Sir David King comments that if members of SAGE were as visible “it would have changed this immensely”. He then reiterates the critique of the ongoing rhetoric mentioned earlier: “you can’t have a minister or prime minister saying we’re just following the science advice if the public doesn’t know what the science advice was”. 
Although Independent SAGE’s visibility has increased since their establishment and they are being referred to in more media headlines, there are limitations to their outreach and communication of the latest scientific thinking. Those aware of Independent SAGE are likely to be those actively seeking discussion of the scientific evidence. At the time of writing (14 July 2020), Independent SAGE have 57.8K Twitter followers and 10.9K YouTube subscribers. [15, 16] The co-chairs of the official SAGE, Professor Chris Whitty and Sir Patrick Vallance (England’s Chief Medical Officer and the UK’s Chief Scientific Advisor), have their own Twitter pages with 256.9K and 137.6K followers respectively; SAGE itself does not have a page. [17, 18] On the face of it, these numbers sound quite large. However, political figures such as Boris Johnson and Donald Trump have 2.9M and 83.4M followers respectively, whilst, to take an extreme example, celebrities such as Justin Bieber have 112.2M. [19, 20, 21] Compared to these numbers, the outreach of Independent SAGE is small.
Nonetheless, Independent SAGE was only established a few months ago, and their 57.8K followers indicate there is significant interest in them. Independent SAGE do not claim to be a communication platform for the guidelines but a platform for informed discussions, which can be accessed by people if they so desire.
How is Independent SAGE different to SAGE?
Independent SAGE and SAGE are both made up of a body of experts from different disciplines. The official SAGE has 55 members listed, as well as several sub-groups which focus on behavioural science, disease modelling, serology (the scientific study or diagnostic examination of blood serum), clinical information, environmental modelling, transmission in children and hospital onset. Independent SAGE originally comprised the Chair, Sir David King, and 12 members. This has since been expanded to include a Behavioural Science Advisory Group, composed of 9 members. Some members of Independent SAGE or its sub-groups are also part of SAGE or its sub-groups, but the two bodies are distinct, as Independent SAGE’s reports do not form part of the official government response.
The two groups have been mixed up on more than one occasion. During news broadcasts, members of Independent SAGE (who are not also on SAGE) have accidentally been referred to as members of the official SAGE by reporters and have had to correct them. Another, more specific, example of this mix-up happened on 22 May, when the deputy leader of the Labour Party, Angela Rayner, accidentally referred to SAGE after Independent SAGE had published a report advising against the reopening of schools on 1 June: ‘SAGE concludes June 1st “too soon” to open schools. Teacher unions have been absolutely correct in asking for safety measures to be in place before re-opening’. She later tried to clarify that she meant Independent SAGE, but subsequently both Tweets were deleted after she was accused of spreading misinformation. Her statement has been fact checked on the website Full Fact.
In contrast to the two bodies being mixed up, several headlines have referred to Independent SAGE as a ‘rival’ group of experts. [25, 26, 27, 28] In response to the use of the term ‘rivals’, Sir David King tweeted: ‘some have billed @IndependentSAGE as a ‘rival’ to the govts. To be absolutely clear that is not how we as a group see ourselves. Science is best when our community works together, on IndiSAGE we have a broad church of experts who are working hard to supplement existing advice’ (23 May).
The mix-up and the perception of Independent SAGE as ‘rivals’ emphasise a potential problem with having two separate bodies. If they are seen to contradict one another, does that just add to confusion? Should there be just one clear voice? Or is that simply the nature of science: there is not always consensus, and that is what is important for the public to see and understand? In another interview, Sir David King explains that scientists do not always agree and that science is a discipline based on the peer-review process, where the evidence can be scrutinised. Hence the objective of Independent SAGE is to offer this scrutiny, and that is why it is vital that the science being referred to by decision makers is available.
A noticeable difference between SAGE and Independent SAGE is the visibility of, and direct communication with, the experts in the public sphere. The public did have access to senior members of SAGE, such as Sir Patrick Vallance and Professor Chris Whitty, during the daily briefings, where a few questions were posed by the public. However, beyond this, as far as I’m aware, there haven’t been any conversations between just the scientists that are open to the general public to listen to and participate in. This is the vision of Independent SAGE. As expressed by Karl Friston, a neuroscientist advising Independent SAGE: “I think of Independent Sage as the ultimate exercise in public engagement; what it would look like if you and I and everyone else were able to sit in on a real Sage meeting… In my view there can never be anything wrong with transparent, informed discussion.”
Ascribing the virtue of public engagement to Independent SAGE should be carefully considered. In town planning policy, the terminology of public participation has many levels, from consultation that is tokenistic to the public having a significant impact on the design of a development. The channel that Independent SAGE has opened informs the public through a question and answer session, where some of those concerns are passed on to the government. They do not give access to the inner workings of writing the report. Nonetheless, access to a more in-depth question and answer session is an important quality, and in the last daily briefing, when questioned about the change in social distancing guidance from 2m to 1m+, Whitty himself noted that he had been much more shorthand than he would have liked in the briefings (40:46–41:00).
There is a practical feasibility issue in holding meetings like Independent SAGE’s, especially as the official SAGE includes more experts and sub-groups, and all of those members will be under significant time pressures (that’s not to say members of Independent SAGE are not busy, as they are volunteering their time whilst in full-time work). However, if some of the experts in the official SAGE, beyond those in senior roles, were able to set aside an hour of their time to answer some of the public’s concerns directly, in the format of a group discussion, it would be welcomed by members of the public seeking more detail about the scientific evidence. This question and answer format would help to balance concerns about protecting national security with openness: the public wouldn’t be listening in on the actual deliberations, but could still hear directly, and in more detail, what the science underpinning decisions is. Navigating the evidence on the SAGE website can quite easily become overwhelming.
Another difference between SAGE and Independent SAGE is the perception that Independent SAGE is a left-wing body. An example of this is the Daily Mail headline: ‘Ex-science tsar Sir David King has built a Left-wing cabal to disperse virus health advice (but says ministers’ experts are too political to be trusted!)’. In response to this criticism, Sir David King once again used Twitter as the communication channel, stating that political ideology is irrelevant, that the job of the scientific experts is to give scientific advice not political advice, and emphasising that it is up to the ministers to make the decisions. The very fact that he had to respond shows that the perception is/was there, so much so that he felt it needed addressing.
The ‘boundary’ between science and politics has been an issue throughout the pandemic. How has the science been used within decisions which are inevitably political? Hence the calls for transparency and concerns about trust. Scientists who are/were members of SAGE, such as Professor John Edmunds and Professor Neil Ferguson, have separately spoken out that lockdown did not happen soon enough and was being eased too quickly, whilst Melanie Smallman, a Lecturer in Science and Technology Studies at UCL, describes the terminology ‘Independent SAGE’ as an oxymoron, arguing that ‘the idea that government advisers can separate science and politics is bogus’. [34, 35, 36]
Has SAGE’s approach to transparency changed during the pandemic?
A short and simple answer to this question is yes. At the beginning of the pandemic, none of the evidence being referred to was available, the experts’ names were not released and the minutes of the meetings were not accessible. When there were growing calls to release the names of the experts, Sir Patrick Vallance said the decision not to release them was due to concerns about safeguarding individuals’ personal security, based on advice from the Centre for the Protection of National Infrastructure. However, pressure continued to mount; the experts were given an opt-out option and the names were released on 4 May. This was the same day as Independent SAGE’s first meeting, which Sir David King has commented he doesn’t think was a coincidence.
The Government Office for Science’s website for SAGE openly acknowledges that SAGE’s approach has changed during the course of the pandemic. The website says that in previous events the minutes and supporting documentation were not published until the conclusion of the relevant emergency, in order to protect national security and operational considerations and to allow ministers to consider ‘free and frank advice’ from the experts. It also states: ‘we have revisited this approach in light of the current exceptional circumstances, recognising the high level of public interest in the nature and content of SAGE advice’. It then goes on to say that SAGE will now publish all past minutes and supporting documents, and future minutes and documentation within one month of the meeting taking place. To address the concerns about protecting individuals and national security, in some cases information is redacted. When following the link to the minutes, the website user is taken to a page with the header ‘Transparency and freedom of information releases’ specific to SAGE. In a comment in The Telegraph, Sir Patrick Vallance also acknowledges this change in SAGE, saying ‘when it comes to this crisis it is clear we must get the information out as soon as possible, and in my opinion, as close to real time as is feasible and compatible with allowing ministers the time they need’.
Obviously, the current approach is more transparent than it was at the start of the pandemic, but as we are in such a fast-moving situation, to some a one-month delay in publication is not satisfactory, particularly for those that want to review and scrutinise the evidence before decisions are made, rather than just be informed of what the evidence is when there are key policy changes. An example of evidence being released as policy changed came after an announcement on 23 June. In this announcement the public were told that from 4 July the 2m social distancing rule could be reduced to 1m+ if social distancing was not possible, the ‘+’ meaning with mitigation measures in place. A review of the 2 metre social distancing guidance was published by the Cabinet Office on 24 June. Within the text of this, a link is provided to a SAGE report from a meeting on 4 June (published 12 June) entitled ‘Transmission of SARS-CoV2 and Mitigating Measures’. In this, the executive summary states: ‘Physical distancing is an important mitigation measure (high confidence). Where a situation means that 2m face-to-face distancing cannot be achieved it is strongly recommended that additional mitigation measures including (but not limited to) face coverings and minimising duration of exposure are adopted (medium confidence)’.
In addition to the release of experts’ names and evidence, scientists have emphasised that SAGE’s job is to advise, not make decisions. A document published on 5 May explaining what SAGE is and its response to Covid-19 outlines that ‘The government is not beholden to what SAGE says, and the evidence SAGE puts forward forms just one part of what the government considers before adopting new policies and interventions during an emergency. In this current pandemic, the government also has to consider other factors’. Another example is Sir Patrick Vallance’s description of what SAGE is and how the science is being used for decisions, in his comment in The Telegraph on 30 May. He acknowledges that SAGE is not a group of people in consensus, that the science will not always be right and that it will change over time as we learn more. He also noted that science advice is just that, advice, and that ‘Ministers must decide and have to take many other factors into consideration’. In the last daily briefing, Professor Chris Whitty made it clear that decisions on the easing of lockdown are a balance of risk. These are risks accepted by the Ministers as they seek both to reduce the spread of the virus and to open up the economy. They are also risks accepted by individuals as they go about their everyday life. In order for individuals to make informed decisions, the pressure remains for the emerging scientific evidence to be made available.
It’s very easy to criticise in situations where you are not the decision maker, or even one of the main advisers; Sir David King himself says he’s glad he’s not currently in Sir Patrick Vallance’s position. However, it’s clear that if Ministers say they are ‘following the science’, they should recognise that many people want to know what this science is. Without that, trust can easily be lost. If trust is lost, this can affect whether or not people follow the guidelines. Although it will be impossible to pinpoint exactly what caused SAGE to make the changes that it did, such as the release of experts’ names, evidence and minutes, undoubtedly the fact that this is an event affecting everyone’s daily life led to the growing calls for transparency. Independent SAGE have clearly made their point about what they think this transparency should look like. It is likely the group contributed to the growing pressure, particularly with their presence on Twitter, which then fed into members being contacted by news outlets and several articles referring to Independent SAGE’s advice, particularly when it went against the latest government decisions.
An open letter to the leaders of the UK political parties, expressing concern over preparedness for a second wave, was published in the British Medical Journal on 24 June following the last daily briefing. If there is a little ‘breathing room’ over the summer months, one part of this preparedness should be reflecting on the communication of the science and the evidence. SAGE should continue to release the minutes and update the evidence, but perhaps this could be taken a step further by applying some of the principles of Independent SAGE, including hosting online discussions with only scientific experts and not ministers, as well as a more active presence on Twitter and other social media platforms to direct people towards these updates. Even if there is not a second wave, there will be future events that SAGE is involved in, and now that there is this increased awareness of them, they will inevitably be under increased scrutiny, so the notion of transparency needs to continue wherever feasible.
However, it is important to note that one limitation of this blog post is that it has assumed transparency is a good virtue which should be strived towards. Before the pandemic, in 2017, Dr Stephen John, a Senior Lecturer at the University of Cambridge, argued that transparency is not always beneficial. One reason is that transparency can increase confusion, as members of the public may expect consensus, and that is not what science always is. In the context of Covid-19 we have seen that if the evidence is not released, it gives the impression that the Government are hiding information from the public; however, if it is released, the public can pick and choose what they communicate on social media or to their peers. In the last few days, we have seen lots of mixed messages about face coverings, the Metro’s headline (13 July) being: ‘Call to clear up the mask muddle’. As the evidence for face coverings has changed over time, people arguing for and against them are selecting the evidence which supports their viewpoint. If we assume transparency is a good virtue due to the provision of information for those that seek it, these problems of mixed messaging and misinformation need to be overcome. The Government need to carefully consider the communication channels that they use and how to extend their outreach. This communication needs to clearly explain what the guidelines are but also, as with the face mask debate, why they have changed. Clarity is key.
Text by Dr Hannah Baker
Disclaimer: Published 14 July 2020. This article is based on media coverage and reports found online. Despite her best efforts to locate all relevant information, the author acknowledges there may be key pieces of information she would have missed. Members of Independent SAGE or SAGE have not been contacted for comment.
Thumbnail image source: Climaterepair via Wikimedia
Citizen Science in a Pandemic: A Fleeting Moment or New Normal?
Text by Katie Cohen.
The current pandemic has in many ways brought the world to a sudden halt. Across the globe many are unable to work, children can’t go to school and the ways in which we used to socialise are no longer safe. Instead, we are trying to engage with the outside world while staying socially distanced from it. One interesting and unintended consequence of this drastic change to our daily lives has been an increase in people’s engagement with citizen science.
Since the end of March when the UK and most countries across the globe went into lockdown, citizen science platforms such as Zooniverse and SciStarter have seen a surge in projects, apps, and participant activity. Zooniverse, for instance, reported that 200,000 participants contributed over 5 million classifications, the equivalent of approximately 48 years of research in one week alone. It seems that teachers, students and even researchers have jumped at the opportunity to receive help with homeschooling and contribute to research programmes. Old and new platforms for citizen scientists have also received increased media coverage, with one plug from The Conversation to ‘Ditch the news cycle—engage, gain skills and make a difference’ and a call for ‘anyone itching for a bit of escapism’ to try citizen science in the Guardian.
This heightened engagement with citizen science has also extended to new projects related to Covid-19 as people are clearly eager to help tackle the global crisis. During this period of piqued interest in citizen science, I want to not only take a closer look at the types of activities that are emerging and expanding but also reflect on the relationship between citizen scientists, experts and policymakers in our pre-and post-pandemic world. What makes this present moment unique and what lessons might it bring to bear on future collaborations between these three groups? Are efforts to engage citizens in tackling the virus harnessing people’s interest to further science as it is practiced, framed and understood by experts? Are the citizen science experiments emerging during this time democratising and pluralising science? I am interested in how the current flux in citizen engagement with science may persist beyond lockdown, but I will also consider how top-down science and decision-making processes still seem to foreground participatory efforts to tackle Covid-19.
How are citizen scientists contributing to Covid-19 research?
Covid-19 presents an especially interesting policy problem because it relies so heavily on population data and mutual trust between citizens, experts and decision-makers. This problem, while not altogether unique, seems to have contributed to the pronounced effort to utilise citizen science approaches for tackling Covid-19. During this period of uncertainty and isolation, logging symptoms and mental health impacts and tracking movement have not only helped experts and policymakers better understand the course of the pandemic but have also given participants a sense of agency. Helpful lists such as the Citizen Science Association’s Covid-19 resources have made it simpler to discover ways to engage, and the participation rates reflect an eagerness to do so.
The BBC Pandemic App foreshadowed the types of citizen science efforts we have seen emerge since the spread of Covid-19 began. A project which ran from September 2017 to December 2018, BBC Pandemic was the largest citizen science experiment of its kind and aimed to help researchers better understand how infectious diseases like the flu can spread in order to prepare for the next pandemic outbreak. Participants furthered this mission by contributing data about their travel patterns and interactions. With this data the researchers involved were able to simulate the spread of a highly infectious flu across the UK, and the database is listed as one of the models supporting the government’s response to Covid-19 on the Scientific Advisory Group for Emergencies (SAGE) website.
Over two years after the study’s conclusion, institutions around the world scrambled to initiate similar studies to better understand the novel coronavirus. The Covid Symptom Tracker has proven the most widespread citizen science effort to track Covid-19 in the UK. Professor of Genetic Epidemiology Tim Spector from King’s College London originally teamed up with technologists Jonathan Wolf and George Hadjigeorgiou to launch a startup called ZOE, which conducted studies on twins and nutrition. When coronavirus hit the UK, the ZOE team acted ‘with a sense of extreme urgency’ to adapt the app to track coronavirus symptoms. The app went live on Tuesday 24 March and by the next day had over one million downloads in Britain. A collaboration between NHS England and researchers at King’s College London, the app was also endorsed by the Welsh Government, NHS Wales, the Scottish Government and NHS Scotland. At its core, however, it is a large-scale effort to gather data to be analysed by researchers and then delivered to the NHS and policymakers to make informed decisions.
Geographical spread of participants reporting their status as of 26 March 2020. Data source: https://covid.joinzoe.com
Funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering, University of California, San Francisco’s (UCSF) COVID-19 Citizen Science (CCS) has also empowered users to share in the fight against the virus. Popping up in Facebook and Instagram advertisements, the mobile health study has garnered support from people around the world (see map below). The app also offers the option for participants to provide nearly continuous GPS data and potentially additional health data, such as body temperature, exercise, weight and sleep. In late April, Northwestern University and the American Lung Association announced they would be partnering with UCSF in an effort to increase the number of participants and improve chances of generating useful results. The investigators have also more recently invited citizen scientists to submit their own research questions. Receiving more than two thousand ideas, they will soon add these participants’ questions one at a time to the study’s survey.
Points representing CCS participants worldwide. Data source: Covid-19 Eureka platform.
Other citizen science experiments have engaged users more actively in Covid-19 research. For instance, researchers at the University of Washington have used a free computer game Foldit as a platform for citizen scientists to contribute to Covid-19 drug discovery efforts. Developed at the University in 2008, Foldit has previously been used to help scientists in cancer and Alzheimer’s research, but has now seen a pronounced increase in activity since the Covid-19 outbreak. Although it is US-based, the programme has gained traction more widely with the help of promotion by EU-Citizen.Science, and participants across the globe are competing to solve protein puzzles online. Tasked with designing proteins digitally that could attach to Covid-19 and block its entry into cells, participants are aiding the development of antiviral drugs that could ameliorate patients’ symptoms. Researchers involved have found crowdsourcing a helpful tool because of the creativity each person brings to the task.
The 99 most promising of the 20,000 potential Covid-19 antiviral proteins generated by citizen scientists through Foldit, which University of Washington researchers plan to test in the lab. Image source: https://www.hhmi.org/news/citizen-scientists-are-helping-researchers-design-new-drugs-to-combat-covid-19
A new EU initiative has also invited citizens ‘to take an active role in research, innovation and the development of evidence-based policy on a range of coronavirus-related projects.’ Supporting a range of citizen science and crowdsourcing programmes, the platform is significant in its broad endorsement of citizen scientists’ contributions not only to research efforts but also to the policy process. Advocating for use of another symptom reporting tool, Flusurvey, developed at the London School of Hygiene and Tropical Medicine and monitored by Public Health England (PHE), the platform is helping to boost responses to existing citizen science efforts as well as publicize the benefits of citizen science approaches more generally.
Closer to home, Cambridge Judge Business School students organised a University-wide 72-hour virtual hackathon #CAMvsCOVID on the weekend of 1 – 4 May 2020: ‘One global challenge. One weekend. Your solutions.’ Teams were tasked with drafting ‘a novel response to a pressing problem in the battle against COVID-19,’ with the explicit brief that even coded solutions must consider the societal context. This solutions-focused approach to crowdsourcing harnessed the creativity of the Cambridge ecosystem in a way that other experiments have not; participants were empowered not only to gather data and contribute their research skills, but also to generate potential policy proposals for review. We do not yet know what will come of the ideas generated through this exercise, but the results will be worth watching.
In what ways does citizen science in a pandemic look different?
Social isolation has prompted many to engage with citizen science who otherwise would not have done so in the past. The urgency of the problem and scale of disruption has set Covid-19 apart from other policy problems with which citizen scientists might generally engage. However, it seems to present a timely opportunity to think about who holds relevant knowledge for public policy and how different forms of knowledge are shared in tackling policy problems.
Despite the complexity of every policy decision made over the past three months, most of us can agree that saving lives and returning to a sense of normalcy were top priorities at the start of lockdown. This unification of goals seems to have set the citizen science experiments detailed above apart from others of their kind. While environmental citizen science programmes covering climate change, air pollution and biodiversity loss have seen the largest growth in citizen science over the past decade, they have also presented challenges. Lack of urgency, competing priorities, differing lived experiences and sources of information often create conflicting desires between, say, bird monitoring volunteers and members of the European Council on conservation efforts.
With the outbreak of Covid-19, most agreed we needed to track the virus, understand the science better, develop approaches to containing its spread, support the health system and discover treatments and vaccines to combat it in the future. There was a strong sense of urgency, priorities were more aligned, we recognised these were unprecedented times and there may also have been a greater desire to learn from each other. Though epidemiologists’ knowledge and experience differ from those of participants inputting symptoms into an app, and both differ from those of policymakers charged with making decisions on behalf of their constituents, these differences seem to have been largely outweighed by the alignment of goals and priorities. Although this has continued to evolve throughout lockdown, these initial conditions enabled greater cooperation and manifested in a proliferation of citizen science experiments.
Differing experiences, information and agendas also breed mistrust, which too often impedes successful collaboration between citizen scientists, experts and policymakers. Can citizen scientists ensure their contributions will be used in their best interest? Will their voices be included in the decision-making process? Can experts and policymakers ensure citizens provide unbiased, accurate data? The global priority to fight the virus from the outset seemed to unify those opting to engage as citizen scientists. The magnitude, scale and consequences of Covid-19 potentially bred a mutual dependence and, in some cases, deference between citizens, research and policy. Amassing data and securing help is crucial for governments and scientists to meet expectations, and citizen scientists will better help themselves by providing accurate and constructive contributions. Trust in the value of citizen science may have been born out of obligation rather than desire during the pandemic, but it is seemingly there.
However, desire and obligation were bound to shift as we moved forward. As Elizabeth Anderson commented in her expert bite with the Expertise Under Pressure Team, the issue of trust does not disappear in the context of Covid-19. Trust in shared motives and goals has wavered increasingly as lockdown extends and restlessness grows, and we have yet to find out the consequences of this shift for citizen science.
What does the future hold for citizen science post-pandemic?
As our daily lives gradually come to look more and more like they did before Covid-19, will interest in citizen science dwindle too? There’s no way to know for sure, but I think many will agree that our lives are unlikely to pick up where they left off and the impacts of the pandemic will linger long after the number of cases falls to zero. Although we may in fact be living through a fleeting flux in citizen engagement with science, here are some thoughts on why it may persist:
Support: This new wave of citizen science has clearly seen increased involvement and support from governments, medical institutions and charities involved in the fight against Covid-19. The timing of the launch of EU-Citizen.Science this year has been far from a negligible development, as the platform has served to support Covid-19 related citizen science efforts as well as to share insights about the potential of citizen science. Although the initiative was set in motion prior to the outbreak, it has been ignited by interest in tackling Covid-19 and could sustain those audiences long after the pandemic ends.
Breaking the ice: Motivation to help, extra time and even boredom may be contributing to the increase in citizen scientists’ participation. Desperation and pressure might have caused policymakers to become more open to citizen engagement. However, maybe the unusual circumstances under which the shift occurred are less important than the shift itself.
Funding: The announcement of UK Research and Innovation’s (UKRI) new £1.5 million Citizen science collaboration grant is another lockdown development that could help shape future directions in citizen science. Funding has long been a barrier to the field. Perhaps the successes of Covid-19 initiatives will prompt more serious consideration of citizen science’s merits and continue to provide a case for support.
Whether or not increased citizen engagement with science continues beyond lockdown, the nature of more open knowledge sharing between citizen scientists, experts and policymakers during the pandemic is also important to consider. Although we have seen increased trust, cooperation and collaboration, the parameters of scientific inquiry and policy agendas have still largely been set by academic institutions and governments. Rather than enabling citizens to provoke science as usual or express political agency as some forms of citizen science do, the Covid-19 experiments outlined in this blog have predominantly provided platforms for participants to contribute their knowledge to expert-led programmes. The proliferation of participatory initiatives may help to pave the way for more dynamic and experimental citizen science in the future, but perhaps this more fundamental shift in how citizen scientists, experts and policymakers share knowledge is still in the making.
Thumbnail image source: https://covid.joinzoe.com
Mask or No Mask? A look at the UK’s policy over time.
Text by EuP’s co-investigator Dr Emily So assisted by research associate Dr Hannah Baker.
I was born and raised in Hong Kong. During the Covid-19 pandemic, my friends and family have told me that if you don’t wear a face mask/covering, you are the odd one out. Vending machines selling disposable face masks are common and the government has issued every resident with a reusable face mask. Many countries and cities around the world have followed suit.
Face masks have a rich history. In Asian cultures, masks are respected as the social norm: you wear one when you are ill to keep your germs from passing onto others. Yet in the UK, where I now live, donning a face mask attracts public attention in the opposite way, as though the wearer has somehow succumbed to fear and is wearing a mask to protect themselves. The debate in western societies since the pandemic has revolved around the ability of masks to stop transmission of, and protect the wearer from, potentially viral airborne particles.
By wearing a face mask, are we keeping the germs in or out?
In this short blog, I am not going to embark on a “to wear or not to wear” debate as I have no credentials to argue for either side and will leave Professor Patricia Greenhalgh and Professor Graham Martin to battle it out in the ring. What I am interested in is the timeline of advice on face masks in the UK and what has contributed to its current policy.
The World Health Organisation (WHO) declared Covid-19 a pandemic on the 11th March. The first piece of advice I received about face masks from an expert on the BBC was on the 13th March, when Dr Shunmay Yang from the London School of Hygiene and Tropical Medicine explained whether masks really protected people from contracting the virus. Her stance, and that of the UK government at the time, was that face masks should be reserved for healthcare workers and those who are already infected. Looking back through the newspaper archives, when interviewed on the BBC the day before, Dr Jenny Harries, deputy chief medical officer, highlighted that the risks of catching the infection could be increased due to the incorrect use and disposal of masks and
“because of these behavioural issues, people can adversely put themselves at more risk than less.”
Dr Jenny Harries, BBC interview, 12th March 2020
Tracking the pieces of advice since then from experts such as Dr Yang and the government, based on the science from the Scientific Advisory Group for Emergencies (SAGE), we found the following:
Mid-March to end of March. Panic buying was rife as rumours spread of an imminent national lockdown. Face masks were on the list of items in high demand and there were reports of opportunists taking advantage of the situation with fake Personal Protective Equipment (PPE) and other Covid-19 related supplies. Worried about the scarcity of supplies to key healthcare workers, the Government reiterated their advice to the public and stressed that wearing a face mask is not recommended for people with no symptoms.
The UK was put under a strict lockdown on 23rd March.
Early April. Professor Jonathan Van-Tam reiterated at the daily briefing (4th April) that the wearing of face masks by healthy people was not recommended by the Government. He went on to say that while the practice seemed “‘wired into’ some southeast Asian cultures, there was no evidence that general wearing of face masks by the public who are well affects the spread of the disease in our society”. He added: “In terms of the hard evidence and what the UK Government recommends, we do not recommend face masks for general wearing by the public.” Social distancing remained the key mitigation strategy for Covid-19.
Echoing the Government’s messages were those from the WHO and experts from the UK, including Professor Bill Keevil, Professor of Environmental Health at the University of Southampton, who, when interviewed by the Evening Standard on the 9th April, was asked, amongst other questions, the following two: If I wanted to make my own, would you recommend it? Why are we seeing more people wearing masks? His answers being:
“No. It will not protect you.” (but in answer to the first question, there was no mention of whether it could protect others.)
“The US government is provoking this new interest in face masks because they have knee-jerked. That is because of the concept of symptomless carriers, who have the virus but do not show any signs of it. So what the US government has done is say: ‘People should wear masks.’ But if people are wearing inappropriate face masks, it is creating a false sense of security.”
By mid-April the Mayor of London, Sadiq Khan, was urging the Government to reconsider their advice on face masks. He said that wearing non-medical face masks, such as a bandana, scarf or reusable mask, would add “another layer of protection” for the public. In his letter to the Transport Secretary Grant Shapps, he lobbied for masks to be worn in circumstances where people cannot keep two metres apart, such as on public transport or while shopping. Mr Shapps, however, said it was “not the right moment” to encourage people to wear masks, adding that the Government needed to look at all the evidence.
21st April. SAGE met to discuss the advice on face masks. The minutes to this meeting were published on the 29th May.
23rd April. SAGE submitted their review stating the evidence is weak. At a daily briefing, Dr Jenny Harries, said the fact the issue was being debated means “the evidence either isn’t clear or is weak”.
She was also asked to comment on whether face coverings could have an effect on the London Underground, where she said it was possible there could be “a very, very small potential beneficial effect in some enclosed environments”, but no reference was given. Professor Martin Marshall, chairman of the Royal College of GPs, echoed this and told BBC Radio 4’s Today programme that “there was no research to support wearing a mask if you were fit and well, and there was even a risk of picking up the infection if people were constantly adjusting it and touching their face.” He went on to say: “I think the guidance that we’re expecting to hear is that the wearing of face masks is a voluntary activity, not mandated, and it certainly makes a lot of sense to focus limited resources that we have at the moment on those who have greatest need and that’s the health professionals.”
28th April. Michael Gove, the Cabinet Office Minister, confirmed that a “domestic effort” had been launched to slow the spread of coronavirus by producing masks that “limit the droplets that each of us might be responsible for”. However, the Health Secretary Matt Hancock said, “On face masks, we are guided by the science and the UK Government position hasn’t changed, not least because the most important thing people can do is the social distancing… as opposed to the weak science on face masks, there is very clear science on social distancing. That is our absolute priority in terms of the message to the public.”
Meanwhile, the First Minister of Scotland said Scots over the age of two should wear a cloth covering, such as a scarf or t-shirt, in “an enclosed space where you will come into contact with multiple people and safe social distancing is difficult – for example on public transport or in shops”. The Scottish Government unveiled its own guidance on this issue, suggesting the voluntary wearing of coverings based on the same evidence and advice from SAGE. Downing Street said the Prime Minister wanted to maintain a UK-wide response to coronavirus as far as possible.
30th April. Boris Johnson was back at Number 10, chairing the press conference following his stay in hospital and at Chequers after contracting Covid-19: “What I think SAGE is saying, and what I certainly agree with, is that as part of coming out of the lockdown, I do think that face coverings will be useful both for epidemiological reasons but also for giving people confidence they can go back to work,” the PM said. The term ‘face coverings’ was used at the Downing Street daily briefings for the first time, a term which sets them apart from medical-quality masks.
1st May. Ministers had yet to make a final decision on whether the public would be advised to wear face coverings, but the scientific advice suggested that they have a weak but positive effect in reducing transmission from asymptomatic people where physical distancing is not possible.
4th May. Ministers confirmed stockpiling of PPE for healthcare workers and public use.
5th May. A day after the PPE announcement, Sir Patrick Vallance told Parliament’s Health and Social Care Committee that SAGE thought the evidence on masks preventing the spread of infection from one person to another was “marginal but positive”.
11th May. Two months after WHO declared a global pandemic, the UK public are urged to wear a face covering if social distancing is not possible. The government’s current (May 2020) guidelines include making a face covering out of an old t-shirt to provide some protection for others you come into close contact with.
As I reviewed this timeline, two things struck me: the timing of government advice, which has been emphasised as “following the science”, and the loose use of the word ‘evidence’. Local Government Minister Simon Clarke told ITV News on the 20th April that the guidance around face masks “remains the same” until a “scientific steer” is given by SAGE. “At the moment this isn’t what is being recommended and therefore it isn’t government policy, we’re prioritising getting material to the frontline. If that advice changes then clearly that is something we will work to accommodate over the weeks ahead.” Should the two have been linked – the advice on whether face masks become mandatory for the public based on evidence from science, and the prioritisation of “getting material to the frontline”? Does that imply the “scientific steer” is secondary to how much PPE the country can stockpile?
Given the central role science has taken in this and all other strategic decisions the government has made, it is hard not to feel that the science is politicised and orchestrated at times. To understand why mask guidelines have been so varied, in the US, National Public Radio (NPR) reached out to specialists in academia and in government. What they learned was that face mask guidelines are about science – but go beyond it. The reasons for a policy may have to do with practical considerations like the national supply of masks but may also reflect cultural values and history. In East Asian countries, the attitude is perhaps one of “better safe than sorry”, particularly in countries that had experienced SARS.
Even though the central arguments and messaging have not changed – it is not mandatory to wear face masks or coverings in the UK, and do not deplete our frontline healthcare workers of PPE – the difficulty in accessing the evidence informing the scientific advice has left me, and perhaps many others, confused and anxious. The minutes of the 21st April SAGE meeting state that the evidence that exists (point 10 in the minutes) is marginally positive for the use of masks, that RCT (Randomised Controlled Trial) evidence (point 11) is weak, and that it would be unreasonable to claim large benefits from wearing a mask. Though evidence was referred to, these minutes did not form part of the information pack available to the public, as they were only published on the 29th May.
The Expertise Under Pressure project’s Rapid Decisions Under Risk case study spun out of frustration and confusion about where my own professional advice was heading in the field of estimating losses due to natural disasters. I was concerned about accountability but, more importantly, I was worried about the impact of science-led advice on the public. Since the evidence used by SAGE has not always been made available, the media can pick and choose what to present to the public, and any scientific studies are open to public interpretation and can be taken out of context. There are many to choose from. Scientific studies exploring the benefits of face coverings in a pandemic vary in sample size, context and focus. The phrases ‘randomised trials’, under ‘medical conditions’, ‘community settings’, ‘observational evidence’ come to mind. As a member of the public, how do I decipher these terms and what do they mean? Am I therefore comparing apples and oranges if I try to contrast arguments based on studies designed with different parameters?
How does a study make the cut to become evidence examined by SAGE?
The image below is taken from a study (preprint) carried out by Aalto University, the Finnish Meteorological Institute, VTT Technical Research Centre of Finland and the University of Helsinki on a scenario where a person coughs in an aisle between shelves, like those found in supermarkets. This was picked up by most news platforms in the UK, including Sky News and the Daily Mail, and was making its rounds on Twitter and other social media outlets on the 10th April, a month before the official advice on face coverings in public settings was given. Supermarkets in the UK have been urging customers to stay two metres apart while walking down aisles, with long queues seen in car parks around the country as staff limit the number of shoppers entering the store at any given time. However, the finding from this study that “deadly coronavirus droplets can spread across two supermarket aisles and infect shoppers, with the bug hanging in the air for several minutes” has never been officially addressed.
With reference to this issue of face coverings, what is the difference between ‘weak’ and ‘marginal but positive’ evidence, such that the advice to the public tipped from not wearing to wearing face coverings? Was it more than just scientific evidence?
This pandemic will test the role of science and scientists in the management of a national and global challenge. There are growing concerns about the independence of advice, and some scientists have broken rank and spoken out about the policies supposedly “following the science”. Given Covid-19 is a new virus and a highly contagious and mutating disease, one can be forgiven for being cautious, and a shifting policy based on changing data is expected. However, the dithering and conflicting guidance (conflicting with other countries and even between the nations of the United Kingdom) on face coverings and other issues can, in my opinion, only harm the public’s trust in science and experts if the causes of the shifts in strategy are not explained. In early May, a survey conducted by the Open Knowledge Foundation found that public trust in science has increased during the pandemic, but that transparency is key. The Survation poll found that people are more likely to listen to expert advice from scientists and researchers if data is openly available for checking, and that 97% believe it is important that Covid-19 data is openly available for people to check.
Constructive debate of risks and benefits is essential for any national intervention. We all need to be prepared to modify our views. However, open disputes can lead to public distrust and confusion. We as scientists must think of the common good and our duty of care to the audience, even though it is the responsibility of the politicians to make the final decisions. As Professor Trish Greenhalgh remarked, “I conclude by thanking my academic adversaries for the intellectual sparring match, but exhort them to remember our professional accountability to a society in crisis.” The only way to be accountable is to be as transparent as possible: to offer and take time to explain the scientific evidence and its imperfect, ever-changing nature. Perhaps the way to examine the guidance on “to mask or not” is as one of precautionary science rather than absolute certainty.
To conclude, there are three main arguments presented here. The first is that the question of whether masks protect, and whom they protect, is partly cultural because it relies on assumptions about people’s behaviour. The reticence of UK scientists seems to be driven by their sense of how we as a nation would behave with masks, and they will only consider their health effects after some assurance of compliance. But we have all had to change our behaviours for Covid-19, so why would this be one step too far? The second is that the official advisors are not systematic in the sort of evidence they attend to; in particular, they approach the question in an individualist way (will it save me?) instead of a broader communal way (would a community that wears masks be safer?). Lastly, my impression is that the scientific advisors are demanding a much higher standard of evidence than is warranted by the current situation. The Precautionary Principle seems to argue that masks should be recommended even if the case for adopting them is not 100% watertight. Their cautious approach is also inconsistent, since other highly uncertain strategies have been adopted, yet masks remain contentious.
I cannot help but wonder if face coverings would be more common and acceptable if the UK government had made them part of their public health requirements early on. After all, this is a small behavioural change that at a communal level could have an impact on slowing the spread. I for one will be donning a mask to keep my own germs in.
End note (added 18:30 4th June): Since the publication of this blog earlier today it has been announced that face coverings will be mandatory on public transport in England from the 15th June, showing a further development in the UK’s changing policy on this issue.
Thumbnail image source: Nickolay Romensky – https://www.flickr.com/photos/111977604@N05/49726977771
Disclaimer: Published 4th June 2020. Every effort has been made by myself and my research associate Dr Hannah Baker to scour the numerous Government websites and reports, relevant papers and stories from the UK media about face coverings over the past two months. However, the author acknowledges there will be key pieces of information she will have missed. This blog post presents her own personal reflections and hers alone.
The SAGE we knew and the SAGE ‘everyone’ now knows and wants to scrutinise
Public awareness of the Scientific Advisory Group for Emergencies (SAGE), or more specifically the scientists said to be guiding the Government’s COVID-19 decisions, is leading to increased scrutiny and calls for transparency.
Thinking back to January 2020, when we as UK residents were living our ‘normal’ everyday lives (going to pubs, travelling abroad, or even being within 2m of strangers, friends and family outside our household), very few people would have heard of Chris Whitty and Patrick Vallance. Both have now become ‘household names’, and even faces: Professor Whitty, England’s Chief Medical Officer, appears in TV advert breaks from the Government (shown in the video below) urging us to ‘Stay Home, Protect the NHS, Save Lives’, and both regularly take key roles in the BBC’s daily press briefings on COVID-19. In these briefings, the political figures, often situated at the middle podium, have frequently justified the decisions made by saying that they have been guided by ‘the science’.
Professor Chris Whitty, England’s Chief Medical Officer, appearing in Government advert urging the public to stay at home
Many people will now know that there is a group of scientific advisors, with some knowing that they are called the Scientific Advisory Group for Emergencies (SAGE). As defined by Government Office for Science, SAGE are ‘responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR)’.
SAGE has provided evidence for events since 2009 (and scientific evidence was also used in events preceding this such as the foot and mouth crisis). Events listed on SAGE’s website are: swine flu, volcanic ash emergency, Japan nuclear incident, winter flooding, Ebola outbreak, Nepal earthquake, Zika outbreak (precautionary), Toddbrook reservoir and of course, now COVID-19.
The increase in media coverage, and therefore public awareness, of scientific advice and SAGE is undoubtedly because COVID-19 is an event affecting all UK residents’ lives (and of course lives throughout the world). Consequently, the situation previously described in January 2020 seems like the distant past. Sir Ian Boyd (former Chief Scientific Adviser at the Department for Environment, Food and Rural Affairs) spoke about this in an interview with ITV news: “This is more than just an interesting thing going on, absolutely everybody in the country is affected by it….And therefore there’s a lot more interest in scrutiny in the underlying process.”
With this growing emphasis on SAGE in the media, we thought it would be useful to reflect on some of the questions Dr Emily So (Co-Investigator on Expertise Under Pressure project) had after being called upon as an expert herself in the 2015 Nepal Earthquake, which were posed in our previous blog summarising our ‘Disaster Response | Knowledge Domains and Information Flows’ workshop in February this year.
What is the process of turning this information into decisions and action?
The job of SAGE is to pool together scientific expertise to answer questions which are posed by COBR. It is at COBR that the scientific evidence is considered, as well as other advice including evidence from the ‘economic, security, administrative and political spheres’. An analysis of the minutes of previous SAGE events shows that the number of meetings, time frame and number of people involved varies from event to event. These figures are summarised in the first table below. For the Nepal earthquake, there was only one meeting, whilst for Swine Flu, a previous pandemic, there were 22 meetings spanning from May 2009 – November 2010. The number of people attending the meetings also varied and those involved differed depending on the expertise that was required at the time. On May 4th, a ‘list of participants of SAGE and related sub-groups’ for COVID-19 was released (a breakdown of the number of participants in each group is provided in the second table).
A letter (dated 4 April 2020) from Sir Patrick Vallance indicated that the frequency and timing of meetings was driven by current events and that “from 1 January to 31 March 2020 SAGE meetings were held on:
• January: 28
• February: 3, 4, 6, 11, 13, 18, 20, 25, 27
• March: 3, 5, 10, 13, 16, 18, 23, 26, 29, 31.
In addition, a precautionary SAGE meeting was held on 22 January 2020 to discuss scientific questions that were raised by COVID-19.” During April, reports suggest SAGE met twice a week.
A critique within the media is whether there is enough breadth of expertise in the COVID-19 response, with some feeling there is an over-reliance on modelling the pandemic and a lack of public health experts. Another Guardian article suggests the Government have sent out requests to universities to expand the pool following this criticism. Putting that aside, as can be seen from the number of attendees and members of each group, many voices are put forward. However, these voices are not always in agreement and SAGE meetings can include ‘heated and prolonged’ discussions.
A frustration that is becoming increasingly apparent in the media concerns what science the politicians are actually referring to, and how decisions are influenced by other considerations. In the Centre for Science and Policy’s (CSaP’s) podcast on ‘Science, Policy and Pandemics’ (episode 2), David Spiegelhalter says that we need to be clear that the decisions are made by politicians taking into account the scientific advice. Another of many examples is when, asked about his thoughts on the exit strategy during a BBC interview with Victoria Derbyshire (20/04/2020), former Prime Minister Tony Blair highlighted how the easing of the lockdown will be based on scientific and medical advice but that in the end it is a political judgement.
The reason this overlap between science and politics is causing frustration is that SAGE is only one strand of information feeding into COBR. In an interview with Times Higher Education, James Wilsdon, a professor of research policy at the University of Sheffield, is quoted as saying: “It is problematic if political choices are being made and then the science advice system has to front them up. There needs to be a clearer sense of where science advice ends and political judgement begins − and at times that has been quite blurred”.
Concerns about this blurring of boundaries between the scientific and political spheres were exacerbated further following a Guardian article revealing that Dominic Cummings and Ben Warner, both political figures, had attended SAGE meetings. In defence of this, a Downing Street advisor said that they were not members of SAGE and would only contribute if issues about Whitehall were raised. However, their involvement has led to a flurry of reports questioning the independence of SAGE from politics, one example being Bloomberg alleging that Cummings was more than a bystander.
Chris Tyler, Associate Professor in Science Policy and Knowledge Infrastructure at UCL, discusses this further, saying that it may be acceptable to let Cummings witness the debate, as there are huge areas of uncertainty which need to be understood. However, if, as the Guardian article suggests, he was able to ask questions, this may risk politicising the scientific evidence before it goes up to COBR.
Whether or not Cummings had a role in the SAGE meetings, the media coverage has clearly led to even more scrutiny of SAGE and pressure for transparency about where the boundary between science and politics lies in the decision-making process.
What happens to the gathered advice after it has been given or the event has taken place?
SAGE has been accused of keeping their advice behind closed doors, with the New York Times describing the operations as being within a ‘virtual black box’. I have briefly touched on the growing calls for transparency due to the increased public awareness and media interest in SAGE as well as the ongoing rhetoric by the politicians that decisions are following the science. Consequently, headlines such as ‘Case for transparency over SAGE has never been clearer’ have been circulated in the last few weeks.
As the pandemic has progressed, the reasons for the calls for transparency have varied. In the initial stages, members of the public wanted to know why the UK was not in lockdown when other European countries were, and what the science that was continuously being referred to actually was. This led to the release of the scientific evidence on 16 March. This release, although criticised by some as being too late, was welcomed by several members of the scientific community as it was thought to be vital for building trust with the public.
However, the calls for transparency continued. The requests then turned to understanding what the lockdown exit strategy would be and who the experts actually were. In response to a request for these names by Rt Hon Greg Clark MP, Sir Patrick Vallance stated: ‘The decision to not disclose SAGE membership for the time being is based upon advice from the Centre for the Protection of National Infrastructure and is in line with the standard procedure for COBR meetings, to which SAGE gives advice. This contributes towards safeguarding individual members’ personal security and protects them from lobbying and other forms of unwanted influence which may hinder their ability to give impartial advice. Of course, we do not stop individuals from revealing that they have attended SAGE.’ (4 April 2020)
Since that response, the revelation that political figures had sat in on SAGE meetings increased the pressure from the media to identify the experts. Reports, including one from the New Scientist (27th April), indicated that Patrick Vallance had said the list of names would be released after the experts had been given the option to opt-out of being identified. This list was released on 4th May, followed by an additional release of evidence on the 5th May.
Some calls for transparency are going even further, with requests to release the minutes of SAGE meetings to understand why decisions were actually made. Sir David King, a former Chief Scientific Advisor, has also formed an ‘Independent SAGE’, with the first meeting (4th May) taking place via livestream on YouTube; this group was called ‘a rival panel of experts’ in the media.
Transparency is not a new concept and is commonly referred to in the disaster management literature. Even in reference to SAGE, previous (pre-COVID-19) reports have called for a more transparent process. Transparency is sought to develop trust with the public, and to cast the net of expertise further by giving other scientists and the wider research community the opportunity to scrutinise the evidence before decisions are made; such external scrutiny can help to avoid groupthink and identify blind spots. A counter-argument is that these decisions are being made in a time-constrained environment: is there time to hear all these voices? Furthermore, there are concerns about how to ensure that those voices come from people with the relevant expertise: “The world wants to know what the science is behind the decisions, but there is great danger of misinformation when media interest is amplifying the voices of scientists, but not necessarily those most qualified to comment.” (Gog, J. 2020).
It is abundantly clear from newspaper headlines that there has been a growing critique of SAGE being ‘secret’ and of decisions happening behind closed doors. Consequently there have been calls for this evidence to be put in the public sphere, both to allow for scrutiny and to build trust with the public. This pressure from the media and public appears to have led to the release of evidence and experts’ names, with the Government Office for Science recognising that “In fast moving situations, transparency should be at the heart of what the government does”.
What if the experts are wrong?
It is far too soon to know what was the right or wrong action to take, and perhaps we will never know, as there are so many parameters to consider. When looking at this question, however, there are three things that I want to discuss: uncertainty, consensus and blame, which are not mutually exclusive from one another.
After her own involvement in the Nepal earthquake’s SAGE meeting, Dr So posed the question ‘What if the experts are wrong?’, knowing that there is uncertainty in the models that make casualty loss predictions for earthquakes. There is clearly a great deal of uncertainty in this pandemic too, be it in the spread of infection, death rates or the impact of interventions. To avoid being ‘wrong’, this uncertainty needs to be communicated to both the decision-makers and the wider public.
I have already referred to the heated discussions within SAGE. There have been reports, including one from BuzzFeed, that there was no consensus as to when the lockdown and social distancing should be implemented. Some scientists argued that measures needed to be introduced immediately to halt the spread of the virus and “pleaded with the government to change tack or face dire consequences”, whilst others felt that introducing social distancing measures at that point in time would be unsustainable and lead to a second wave of infection. Within the same article, Vallance is reported to have said “If you think SAGE is a cosy consensus of agreeing, you’re very wrong indeed”. What is key here is that these areas of agreement and disagreement (as well as the uncertainty) are passed up to COBR in the evidence being presented.
The most concerning aspect of whether experts are right or wrong is the ‘blame game’, as accusations have emerged that Boris Johnson’s team are using the scientists as ‘human shields’. We are now seeing experts expressing concern with the politicians’ language, which can lead the public to misinterpret decisions as having been made by the scientists. In his interview with ITV, Sir Ian Boyd states that he always told ministers it is dangerous to say ‘I will follow the science’ as ‘essentially what they are doing is shifting the decision making role from them to the scientific advisors. And it would be better if they said ‘I will be strongly advised by the science’ or something like that’. Another issue with this phrase is that it does not always acknowledge that ‘in reality the science of this crisis had been “riddled with doubt, uncertainty, and debate”’ (Professor Robert Dingwall, a member of the New and Emerging Respiratory Virus Threats Advisory Group).
Many feel that a judicial review is inevitable following COVID-19. To avoid a blame game of who was right or wrong, it is important in the coverage of SAGE and the evidence presented to acknowledge the uncertainty and that there is not always consensus. A previous post by EuP Research Associate Federico Brandmayr speaks more about these issues of responsibility by considering what COVID-19 could learn from the L’Aquila earthquake.
It seems clear that an increase in public awareness of scientific advice has led to increased scrutiny of, and calls for transparency from, SAGE. Reflecting on the questions posed in our previous workshop has inevitably led to even more questions about the use of scientific expertise in emergency response situations, and I will conclude by highlighting three of these:
- Decisions go beyond science. It needs to be clear where the boundary between the scientific, economic and political spheres lies rather than repeatedly saying that decisions are following the science. Where is this line drawn? Is there even a line?
- SAGE has received critique for being secretive, which has led to numerous calls for transparency over both the scientific evidence and the names of the experts. This is considered important by many to develop and maintain trust with the public and also to allow for the scrutiny of evidence by a ‘larger net of experts’. The delay in the release of names was due to concerns that the experts might be put under undue influence. Perhaps a question for ‘next time’ is ‘How do we create an appropriate environment to allow for the open interrogation of evidence?’.
- To avoid experts ‘being wrong’, the communication of uncertainty is vital, as well as recognising that there might not actually be a right or wrong answer based on the evidence that experts have available at the time. In such an uncertain, time-pressured environment, disagreement is inevitable. These areas of consensus and disagreement need to be passed up to COBR to consider and also be communicated to the public for a truly transparent process. Has uncertainty been clearly communicated in the UK’s and worldwide response to COVID-19? If not, what should have been done differently?
Throughout the next few months, the EuP team will continue to reflect upon questions which COVID-19 has raised for our project through our series of blog posts with the overarching aim being to develop our understanding of the role of experts in bringing about social change.
Text by Hannah Baker (published 5/05/2020)
Chris Whitty image source (thumbnail): Unknown photographer / OGL 3 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3)
Cultures of expertise and politics of behavioral science: A conversation with Erik Angner
As part of our new series on expertise and COVID-19, Mike Kenny and Anna Alexandrova interview Professor Erik Angner of Stockholm University. Erik is a philosopher and an economist writing on behavioral economics, economists as experts, the measurement of happiness and wellbeing, Hayek, and the nature of preferences, among other topics. Recently he has commented on the need for epistemic humility and the uniqueness of the Swedish response to the pandemic. In the podcast we discuss cultures of expertise, contestation, the politics of behavioral science, and the relation of all three to the current crisis:
Listen to the segment on comparative cultures of expertise in Sweden and the UK, starting from 1:54
Responsibilisation of experts and disagreement between them, starting from 8:00
Who gets included in powerful expert groups, who gets sidelined and epidemiology as the current queen of sciences, starting from 16:11
Trust in epistocracies and its fragility, starting from 23:30
Value judgments in expert advice and how to make them responsibly, starting from 26:10
Discomfort of uncertainty, starting from 31:50
Behavioral science, nudge politics, absence of social science in all this, starting from 41:45
Text by Anna Alexandrova (30/04/2020)
A disaster researcher’s views on knowledge domains and information flows
Dr Emma Hudson-Doyle is based at the Joint Centre for Disaster Research at Massey University/GNS Science, Wellington, New Zealand. Her interests lie at the interface between physical science and critical decision-makers, with a primary focus on the communication of science advice during natural hazard events. Her current research focuses on the communication of forecast and model uncertainty. Other research areas and supervision topics include community resilience, social media in disasters, citizen science, aftershock forecasts, early warnings, motivations to prepare, communicating probabilities, low-likelihood risk, visual uncertainty, and the use of science advice in table-top emergency management exercises.
We contacted Emma before our ‘Disaster Response | Knowledge Domains and Information Flows‘ workshop for her viewpoints on the questions that were being addressed throughout the day and to act as a foundation for conversation in the focus group discussions. We would like to take this opportunity to thank Emma for giving up her time to provide us with her answers, which can be seen below.
What type of knowledge is and should be used?
This depends upon the problems and issues you are trying to address. Adopting a problem/solution focus – rather than a ‘knowledge’ focus, and thinking about the needs and concerns of the communities, organisations, and others affected by a disaster will lead you to understand which knowledge is needed. We must move from a ‘deficit’ model of communication and knowledge sharing where we assume we know what others need, towards a two-way partnership model. Accordingly, scientific knowledge may then work in partnership with (for example) indigenous knowledge, complementing each other to address the issue at hand.
What constitutes an expert?
Similar to above, there are many different experts. We need to move away from a generic label of “the expert” to more specific labels “the landslide expert”, “the psychology expert” etc. This respects more equally the different disciplines and epistemologies at the table, as it recognises each has expertise. Thus, with this in mind, I envision an expert as someone with extensive experience and training in a field or topic, able to make informed judgments by drawing on the evidence available. This training may not be formal/university, and thus an expert can include local community experts, etc.
How is and should uncertainty be factored into decisions and communicated?
There is a range of different academic views debated on this. On the one hand, people advocate for not communicating the uncertainty at all, as it can cause people to mistrust or deny the message, view the communicator as incompetent, allow people to interpret the information towards one end-member state of the uncertainty, or even cause harm if the decision-maker does not take a safety action because of the uncertainty. On the other hand, people advocate for communicating all uncertainties, as it is ethically and morally the correct action, the communicator is more honest and trustworthy, it enables decision-makers to plan alternative courses of action, and it is the true state of the knowledge.

I think there is a happy medium somewhere in the middle: if we were to communicate all the uncertainties it would be overwhelming during short, high-pressure decision-making situations, yet it is important to communicate them to enable decision-makers to make the best decisions with alternative actions. In order to assess what to communicate, we must therefore first identify the decision-makers’ needs, through relationship-building activities, so we can understand which uncertainties are relevant to them and their decisions, and which are not. Ideally this should be identified in pre-event planning, so that during an event scientists and others provide a ‘targeted’ supply of information to meet the decision-makers’ needs, communicating only the decision-relevant uncertainties; communicating all uncertainties could overwhelm a decision-maker with information (causing cognitive overload). Please see the attached for more discussion of this…
What happens to, and should happen to, knowledge after it is produced and the event has taken place?
This is a tricky one: it depends whose knowledge it is. If it is publicly funded, then it should be publicly available to aid other communities and future resilience building. If it is privately funded, one would hope it could be publicly available, but there may be company rights that mean it cannot be. If it is indigenous knowledge, then (depending on the customs of the indigenous people of the region) it is for the indigenous owners of that knowledge to decide.
Thank you again to Emma for these responses.
Responses received on the 6th February 2020.
Disaster Response | Knowledge Domains and Information Flows
An Expertise Under Pressure Workshop
11 February 2020
Cripps Court, Magdalene College, Cambridge
Organised by Hannah Baker, Rob Doubleday and Emily So
The ‘Disaster Response | Knowledge Domains and Information Flows’ workshop on the 11 February 2020 formed part of the Expertise Under Pressure Project (EuP), specifically the Rapid Decisions Under Risk case study.
The aim of this event was to explore the different knowledge domains and information flows in the context of disaster response situations, such as the immediate aftermath of an earthquake or volcanic eruption, or the ongoing response to the coronavirus, which has since become the global pandemic known as COVID-19. A reflection in light of this is provided at the end of this blog, following a description of the workshop’s proceedings. Attendees were from a range of disciplines, including representatives from the Centre for Science and Policy (CSaP), who have written their own summary of the day.
In the context of disaster response:
- What type of knowledge is and should be used?
- What constitutes an expert?
- How is and should uncertainty be factored into decisions and communicated?
- What happens to, and should happen to, knowledge after it is produced and the event has taken place?
Speaker Session 1
Hannah Baker is a Research Associate at the Centre for Research in the Arts, Social Sciences, and Humanities (CRASSH) at the University of Cambridge. She opened the day by providing an overview of the EuP project. The relevance of the topic was conveyed through a display of screenshots of multiple newspaper headlines referring to the use of experts in dealing with the coronavirus outbreak, which at the time (February 2020) was centred on Wuhan, China. The headlines were also used to highlight that these experts are not always in agreement with one another, one example being predictions of when the peak of the infection would occur.
A theoretical context for disaster management arguing that there are no ‘natural disasters’ was then provided. There are natural events, such as a volcanic eruption, but these turn into disasters due to social factors that increase the vulnerability of a population. Within the disaster management cycle there are four stages: prevention and mitigation, preparedness, response, and rehabilitation and recovery. Although this workshop focused on the response, the other stages are not mutually exclusive. In the response stage, the decision-making environment is uncertain, under time-pressure and can result in high impacts (Doyle, 2012).
The reasons for referring to 1) ‘Knowledge Domains’ and 2) ‘Information Flows’ in the workshop’s title were then outlined. To address the first part, disaster research regularly discusses the use of scientific expertise in decision-making; however, it is also recognised that information can come from elsewhere. For example, Hannah displayed a newspaper headline referring to the use of ‘indigenous expertise’ in combating the recent Australian bushfires.
In the disaster management literature there is also an emphasis on the need to create networks before an event takes place, to establish trust and facilitate the flow of information when that event happens. The concept of information flows can be extended further, as knowledge is communicated not only to and between decision-makers but also to the wider public. An issue here is ‘fake news’: as put by the Director-General of the World Health Organisation in response to misinformation about the coronavirus, now is the time for “facts, not rumours”.
Is knowledge driving advice or vice versa in the field of natural disaster management?
Emily So is the project lead for the ‘Rapid Decisions Under Risk’ case study, a Reader in the Department of Architecture at the University of Cambridge and a chartered civil engineer. Following the 2015 Nepal earthquake, Emily was invited by the UK’s Scientific Advisory Group for Emergencies (SAGE) to contribute her expertise on earthquake casualty modelling and loss estimations. SAGE provides scientific and technical advice to decision-makers during emergencies in the Cabinet Office Briefing Room (COBR).
Although casualty models take into account structural vulnerability, seismic hazards and social resilience, Emily highlighted that the interpretation of these models and their use for loss estimations is often based on knowledge and experience. She emphasised that those making these interpretations are unlikely to always have experience in the country in which the earthquake has occurred.
Emily’s participation in SAGE led her to question 1) What happens to the gathered advice? 2) What is the process of turning this information into decisions and actions? 3) What if we are wrong? These questions formed the origins of the EuP case study and use of experts in disaster response situations.
Thinking holistically about risk and uncertainty
Amy Donovan is a multi-disciplinary geographer, volcanologist and lecturer at the University of Cambridge. During the workshop she presented arguments from her recent paper: ‘Critical Volcanology? Thinking holistically about risk and uncertainty’. She reiterated that there are no natural disasters and then moved on to question what creates good knowledge, emphasising that risk in itself is incomplete. For instance, in her paper she states:
‘the challenge of volcanic crises increasingly tends to drag scientists beyond Popperian science into subjective probability‘ Donovan (2019, p.20)
Historically, the physical sciences have been better accepted as they can be modelled, whilst the social sciences are difficult to measure as they are people studying people and therefore subjective. However, Amy argued that risk is a social construction in itself and that datasets can be interpreted in different ways due to people’s experiences working in different locations around the world. She also affirmed that the social sciences need to be brought in at the start of the decision-making process, rather than at the end (which is commonly the case now and often only for communication purposes). A dialogue needs to be happening before a disaster even happens.
Amy also discussed the impact on the people consulted as experts in disaster response situations, who can themselves be affected because the advice they give can affect others’ lives. This is why the transfer of knowledge is important, as it is often difficult for scientists to control which parts of their knowledge are taken forward and how they are communicated.
Speaker Session 2
Nature and use of scientific expertise
Robert Evans is a Professor in Sociology at the Cardiff University School of Social Sciences, specialising in Science and Technology studies. The focus of his presentation built upon previous work on the ‘Third wave of Science Studies: Studies of Expertise and Experience’. Two key concepts within this paper are the notions of contributory and interactional expertise. The former is often the accomplished practitioner who can perform practical tasks and the latter was a new idea based on linguistic socialisation as the expert is able to communicate and speak fluently about practical tasks.
As no one can be an expert in everything, in their paper Collins and Evans state:
‘The job, as we have indicated, is to start to think about how different kinds of expertise should be combined to make decisions in different kinds of science and in different kinds of cultural enterprise‘ Collins & Evans (2002, p.271)
Robert also spoke about legitimacy and extension: legitimacy can increase as more voices are included, yet this poses the question of whether the quality of technical advice decreases if ‘non-expert’ inputs are given too much weight. This ties in with the concept of robust evidence. Although Robert spoke before COVID-19 became a global pandemic, his questions about how we handle controversial advice, and his point that scientific experts will not always reach a consensus, are now as relevant as ever.
Social domains of disaster knowledge and action
Dorothea Hilhorst is a Professor of Humanitarian Aid & Reconstruction at the International Institute of Social Studies (ISS) of Erasmus University Rotterdam. Dorothea’s paper, published in 2003, ‘Responding to Disasters: Diversity of Bureaucrats, Technocrats and Local People’ led to us thinking about the use of different knowledge domains in disaster response situations. Dorothea reflected upon this by linking the domains of knowledge to power, and also on how we see a disaster as alliances between the domains, such as science, political authorities, civil society and community groups. Within her paper, she states:
‘Instead of assuming that scientific knowledge is superior to local knowledge, or the other way around, a more open and critical eye needs to be cast on each approach…disaster responses come about through the interaction of science, governance and local practices and they are defined and defended in relation to one another‘ Hilhorst (2003, p.51)
Like Amy Donovan, Dorothea emphasised that there are ‘no natural disasters’ by referring to the change in disaster paradigms through time and by speaking about the concept of Disaster Risk Creation (DRC). This shift went from an attention to behavioural studies in the 1950s, to the entry of vulnerability and then community into the paradigm in the 1980s, a focus on climate change in the 1990s, and a turn to the concept of resilience in the 2000s. Dorothea noted that in her own work she focuses on situations where disasters and conflict happen at the same time, in which governance is even more complex.
Speaker Session 3
Disasters, evidence and experts: a case study from Evidence Aid
Benjamin Heaven Taylor is the Chief Executive Officer of Evidence Aid, an international NGO that works to enable evidence-based decision-making in humanitarian settings. Ben opened the discussion by describing the humanitarian ecosystem, which includes (but is not limited to) the UN, international NGOs and research bodies, as well as local civil society, the private sector and individuals. He showed a pyramid reflecting the hierarchy of evidence. Due to the time constraints of a disaster response situation, expert evidence is frequently used, but there is often a weak research-evidence base, meaning there is little basis for challenging experts’ views. The evidence base is often weak because it is inaccessible, ‘patchy’, or subject to political barriers. However, Ben emphasised that the use of experts isn’t necessarily a bad thing and that…
‘When used properly experts can be a vital mediator between evidence (which can be a blunt instrument) and practice. But experts (including scientists) can be influenced by bias, just like anyone‘ Taylor (2020) – Workshop presentation
The presentation concluded by referring to Evidence Aid’s theory of change with the overarching idea being that before, during and after disasters, the best available evidence is used to design interventions, strategies and policies to assist those affected or at risk.
Focus Group Discussions
As part of the day, we had three separate focus group discussions. The facilitator for each group opened with some thoughts provided by Emma Hudson-Doyle, a Senior Lecturer at the Joint Centre for Disaster Research at Massey University, New Zealand (Emma’s answers are provided in a separate blog post). Each group then built upon these initial thoughts and discussed the question. Summaries of the discussion topics are provided below.
In the context of disaster response…
What type of knowledge is and should be used?
Initially the conversation separated knowledge domains into two streams – science and indigenous knowledge. However, this separation was critiqued and considered to be a reductionist way of thinking. Although it was acknowledged it is important to be clear where knowledge has come from and the conditions in which it was created, it was suggested that perhaps it is more useful to think about knowledge as a network of clusters that may or may not be talking to one another.
Disaster response is an integrated problem and in a time constrained environment, it’s someone’s job to bring this separate and sometimes conflicting information together. As part of this role, the framing of the initial questions is vital in determining what knowledge is collected. A key issue with the collection of knowledge is credibility and the need to demonstrate trustworthiness. For one engineer in the group, model makers often do not have the ‘luxury’ of choosing data, and if they do, the determination of reliability is often subjective and determined by expert judgment.
How is and should uncertainty be factored into decisions and communicated?
The level of communication for uncertainty impacts the confidence that the public have in decision-makers and consequently the level of trust. The question of how much and how uncertainty is communicated was raised. For example, whether the uncertainty is presented as a number or through graphics is dependent on the type of event and the cultural context. Perhaps there is also a balance to be struck between communicating the full range of possibilities for transparency and not supplying too much information, which can cause cognitive overload.
Examples were given of model makers who are keen to communicate all uncertainties rather than make the decision themselves. Another example is that once a decision is made, if an immediate response is required, people ‘on the ground’ may simply prefer to be told what to do and given instructions. A potential way to balance the communication of uncertainty is a layered approach, which is sometimes used in healthcare: highlighting the information that needs to be known, but then allowing access to more detailed information if a patient wants to see the same level of detail as their clinician. However, it was recognised that in a time-pressured situation such as disaster response, this will be more difficult to formulate. Fundamentally, the question of communicating uncertainty was described as a moral and ethical judgement.
What happens to, and should happen to, knowledge after it is produced and the event has taken place?
This focus group began by discussing the initial collection of knowledge and how this is often based on visibility of, or access to, data or individuals. In some cases, 'experts' might be selected because of the institution they are from or their willingness to interact with the media, but this may not make them the most appropriate people to answer the questions at hand. In any case, wherever the knowledge has come from, transparency is key, and the group felt that the general public can act out of panic if they do not feel informed. If the release of information to the public is staggered, this can lead to a loss of empowerment. However, this must be balanced against communicating only what is necessary. In many cases, scientific experts should not be expected to communicate directly with the public; often a mediator is required. If there is not a clear and consistent line from the government, fake news and rumours are likely to be a major issue.
Reflections in light of COVID-19 being declared a global pandemic
Clearly the topics discussed in the workshop are highly relevant to the ongoing COVID-19 pandemic. In the UK, COVID-19 has been declared a national emergency and, at the time of writing, I am social distancing under the Government's new strict measures and working from home. COVID-19 is relevant to all the questions posed in our workshop, and to the content of the presentations and focus groups. I will now draw some links with the talks given by each guest speaker, while recognising that there are many more!
Amy talked about the transfer of knowledge and how it can pass out of the expert's control once imparted. Given the popularity of social media, there have been widespread issues of miscommunication, with platforms such as Twitter trying to direct people towards official information sources and the Government now hosting a daily press briefing.
Robert questioned how we handle controversial advice, noting that scientific experts are unlikely to reach consensus and that the final say rests with political actors. Repeatedly, we have heard Boris Johnson and other political actors saying that the decisions are being driven by the science. An important point here, raised by Professor David Spiegelhalter in the Centre for Science and Policy's (CSaP) 'Science, Policy & Pandemics' podcast (Episode 2: Communicating Evidence and Uncertainty), is that SAGE is not the decision-maker: it provides the evidence to inform decisions made by politicians. After calls for transparency, SAGE released the evidence guiding decisions and identified the core expert groups being consulted. As far as we are aware, this has not happened in such a short time frame for other events on which SAGE has provided advice, but perhaps this is because the pandemic is affecting us all rather than a specific geographic location.
One point in Thea’s presentation that stands out is that information/evidence sometimes needs to be simplified for people to understand and act upon. I would be surprised if people in the UK had not now heard the line ‘Stay at home, protect the NHS, save lives’, which is also on the front page of a booklet circulated nationwide summarising the action that needs to be taken by individuals and includes illustrations on the correct way to wash hands (see figure below).
Over the past few weeks, Evidence Aid has been preparing collections of evidence relevant to COVID-19. Their aim has been to provide the best available evidence to help with the response, supporting Ben's proposal that disaster response requires research-evidence-based decision-making.
There is clearly a lot of uncertainty about COVID-19 and as the situation is changing day by day, it is impossible to comment on what the right or wrong approach is, and this approach has differed from country to country. One of the aims of our project is to now establish what evidence and type of experts different countries have relied upon and why the interventions have differed.
Members of the EUP team have also started a blog with opinion pieces about the pandemic, including: 'Are the experts responsible for bad disaster response?' and 'Reading Elizabeth Anderson in the time of COVID-19'.
Text by Hannah Baker (published 23/04/2020)
Photographs within text by Hannah Baker, Cléo Chassonnery-Zaïgouche & Judith Weik
Thumbnail image by JohannHelgason/Shutterstock.com
Disaster Response | Knowledge Domains and Information Flows
11 February 2020, 10.30–17.00
Cripps Court, Magdalene College, University of Cambridge, CB3 0AG
Hannah Baker, Research Associate, CRASSH (University of Cambridge)
Robert Doubleday, Executive Director at the Centre for Science and Policy (CSaP), (University of Cambridge)
Emily So, Reader in the Department of Architecture (University of Cambridge)
Disaster management comprises several phases, including preparedness, mitigation, response and recovery. Critics argue that current disaster management practices are technocratic and call for a co-production of knowledge. This workshop therefore explores knowledge domains and flows of information in the context of disaster response. When responding to an earthquake, volcanic eruption, pandemic or other emergency, decisions need to be made at governmental level and on the ground. Information has to be collated, understood and disseminated in order to make decisions in these time-pressured environments subject to uncertainty.
The workshop addresses a range of questions in the context of disaster response:
- What type of knowledge is and should be used?
- What constitutes an expert?
- How is and should uncertainty be factored into decisions and communicated?
- What happens to, and should happen to, knowledge after it is produced and the event has taken place?
Speakers from different disciplinary backgrounds represent both academia and policy, emphasising the need to think holistically about these problems. The workshop includes focus groups to allow for in-depth discussions about the questions posed and to facilitate collaboration between participants.
Amy Donovan (University of Cambridge)
Robert Evans (Cardiff University)
Dorothea Hilhorst (Erasmus University Rotterdam)
Mausmi Juthani (Government Office of Science)
Benjamin Taylor (Evidence Aid)
This is an interactive workshop, with the purpose of bringing together people from a range of disciplines and experiences. The target audience includes (but is not limited to) people working in/researching expertise, organisational theory, knowledge production and dissemination, and disaster management.
All participants are expected to take part in the focus groups. Multiple perspectives and levels of experience are encouraged, and facilitators will be on hand to manage discussions.
The workshop is followed by the Centre for Science and Policy’s (CSaP’s) annual lecture, which participants may also find of interest. This will be delivered by Professor Dame Sally Davies, Master of Trinity College, Cambridge and former Chief Medical Officer for England and Chief Medical Advisor to the UK Government. The lecture will take place in St John’s College at 17:30. Anyone interested in attending should register with CSaP.
This workshop forms part of the Expertise Under Pressure (EUP) project, funded by the Humanities and Social Change International Foundation. The EUP project’s overarching goal is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach and what realistic demands communities should place on experts.
Queries: Contact Una Yeung
Expertise, Adult Learning & Intercultural Effectiveness
In November 2019, Fodé Beaudet of the Canadian Foreign Service Institute at Global Affairs Canada visited the UK as part of a project to better understand how to design, facilitate and evaluate work that supports behavioural change at the individual, group or system level. He met with Anna Alexandrova and Hannah Baker at the University of Cambridge to discuss overlaps between his own work and the Expertise Under Pressure project. He subsequently and kindly agreed to answer, as a blog post, the questions we put forward during our 'Expert Bite' discussions, and we hope that there will be further collaboration in the future.
Senior Learning Advisor, Centre for Intercultural Learning, Canadian Foreign Service Institute (CFSI) at Global Affairs Canada
Fodé Beaudet is a Senior Learning Advisor at the Centre for Intercultural Learning with Global Affairs Canada (GAC). He has extensive experience in designing and facilitating multi-stakeholder initiatives around the world – themes include train-the-trainer platforms to facilitate change, Whole of Government Approaches to strategic collaboration, navigating through Complex Adaptive Systems and strengthening the intercultural effectiveness of professionals working overseas. Clients include international NGOs, global networks and institutions, foreign governments, the defence sector, research institutions and partner agencies affiliated with GAC. He currently serves on the Board of Directors of the Institute for Performance and Learning (I4PL).
Adult learning approaches and intercultural effectiveness
For the purpose of this blog, I will focus mostly on our adult learning approach to strengthening intercultural effectiveness competencies. Established in 1969, the Centre for Intercultural Learning (CIL) is Canada's largest provider of intercultural and international training services for internationally assigned government and private-sector personnel. One of the CIL's most significant research products was the development of a competency-based model for intercultural effectiveness, the profile of the interculturally effective person (Vulpe, Kealey, Protheroe, & Macdonald, 2001). This model, delivered with an experiential learning approach (Kolb, 1984), has proven successful in helping to prepare individuals for short- or long-term missions. According to Kolb, "learning is the process whereby knowledge is created through the transformation of experience" (1984, p. 38). The experiential learning model, adapted from Kolb, takes learners through a series of cycles, as indicated below.
The expert-knowledge content is often integrated at the 'Generalize' stage of the ERGA learning cycle. At this point, the expert has a good overview of the knowledge in the room and can best distil valuable insights to complement what is already known. This may involve validating current knowledge, nuancing some points of view, or challenging what was said. The 'Application' step invites learners to discuss, in groups or individually, how to apply what they have learned, integrating their peers' knowledge as well as the expert's contribution. The cycles of learning loops thus look more like a spiral than a circle.
1. Considering your research and/or work in practice, what makes a good expert?
Understanding the andragogic approach to learning: comfort with emergence. We distinguish good intercultural experts by their ability to acknowledge and recognize how the content of their contribution can reinforce the knowledge and experience in the room. Expert content does not always have to be elaborated at length, because learners may have reached similar insights. A good expert will challenge, validate, nuance or enrich content. As such, comfort with emergence means a predisposition for active listening and an agility based on what is said in the moment. For an expert facilitator, comfort with emergence and an understanding of the andragogic approach to learning also apply. For instance, our train-the-trainer approach involves very little content from the facilitator, who will rarely speak for more than five minutes. In a train-the-trainer format, learners assess, design, facilitate and evaluate their work collaboratively, in real time. Reflective practice encourages learners to deepen their self-awareness about the experience. I recall when I was first introduced to this work, co-facilitating a train-the-trainer: my first reaction, when learners were struggling with a task and requesting an example, was to provide one. And then I got into murky water: how would you proceed next? My gifted co-facilitator at the time took me to the side and reminded me: let them struggle. Let them figure it out. This is important. In this particular format and context, an expert facilitator has to hold back and suspend their ideas or creativity. It's about setting the container for learners to shine. The less an expert facilitator says, or is seen, the better. Then learners become the experts, taking ownership of and responsibility for finding solutions instead of asking for them.
I also recall a learner asking about expectations at the start of a train-the-trainer workshop: "How can I deal with difficult people?" At the end of the workshop, the learner shared this, which I paraphrase: "I think it is me who can be difficult sometimes." In other words, we are not isolated from the system we wish to intervene in. We are part of any system. And as long as we project 'fixing' to be solely about others, we may miss an important blind spot. This means making discomfort and silence your friend. During another train-the-trainer workshop in the Middle East, a man shared a powerful testimony. He pointed to a woman in the group and said for all to hear:
“Before coming here, I didn’t think women could lead. Not only were you a leader, working with two men, but I have two daughters and I hope someday they will be like you.”
This is the transformative potential of participatory processes when learners take an active part in co-creating with others. Hierarchy breaks down. Self-organization is encouraged. And mental models are challenged. Because the expertise is across the room.
Inquisitive mind. A good expert asks skilled questions to better understand the learner profile. In contrast, an expert with a set presentation, seeking to repurpose what she or he has already repeated, may not be a good fit.
Humility. Here is an anecdote to make the point. At the beginning of an intercultural course about an Asian country, the intercultural expert said: "If someone tells you they know the country, they don't know the country." He described his experience at length, but only to convey how he fell short of truly understanding the culture and how much he had to learn. A learner who was born and raised in that particular country said to the expert, for everyone to hear: "You understand the country very well." This humility, in some circles, may be counter-intuitive. However, when engaging with what it means to be effective across cultures, humility promotes curiosity and deepens one's inquiry into complex webs and layers that evolve and transform over time. In contrast, assertiveness and certainty about culture in general terms can do a disservice to learners, reinforcing assumptions and inadvertently promoting ill-conceived predictive behaviours. Here is one example of how the model can be adapted. Years ago, I was collaborating with a client hosting a Japanese delegation. One of their goals was to come up with a partnership agreement. They sought our services to learn about Japanese culture. We had less than a day. I worked with an expert facilitator and an intercultural expert with knowledge of Japanese culture to design an approach. Given the client's end goal, we devised a plan in which the client was given an opportunity to reflect on how their current approach related to Japanese culture. In the morning, they learned key features of the culture. Then, in the afternoon, we asked the client to describe how they intended to host their counterparts, dividing the task into three categories: activities before the delegation arrives, during the visit, and after. They presented their findings, strategies and questions back to the intercultural expert. Details surfaced, such as the value of greeting the delegation at the airport, reviewing lunchtime allocation, and setting up social as well as work-related activities. But the most insightful learning for the client was the need to reframe their goal: given the decision-making process they had learned about, it was unrealistic to expect a partnership agreement so soon. Emphasis in hosting was therefore on nurturing and building relationships. I give a lot of credit to the client, who, instead of forcing their approach, reframed the goal itself. And the intercultural expert had the wisdom to let learners surface their assumptions, complementing their strategies with her own insight.
2. What are the pressures experts face in your field?
No single organization or individual can understand or convey the full extent of what needs to be learned. Hence the importance of acknowledging the boundaries of expert knowledge while being confident in what one can contribute. Problems are not solved with the help of one expert, but through collaboration among diverse expertise. I'll weave in some examples of our work and research in addressing complex problems in a multi-stakeholder context. Here are a few pressures experts face:
Attribution: how do you communicate the value of your expertise when the result is part of a larger ecosystem? How important is it for the expert to be visible, versus promoting the visibility of others?
Tensions between asking good questions vs offering answers. Inevitably, someone will want to know: can you tell me exactly what I need to say? What I need to do? How will someone respond if I do X? What if I do Y? 'It depends' doesn't answer the question and can be frustrating. And yet, 'it depends' can also be closer to the truth than coming up with a reassurance in the moment that can be deceptive later. Probing further questions about what is being asked may lead to better answers. It's also tricky: aren't you supposed to be an expert? Aren't you supposed to know?
This is where experts and clients may collude: in the search of an easy solution, in the effort to prove that something is done, the lure of pursuing the wrong problem can provide an illusory solace, and therefore lead both experts and clients to ‘tick the box’, indicating that ‘actions’ were made.
Contractual agreement. Building on the previous points, experts feel pressure to operate within the contractual agreement they are accountable to. Yet, when facing complex problems, the emergence of unexpected scenarios may require reframing what was understood to be the problem. If the contract has little flexibility, a new understanding of a problem cannot be accounted for, and experts are torn between the reality of what they see and the murkier expectations of what they should deliver.
Hence, to make the most of a good expert, there is also a need for a good procurement process. Faulty procurement, with little flexibility, can turn the best experts into the worst, all with good intentions. For example, a common reflex when seeking an easy solution is training. Yet, in many instances, training is not the answer. Here's an example of how a mental model shifted as a result of incorporating generative dialogues in our train-the-trainer workshops. Generative dialogues put emphasis on building a shared understanding of a problem or an inquiry, reflecting on the collective wisdom, and proposing actions to move forward. David Bohm (2013), among others, has written extensively about the value of dialogue. A key feature of generative dialogue is that neither the problem/inquiry nor the solutions are predetermined. Everyone's voice is valued. The physical environment is key: conversations in small groups or in a large circle, as opposed to a lecture-style format. During one train-the-trainer workshop in the Horn of Africa, a learner approached me after experimenting with this approach, saying: "You mean, we don't need training? We can have dialogue?" In some instances, yes. Training has its place too. Lecture-style formats have their place too. It's a matter of finding the appropriate response to the problem at hand. Generative dialogue requires dealing with discomfort, with not knowing what the answer will be. In other words, generative dialogue is useful when dealing with complex problems. And perhaps paradoxically, while it's an ancient practice well known among First Nations communities, it may also be part of a practice for approaching the future.
3. Have you observed any significant changes occurring in recent times in the way experts operate?
The concept of the wisdom of the crowd has certainly grown. In the context of how experts operate, I would say there is a greater openness and inquisitiveness about how their part fits into a broader picture. I have noticed several experts exemplifying Ed Schein's (1999) stance of the process consultant, whereby an expert helps clients identify the problem properly. I have also noticed this stance when experts work in the field of systems thinking. This may reflect experts gaining knowledge not only about the content they can provide, but also about the process they can support.
The rise of social labs is perhaps an indication of an emerging 'expertise': the art of convening space for experts to co-create. One of many examples would be the Art of Hosting. A change I see is the blend of expert content and the expert skilled at holding the container. I remember working with a client system that insisted on pushing the boundaries of what is possible. They were seeking to create new theories of change for education in Africa. In this context, all participants had a voice, whether former ministers of education, deans, vice-deans, farmers, CEOs or students. In hindsight, this reflected the complementary role of expertise. It's alluring for an expert to define what should be done. But in some instances, a community, a collective, or a multi-stakeholder representation is needed to name, envision and frame a compelling and shared understanding of a future horizon. Building from a shared vision and understanding, the role of experts is clearer. But if we name or assume a particular expert model too soon, especially prior to a shared understanding, the risk is falling into the trap of a fashionable trend that serves a model rather than the actual problem or appreciative inquiry.
Finally, I also think that Donald Schön, in his thought-provoking 1983 book The Reflective Practitioner, offered insights still relevant today: the growing shift away from purely technical knowledge in addressing complex problems and, as a result, a more reflective professional stance, whereby framing the problem is of the utmost importance.
4. Do you envision any changes in the role of experts in the future?
It's unclear to me whether the future will look like the continuation of a trend or a pattern, or whether there will be significant upheaval or a sudden bifurcation. One factor that may influence the role of experts is our relationship with evidence and facts. The other is the independence with which experts operate: to what extent is the ecosystem in which experts evolve vulnerable to collusion, unconsciously or implicitly? In the latter case, experts may serve to prove a point of view, rather than to enrich points of view.
In the context of intercultural effectiveness, I foresee a continued expansion beyond the classroom and the virtual environment to include learning journeys and more action-learning-oriented approaches. I could see the experiential learning model being strengthened to include mindfulness, where sensemaking is not limited to what one thinks or feels, but also involves inquisitiveness about the body's way of learning. This would call for more immediate attention to presence, in order to witness and discern how we are influenced by past experience, which affects how we project into the future. However, embracing a more direct perception with fewer filters is an uncomfortable place to be, to hold, but one rich with clarity.
Overall, I see a growing emphasis on two streams that have been more or less separated: technical and process-oriented expertise. The latter is about holding the container, sensemaking, facilitation, convening and hosting skills. The former delves deeper into system dynamics and social network analysis, where culture is understood as systems, transcending borders. Here's an example of what I've learned from experts in these fields through work projects: when we integrate the lens of systems and how nodes interact with each other, we see common features among patterns. Systems are self-reinforcing. Systems have a purpose, which is to survive. 'The purpose of a system is what it does,' as the cybernetician Stafford Beer once said. This has implications for living in a turbulent environment where survival instincts are heightened: if the purpose of a system is its own survival, then perhaps change happens by understanding patterns and purpose beyond their perceived shared values. I found the experience of collaborating with experts in social network analysis, while inviting stakeholders into a sensemaking analysis, revealing.
I wouldn’t be surprised, in the future, to witness more and more of these conversations among technical expertise and process-oriented approaches. Perhaps a process without good content is as useless as good content without a proper process to make sense of it.
Bohm, D. (2013). On Dialogue. Abingdon, Oxon: Routledge.
Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. New Jersey: Prentice Hall.
Schein, E. H. (1999). Process Consultation Revisited: Building the Helping Relationship. New York: Addison-Wesley.
Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
Vulpe, T., Kealey, D., Protheroe, D., & Macdonald, D. (2001). A Profile of the Interculturally Effective Person. Centre for Intercultural Learning, Canadian Foreign Service Institute.