
Searching for the facts in a global pandemic

by Shauna Concannon

In times of great uncertainty, such as global pandemics, the appetite for reliable and trustworthy information increases. We look to official authorities and our peers for guidance. However, as has quickly become evident, the veracity and authenticity of information being circulated is not always sound. As people are increasingly connected to their devices, checking social media and news sites for information and guidance to inform their own actions, the potential impact of misleading and harmful content is more keenly observed. 

Misinformation exacerbates uncertainty and provokes emotive responses such as fear, anger and distrust. It can also inform potentially harmful behaviours and attitudes, which, in the context of a global health pandemic, carry high stakes. With COVID-19, the global reach and severity of the situation, together with the increased connectivity afforded by digital platforms, make for a dangerous combination.

The World Health Organisation (WHO) explains that the ‘infodemic’ – the ‘over-abundance of information – some accurate and some not’ – is a significant concern, and ‘makes it hard for people to find trustworthy sources and reliable guidance when they need it’.

World Health Organisation

How are platforms dealing with misleading content?

Image: LoboStudioHamburg via Pixabay

Social media platforms such as Twitter have reported a surge in usage since the onset of the pandemic, making for a particularly captive audience for information, whether accurate or not. The director-general of the WHO, Tedros Adhanom Ghebreyesus, warned in his address on 15 February 2020 that ‘[f]ake news spreads faster and more easily than this virus, and is just as dangerous’.

In a statement released on 11 May 2020, Twitter outlined their updated guidelines on how they are handling misleading content about COVID-19 on their platform. They have begun using a tiered approach to determine how problematic content should be handled. Content is to be classified based on potential harm (moderate/severe) and whether it is misleading, disputed or unverified.

Misleading content, i.e. a tweet containing statements that have been confirmed false, will be removed where the propensity for harm is severe. Where harm is moderate, the tweet will remain but be accompanied by a label reading ‘Get the facts about COVID-19’, linking to trusted information sources or additional information about the claim. For tweets containing disputed claims that carry the potential for severe harm, a warning will be issued, and the user will have to choose to reveal the tweet.
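This tiered approach amounts to a small decision table. The sketch below encodes only the cases the statement describes; the function name and the handling of unverified claims are our own illustrative assumptions, not Twitter's implementation:

```python
# Illustrative decision table for the tiered moderation policy described
# above. This paraphrases the blog's summary of Twitter's statement; it is
# NOT Twitter's actual code, and the 'unverified' branch is an assumption.

def moderation_action(claim_type: str, harm: str) -> str:
    """Map a (claim type, potential harm) pair to a moderation action.

    claim_type: 'misleading', 'disputed' or 'unverified'
    harm: 'moderate' or 'severe'
    """
    if claim_type == "misleading":
        # Confirmed-false statements: removal when harm is severe,
        # otherwise a 'Get the facts about COVID-19' label.
        return "remove" if harm == "severe" else "label"
    if claim_type == "disputed":
        # Disputed claims with severe potential harm get a warning the
        # user must click through to reveal the tweet.
        return "warning" if harm == "severe" else "label"
    return "no action"  # unverified claims: not detailed in the summary
```

Even this toy table makes the hard part visible: everything depends on first classifying a tweet's claim type and potential harm, which is where the fact-checking difficulty lies.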

What isn’t clear is exactly how Twitter are carrying out these fact-checks. In an earlier blog post, Twitter stated that they were stepping up their automatic fact-checking in response to COVID-19. Fact-checking is an increasingly vital approach for tackling the rapid spread of false claims online. While there is an urgent need for automated systems that detect, extract and classify incorrect information in real time, it is an extremely challenging task.

Why automated fact-checking is so difficult

In January we held a fact-checking hackathon (you can read more about that here) where participants developed automated approaches to assess claims made about Wikipedia data. Even with a static set of claims and predefined evidence sources, the process of verifying claims as true or false is highly complex. To start with, you need a veracious baseline to assess against: a database of facts and falsehoods. This is challenging in the current context, when new information is constantly being generated. The International Fact-Checking Network (IFCN) has created a database of fact-checks, sourced from fact-checking organisations across the world; while this is an extremely useful resource, it is not a complete database. Then, given a claim to be fact-checked, and the availability of a corresponding fact-checked entry in your veracious database, you need to classify sentences either as supporting or refuting the original claim, or else as providing too little information to do either. Consider the following example:

“5G is responsible for COVID-19.”

This certainly reads as a pretty unequivocal endorsement of the (debunked) theory that 5G is linked to how COVID-19 spreads.

However, what if we consider the following:

“Oh okay… 5G is responsible for COVID-19. Come on people!”

For a human reader, the preceding ‘oh okay…’ and trailing ‘come on people!’ suggest that this could be an incredulous response to the proposed link between the virus and mobile internet connectivity. Language is flexible, and interpreting implied meaning often requires us to consider context: both what is said around a particular sentence and the interactional sequence in which it occurs, for example:

“What is the most ridiculous conspiracy theory about COVID-19 you have heard?”

Reply: “5G is responsible for COVID-19.”

While this example serves to emphasise how relatively straightforward sentences can be taken out of context, content designed to mislead often strategically incorporates truths or half-truths. As Kate Starbird (2019) observed, ‘disinformation often layers true information with false — an accurate fact set in misleading context, a real photograph purposely mislabelled’. The recently published Reuters Institute factsheet summarising COVID-19 misinformation corroborates this: 87% of the misinformation in their sample circulated on social media involved ‘various forms of reconfiguration, where existing and often true information is spun, twisted, recontextualised, or reworked’, while purely fabricated content accounted for only 12%.
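A toy sketch makes the pipeline concrete, and also shows exactly where it breaks. The snippet below matches a claim against a small, hypothetical fact-check database by surface similarity alone; the database entries, labels and threshold are all illustrative, and real systems use trained stance classifiers rather than string matching:

```python
# Toy claim matcher: look a claim up in a small fact-check database by
# surface similarity. Everything here (entries, labels, threshold) is
# illustrative; it is not how Twitter or the IFCN actually fact-check.

from difflib import SequenceMatcher

FACT_CHECKS = {
    "5g is responsible for covid-19": "REFUTED",       # debunked claim
    "washing hands reduces transmission": "SUPPORTED",
}

def verify(claim: str, threshold: float = 0.75) -> str:
    """Return the stored verdict for the closest entry, or NOT ENOUGH INFO."""
    claim_norm = claim.lower().strip(' ."!')
    best_entry, best_score = None, 0.0
    for entry in FACT_CHECKS:
        score = SequenceMatcher(None, claim_norm, entry).ratio()
        if score > best_score:
            best_entry, best_score = entry, score
    return FACT_CHECKS[best_entry] if best_score >= threshold else "NOT ENOUGH INFO"
```

Such a matcher handles the bare claim, but it has no way of recognising the sarcastic version above: it either matches the embedded claim (wrongly treating the tweet as an endorsement) or misses it entirely, depending on an arbitrary threshold, which is precisely the context problem the examples illustrate.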

It is easy to see how content can be misclassified, and this is a key challenge for automated fact-checking. If systems are oversensitive or insufficiently trained, content can be incorrectly flagged and removed. In March, Facebook’s efforts to control the spread of misinformation led to genuine news articles being removed from its site. Twitter acknowledge this limitation and explain that they will be cautious in their approach and are unlikely to immediately suspend accounts:

“We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes. As a result, we will not permanently suspend any accounts based solely on our automated enforcement systems.” 

Twitter blog post

By the time a post is flagged and removed it will likely already have been seen by many people. Ideas are very resilient and hard to unpick, particularly when they are shared at speed and integrated into individuals’ sense-making practices. A recent article in the New York Times showed that in little over 24 hours a misleading Medium article was read by more than two million people and shared in over 15 thousand tweets, before the blog post was removed.

Consequently, fact-checking approaches can only be so effective. Even if misleading content is flagged or reported, will it be removed? And if it is, by that time it will have spread many times over, been reformulated and shared in other forms that may fly under the radar of detection algorithms. 

Fact-checking alone is not enough

Even if a perfect automated fact-checking system did exist, the problems of spread and persistence would remain, which is why strategies such as those being rolled out by social media platforms need to be empirically evaluated.

While disinformation campaigns are typically thought to be powered by nefarious actors, misinformation can be passed on by well-meaning individuals who think they are sharing useful and newsworthy information. A key challenge is that once a piece of fake news has made an impression it can be challenging to correct. Furthermore, it has been shown that some people are more resistant to fact-checking than others, and even once a claim has been disproved, biased attitudes remain (De keersmaecker & Roets, 2017).

A paper published by Vosoughi et al. (2018) found that “false news spreads more pervasively than the truth online”. This is particularly problematic, as one way we judge something to be true or false is by how often we come across it (exposure bias). Furthermore, fake news is typically created to be memorable and newsworthy, with emotive headlines that appeal to the concerns and interests of its readership, playing on readers’ emotional biases and tapping into their pre-existing attitudes (myside bias). Considering the cognitive processes that shape how individuals engage with misinformation can inform new thinking on how to mitigate its effects. Recent work by Roozenbeek & van der Linden highlights how psychological inoculation can build attitudinal resistance to misinformation.

Slow but steady progress

Clearly, working out how best to approach these challenges is not straightforward, but it is promising that there is a concerted effort to develop resources, tools and datasets to assist in the pursuit. Initiatives such as The Coronavirus Tech Handbook, a crowdsourced library of tools and resources related to COVID-19 initiated by Newspeak House, which includes a section on misinformation, highlight how distributed expertise can be harnessed and collated to support collective action.

Similarly, it is promising to see organisations experimenting with different formats for information provision that may prove more accessible for people trying to navigate the masses of information available. For example, the Poynter Institute’s International Fact-Checking Network (IFCN) has created a WhatsApp chatbot and the WHO has developed a chatbot for Facebook Messenger. While these use very constrained interaction formats, i.e. choosing from preselected categories to access information, they may well be, for many, more accessible than reading a long web page.

The WHO chatbot on Facebook Messenger presents key messaging in an interactive format.

In the context of COVID-19, individuals are required to interpret what has at times been ambiguous guidance and make decisions about how best to protect themselves and their families. This backdrop of uncertainty will inevitably drive up information seeking behaviours. While it may seem that each day unveils new examples of misleading content, having pages dedicated to the issue on many leading websites can be no bad thing. Hopefully, it will help to build public awareness that assessing information online requires scrutiny. The issue comes down to trust and how we as content consumers assess which information we deem reliable.

In the foreword to Spinning the Semantic Web, published over 17 years ago now, Tim Berners-Lee stated that a mechanism for encoding trust was essential to the flourishing of the internet. In this nascent, projective vision of how the web should function, Berners-Lee explained: “statements of trust can be added in such a way as to reflect actual trust exactly. People learn to trust through experience and though [sic] recommendation. We change our minds about who to trust and for what purposes. The Web of trust must allow us to express this.” (Spinning the Semantic Web, p.xviii). He outlined the mission of the W3C as being to “help the community have a common language for expressing trust”, not to take “a central or controlling role in the content of the web.” Clearly the internet has developed in ways unforeseen by the authors in 2003, and in 2020 we find ourselves grappling with these very challenges of balancing top-down and bottom-up approaches to developing an ecosystem of trust.

Useful resources and further information

Full Fact, a UK-based fact-checking organisation, has published a useful post on how individuals can guard themselves against misleading information.

If you would like to find out more about COVID-19 misinformation, we recommend the following podcasts:

Full Fact Podcast

Episode 5 of the Science and Policy podcast from the CSaP

Nature CoronaPod – Troubling News

Economists in the City #3

Posted by Cléo Chassonnery-Zaïgouche

From Cities to Nations: Jane Jacobs’ Thinking about Economic Expansion

by Cédric Philadelphe Divry

Jane Jacobs (1916-2006) is best known for her critique of top-down forms of city planning and urban renewal projects, laid out in the groundbreaking The Death and Life of Great American Cities (1961). This had a profound impact on the profession of urban planning, boosted the institutionalization of urban economics, and considerably influenced community organizing. She was one of the most celebrated thinkers about cities in the twentieth century, and her ideas have had a long-lasting influence on urbanism.

What is less well known is her economic thinking, which was developed in her subsequent works, although a partial reinterpretation of this has gained traction since the 1990s. However, a more comprehensive interpretation is needed if we are to grasp the nature of her theorization of cities, and the policy lessons she drew from her insights.

Numerous interviews with her, events in her honor and commentaries on her work have mostly focused on her urban thought and activism, and advanced the appealing narrative of a housewife standing up to men bulldozing the neighborhoods of 1950s New York City. But when asked about her own legacy, she dissented from this view:

If I were to be remembered as a really important thinker of the century, the most important thing I’ve contributed is my discussion of what makes economic expansion happen. This is something that has puzzled people always. (Jacobs, 2001)

A reading of all of her works brings to the fore a thinker theorizing the economic behavior of cities on the basis of a rich understanding of the urban social fabric. Starting with The Death and Life of Great American Cities, and across five subsequent decades, she led an enquiry into the mechanisms of what she termed “economic expansion,” and elaborated her own approach to the dynamics of economic growth and development.

The Economy of Cities (1969) and Cities and the Wealth of Nations (1984)

Two works are especially important in the development of her thinking. In The Economy of Cities (1969), she builds upon the insights of her earlier The Death and Life of Great American Cities to derive a “city economy model.” This describes how the economic lifecycles of cities depend largely on the relation between local economic activity and inter-city trade, from periods of “explosive” growth to eventual stagnation and economic decline. Fifteen years later, Cities and the Wealth of Nations: Principles of Economic Life (1984) argued against the idea of nations as economic units, highlighting the contradictory dynamics of political expansion and long-term economic development. In 2006, she left behind the first draft of what would have been a broader synthesis of her economic theory.

A series of diagrams in the appendix of The Economy of Cities (p.253, p.257) illustrates a number of stages in the lifecycle of a city economy. On the left, an early stage where the export (E) of a good previously produced (P) for local consumption (C) may trigger an increase in imports (I) accompanied by population increase (part of the “export multiplier effect,” EM). On the right, part of the process of a later stage dubbed “import-replacing,” when a local economy replaces a significant number of former imports and grows relative to its export economy.

Considering her contribution to economic theory may seem counter-intuitive. In addition to lacking academic credentials, she took little interest in engaging with the discipline of economics. Her models were neither formal nor developed in reference to existing models, and her view of economic theory in general was dismissive. In the opening chapter of Cities and The Wealth of Nations, “Fool’s paradise,” Jacobs lays out a history of economic thought and arrives at this sweeping conclusion: “Choosing among the existing schools of thought is bootless. We are on our own.” The same dismissive stance extended to academic institutions: she refused numerous honorary degrees from various universities.

And yet, some economists have picked up on her insights. A type of economic externality has been derived from her detailed historical accounts of new economic activities arising from urban diversity. Chicago and Harvard urban economists Glaeser, Kallal, Scheinkman, and Shleifer credited Jacobs in 1992 with identifying cross-industry knowledge transfers, which they dubbed “Jacobs externalities.” The concept was based on Jacobs’ The Economy of Cities and posits that knowledge transfers occur between different industries and that local competition supports economic growth. This came four years after future Nobel prize recipient Robert Lucas pointed to Jacobs’ work while investigating the external effects of human capital in his 1988 article On the Mechanics of Economic Development, although without formalizing his insight. Lucas’ endorsement earned Jacobs increasing recognition among economists over the following decades. Paul Krugman described her as a “patron saint of the new growth theory”, and her unusual status was summed up by Robert Dimand and Robert Koehn, who saw her as “her own distinctive kind of political economist … an exceptional instance of a woman without academic affiliation or university training achieving recognition among leading academic economists”. A considerable literature grew up after Glaeser et al.’s piece, and by 2016, based on a growing number of attempts to quantify Jacobs externalities, academic interest in her work was at an all-time high.

But although Jacobs does repeatedly make the argument for local diversity which is captured by the externality bearing her name, it merely serves as the basis for her wider system. From her understanding of local economic life in an urban setting, she derived a model of the city as an economic unit with a structure and a lifecycle, which in turn she used to analyze macroeconomic phenomena. Despite this interest in her work, extended reassessments of her contribution to economic thought have yet to appear. Jacobs’ stance towards economic theory may somewhat explain this situation. But while her writings do not fit the template of conventional economics, they do connect her with other established interpretative paradigms.

Analogies and evolutionary thinking

One line of interpretation of Jacobs’ work explores her repeated use of ecological and biological analogies. She often made the case that looking at patterns in nature may help understand dynamics in the economy. But although these analogies highlight the dynamic aspects of her conception of city economies, they are not sufficient evidence of her evolutionary understanding of economic development.

The city economy model, first developed in The Economy of Cities, argues that the desirable diversification of local economic activities depends largely on the destination of goods and services entering the city’s economy. The key claim is that imports are key to economic development: they embody knowledge and allow further diversifications in the local economy, as imports are gradually replaced by local supply, making “room” for new imports – in a similar manner to import substitution. Jacobs uses this model to stress the long-term undesirability of overspecialization derived from a focus on maximizing exports, and the importance of a large and diverse local economy – ultimately delivering a critique of comparative advantage as an organizing principle of trade.
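Jacobs offered no formal model, but the dynamic described above can be caricatured in a few lines. In this sketch (the numbers and the linear replacement rule are entirely our own illustrative assumptions), a city earns a fixed volume of imports, replaces a fraction of them with local production each period, and fills the freed capacity with new kinds of imports, so the range of locally produced goods keeps widening:

```python
# A deliberately crude caricature of Jacobs' import-replacement dynamic,
# not her own formalization (she offered none). All parameters are
# illustrative assumptions.

def simulate_import_replacement(periods=5, import_capacity=10,
                                initial_local_goods=5, replacement_rate=0.2):
    """Count the kinds of goods produced locally after each period."""
    local_goods = initial_local_goods
    history = []
    for _ in range(periods):
        # Some currently imported kinds of goods start being produced
        # locally...
        replaced = int(import_capacity * replacement_rate)
        local_goods += replaced
        # ...and the freed import capacity is filled with NEW kinds of
        # imports, so import volume stays constant while composition
        # changes, feeding further diversification next period.
        history.append(local_goods)
    return history
```

Running the sketch with its defaults yields a steadily diversifying local economy ([7, 9, 11, 13, 15] kinds of goods), whereas setting the replacement rate to zero leaves it frozen: a crude rendering of Jacobs' contrast between import-replacing cities and overspecialized export economies.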

The closing chapter of Cities and the Wealth of Nations provides a succinct version of two analogies which Jacobs uses to help conceptualize the dynamic features of a city economy. She likens the apparent order at work in cities in terms of “…biological evolution whose purpose, if any, we cannot see unless we are satisfied to think its purpose is us.” And further:

“[T]he more niches that are filled in a given natural ecology, other things being equal, the more efficiently it uses the energy it has at its disposal … That is another way of saying that economies producing diversely and amply for their own people and producers, as well as for others, are better off than specialized economies …”

The most elaborate study of Jacobs’ use of biological and ecological analogies is provided in mathematician and philosopher David Ellerman’s paper How Do We Grow? Jane Jacobs on Diversification and Specialization (2005). In this original treatment of Jacobs’ economic thought, Ellerman expands on Jacobs’ set of analogies to reframe her often overlooked city economy model. As he puts it, in the case of the ecological analogy:

“Organized energy comes free from the sun, but its trajectory within an ecology will depend on the complexity of the system. The two extremes could be taken as a desert and a rain forest. A rain forest and a desert at the same latitude would have about the same amount of solar energy arriving per unit area. In the case of the desert, it is essentially a sterile conduit; the energy comes in during the day and is dissipated at night. Little is captured; it is a throughput operation. The opposite is the case for the rain forest: much energy is captured through the photosynthesis of its plants.”

According to Ellerman, this would be translated in economic theory as follows: the flow of organized energy equates to money in the economic system, funds which have a multiplier effect on the local economy depending on a number of factors, before being dissipated. However, he notes that Jacobs equates organized energy with “embodied knowledge and know-how.” This difference illustrates why imports, not exports (which would matter most if organized energy equated to money in the analogy), are key in Jacobs’ system, in which “development is a conceptualized form of social learning.” Incoming goods, the products of foreign know-how, are vectors of developmental learning. And exports of commodities and services fund these imports. What separates economic rain forests from economic deserts in Jacobs’ system would be “the way imports are used” (or the path they follow), whether they are “primary, intermediate, final, or producer goods.” When imports feed into the somewhat enclaved (i.e. overspecialized) export economy, they have a lesser effect than when they are dissipated in local consumption.

Ellerman’s reframing diverges from the usual Jacobsian argument for local diversity by depicting the city economy’s boundaries as an open system governed by evolutionary dynamics. It may also open up the possibility of considering Jacobs’ writings as a contribution to evolutionary economics. For example, following Geoffrey Hodgson’s taxonomy in Economics and Evolution (1993), part of Jacobs’ system could be characterized as phylogenetic and non-consummatory, that is, as exhibiting an open-ended process of evolutionary selection among a population of firms and individuals.

But Jacobs’ economic thought does not consistently involve the application of evolutionary logic to the processes she attempts to uncover. Her analogies primarily serve to help us think of city economies as complex systems, within which she chooses to highlight specific properties.

Solving problems of “organized complexity”

The paradigm of “organized complexity” that she explicitly embraces in The Death and Life of Great American Cities appears also to run through her economic thought. Jacobs borrowed the concept from mathematician Warren Weaver; it appears early on in The Death and Life of Great American Cities to characterize the “kind of problem a city is,” which, in Weaver’s words, “[a]s contrasted with the disorganized situations with which statistics can cope, show the essential feature of organization”. The prevalence of this paradigm in her economic writings is probably what drew Friedrich Hayek’s attention to The Death and Life of Great American Cities. It has also been highlighted by author David Warsh in The Idea of Economic Complexity (1984).

Cover of Cosmos + Taxis issue dedicated to Jane Jacobs (2017)

More recently, in 2017, the publication Cosmos + Taxis dedicated an issue to Jane Jacobs. As Pierre Desrochers and Joanna Szurmak note, despite academic interest in Jacobs’ work being at an “all-time high,” her economic theory has yet to be studied from the perspective of the “paradigm which shaped her worldview.” I would suggest that the study of “feedback mechanisms” in Jacobs’ economic writings may be an effective first step in addressing this deficiency, while also highlighting her critical discussion of urban policies.

Positive feedback relationships are a central mechanism in the study of complex systems, as they help explain self-reinforcing behaviors and path dependency. Jacobs repeatedly identified feedback mechanisms while investigating how city economies evolve in a national setting, and drew wider conclusions about how to promote long-term city growth. Whether feedback is “accurate” or “faulty” in Jacobs’ terms depends for the most part on whether or not the feedback conveys correct information about the quantity of imports a city economy can “earn” from its local production. As a conceptual tool, Jacobs uses feedback mechanisms to expose leverage points in economic systems. Just as The Death and Life of Great American Cities was targeted at ongoing urban renewal projects, her economic writings often involved a critique of policies.

In Cities and the Wealth of Nations, she argues that the unity of nations as political units often rests on “faulty feedback to cities,” for instance national currencies which act as “powerful carriers of feedback information” on the demand for a city’s exports. She hypothesizes that national currencies, because they provide consolidated information on a nation’s international trade, provide feedback best tailored for the city which weighs the most in the nation’s economy. Such feedback accrues over time and helps explain what she calls the “elephant city-region pattern,” where a city gains a definitive edge over the others, which in time may require subsidies – subsidies returning their own faulty feedback to cities which cannot make the adjustments to expand from their own local activity. This grim outlook on the future of cities within nations prompted her to argue in favor of the independence of Quebec in The Question of Separatism: Quebec and the Struggle over Sovereignty (1980).

Likewise, Jacobs targeted development schemes promoted by the World Bank. She pointed to the inherent weaknesses of Robert McNamara’s development strategies for addressing the “basic human needs” (literacy, nutrition, reduction in infant mortality, and health) of poor populations. She argued that because economic development is a process, it cannot be thought of as a “collection of things” which can be bought or provided. The “basic human needs approach” ignored the necessity for solvent markets to support increased agricultural yields and the populations that were being displaced. As they could no longer rely on agricultural work to sustain themselves, displaced workers failed to find jobs in nearby city economies, where labor markets had not evolved alongside the increased agricultural yields through a succession of appropriate feedback mechanisms triggering the needed corrections. And she made the same argument against technology transfers in the “Green Revolution” of the 1960s and 1970s.

The mechanism of feedback relationships is one example among others of Jacobs’ usage of systemic concepts to draw boundaries around the city economy as a system and elaborate on its behavior. Further examination of Jacobs’ use of these concepts within the paradigm she adopted may reveal a consistent link between her analysis of cities as economic units and the policies she tended to critique. In short, future attempts at more comprehensive interpretations of Jacobs’ economic thought might benefit from stepping away from the urban focus of The Death and Life of Great American Cities while considering more carefully her later economic writings.

Cédric Philadelphe Divry is a graduate student at PHARE, University of Paris 1 Panthéon-Sorbonne. His work examines conceptions of cities in the history of economic thought, in particular during the mid-19th century urban renewal of Paris and in the economic writings of urban theorist Jane Jacobs (1916-2006).

Other posts from the blogged conference:

Cities and Space: Towards a History of ‘Urban Economics’, by Beatrice Cherrier & Anthony Rebours

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche

Economists in the City #2

Posted by Cléo Chassonnery-Zaïgouche

Cities and Space: Towards a History of ‘Urban Economics’

by Beatrice Cherrier & Anthony Rebours

A map for a hostile territory?

The field of ‘Urban Economics’ is an elusive object. That economic phenomena related to the city might need a distinctive form of analysis was something economists hardly thought about until the early 1960s. In the United States, it took a few simultaneous scholarly articles, a series of urban riots, and the attention of the largest American philanthropies to make this one of the hottest topics in economics. The hype about it was, however, short-lived enough that, by the 1980s, urban economics was considered a small, ‘peripheral’ field. It was only through absorption into a new framework for analyzing the location of economic activities – the ‘New Economic Geography’ – in the 1990s that it regained prominence.

Understanding the development of urban economics as a field, or at least the variant which originated in the US and later became international, presents a tricky task. This is because the institutional markers of an academic field are difficult to grasp. A joint society with real estate economists was established in 1964, and a standalone one in 2006; a journal was founded in 1974, with an inaugural editorial which stated that: “Urban economics is a diffuse subject, with more ambiguous boundaries than most specialties. The goal of this Journal is to increase rather than decrease that ambiguity;” a series of handbooks was shared with the neighboring field of regional economics; textbooks and courses about urban and geographical, urban and spatial, or urban and real estate economics were published; and programs that mixed urban economics with neighboring disciplines such as urban geography and urban planning emerged. Situated within a master-discipline (economics) that is often described as exhibiting an articulated identity, clear boundaries with other sciences and strict hierarchies, urban economics is an outlier.

There is, however, one stable and distinctive object that has been associated with the term ‘urban economics’ throughout the 1970s, the 1980s, the 2000s and the 2010s: the Alonso-Muth-Mills model (AMM). It represents a monocentric city where households make trade-offs between land, goods and services, and the commuting costs needed to access the workplace. The price of land decreases with distance from the city center. The model was articulated almost simultaneously in William Alonso’s dissertation, published in 1964, a 1967 article by Edwin S. Mills, and a book by Richard Muth published in 1969. This trilogy is often considered a “founding act” of urban economics.

Alonso (1964) and Muth (1969) are the most cited of all the works referenced in the Journal of Urban Economics, with Mills (1967) ranked at 9. If there is a coherent field of ‘urban economics’ to be studied, it makes sense to focus on these three publications in particular. To do so, we collected citations to each of these ‘AMM’ texts in all the journals indexed in the Web of Science database between 1965 and 2009. We then reconstructed a partial map of the field by representing, across five-year periods, the network of scholars who authored texts co-cited with one or several of these three ‘foundational’ texts. We interpret a citation to one of these three contributions as signaling a specific interest in the kind of work being done in the field of urban economics. By mapping the authors most co-cited alongside Alonso, Muth or Mills in successive time windows, we aim to reconstruct some sort of core urban economics community (without making claims about the entire scope or outer boundaries of the field). We have supplemented this rough map of the changing fate of AMM with individual and institutional archives, so as to delineate and flesh out the territory populated by urban economists. Below is a summary of the main trends that we identify.
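The counting step at the heart of such a co-citation analysis can be sketched in a few lines. This is our own illustrative reconstruction, not the authors' actual code, and the reference strings are hypothetical: for each citing paper indexed in the database, any reference appearing in the same reference list as one of the three AMM texts counts as co-cited with it.

```python
# Illustrative reconstruction of the co-citation counting step, not the
# authors' actual code. Reference strings are hypothetical placeholders.

from collections import Counter

AMM = {"Alonso 1964", "Muth 1969", "Mills 1967"}

def cocitation_counts(reference_lists):
    """For each citing paper's reference list, count every non-AMM
    reference that appears alongside at least one AMM text."""
    counts = Counter()
    for refs in reference_lists:
        refs = set(refs)
        if refs & AMM:              # the paper cites at least one AMM text
            for other in refs - AMM:
                counts[other] += 1  # 'other' is co-cited with AMM
    return counts
```

Restricting the input to papers published within one five-year window, and drawing an edge between authors weighted by these counts, would yield one snapshot of the network; repeating this across windows gives the changing map described above.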


In 1956, William Alonso moved from Harvard, where he had completed architecture and urban planning degrees, to the University of Pennsylvania, becoming Walter Isard’s first graduate student in the newly founded department of “regional science.” He applied a model of agricultural land use developed 150 years earlier by the German economist Johann von Thünen to a city where all employment is located in a Central Business District. His goal was to understand how the residential land market worked and how it could be improved. His resulting PhD, Location and Land Use, was completed in 1960. Around that time, the young Chicago housing economist Richard Muth spent a snowstorm lockdown thinking about how markets determine land values. The model he developed was later expanded to study population density, and a book based on it, Cities and Housing, was published a decade later. Drafts of Alonso’s and Muth’s work reached inventory specialist Edwin Mills in 1966, while he was working at the RAND Corporation and trying to turn models describing growth paths over time into a model explaining distance from an urban center. His “Aggregative Model of Resource Allocation in a Metropolitan Area” was published the next year.

As is clear from the network map below, this new set of models immediately drew attention from a wide array of transportation economists, engineers and geographers concerned with explaining the size and transformation of cities, why citizens chose to live in centers or suburbs, and how to develop an efficient transportation system. The economists included Raymond Vernon and Edgar Hoover, whose study of New York became the Anatomy of the Metropolis; RAND analyst Ira Lowry, who developed a famous spatial interaction model; spatial and transportation econometrician Martin Beckmann, based at Brown; and Harvard’s John Kain, who was then working on his spatial mismatch hypothesis and a simulation approach to model polycentric workplaces. Through the early works of Brian Berry and David Harvey, quantitative urban geographers also engaged with these new urban land use models.

Authors co-citation network 1970-1974. The colors result from a community detection algorithm applied to the whole network but, for readability, only those authors with 11 or more links to Alonso (1964), Mills (1967) and/or Muth (1969) are represented. The size of the nodes and links is proportional to the total number of co-citations. The 1970-1974 network represents the state of urban economics as expressed through citations by economists publishing at the time; thus, there might be a short time lag between the publication of new works and their incorporation by the rest of the profession.

But the development of a new generation of models relying on optimization behavior to explain urban location was by no means sufficient to engender a separate field of economics. Neither Alonso, who saw himself as contributing to an interdisciplinary regional science, nor Muth, involved in Chicago housing policy debates, cared much about its institutionalization. But both were influenced and funded by men who did. Muth acknowledged the influence of Lowdon Wingo, who had authored a land use model. Wingo, together with Harvey Perloff, a professor of social sciences at the University of Chicago, convinced the Washington-based think tank Resources for the Future to establish a “Committee on Urban Economics” with the help of a grant from the Ford Foundation. The decision was fueled by urbanization and by dissatisfaction with the urban renewal programs implemented in the 1950s. Their goal was to “develop a common analytical framework” by establishing graduate programs in urban economics, supporting dissertations, and coordinating the organization of workshops and the development of urban economics textbooks.

Their agenda was soon boosted by the publication of Jane Jacobs’ The Death and Life of Great American Cities, by growing policy interest in the problems of congestion, pollution, housing segregation and ghettoization, labor discrimination, slums, crime and local government bankruptcy, and by the stream of housing and transportation acts passed in response to these. The Watts riots, followed by the McCone and Kerner commissions, acted as an important catalyst. The Ford Foundation poured more than $20 million into urban chairs, programs and institutes through urban grants awarded to Columbia, Chicago, Harvard and MIT in 1967 and 1970. The first round of funds emphasized “the development of an analytical framework”, and the second sought “a direction for effective action.”

As a consequence of this massive investment, virtually every well-known US economist turned to urban topics, as shown by the several names of theorists and public or labor economists expanding the 1975-79 network below. At MIT, for instance, Ford’s money was used to set up a two-year “urban policy seminar,” which was attended by more than half of the department. The organizer was welfare theorist Jerome Rothenberg, who had just published a book on the evaluation of urban renewal policies. He was developing a large-scale econometric model of the Boston area with Robert Engle and John Harris, and putting together a reader with his radical colleague Matt Edel. Department chair Cary Brown and Peter Diamond were working on municipal finance. Robert Hall was studying public assistance while Paul Joskow examined urban fire and property insurance. Robert Solow developed a theoretical model of urban congestion, published in a 1972 special issue of the Swedish Journal of Economics alongside a model by taxation theorist Jim Mirrlees investigating the effect of commuter and housing state taxes on land use. Solow’s former student Avinash Dixit published an article modeling a tradeoff between city-center economies of scale and commuting congestion costs in another special issue on urban economics, in the Bell Journal, the next year. A survey of the field was also published in the Journal of Economic Literature, just before the foundation of the Journal of Urban Economics in 1974.

Authors co-citation network 1975-1979 (11 or more links), the size of the nodes and links being proportional to the total number of co-citations.


But the publication of a dedicated journal and growing awareness of the “New Urban Economics” were not the beginning of a breakthrough. They turned out to be the peak of this wave. On the demand side, the growing policy interest and financial support that had fueled this new body of work receded after the election of Richard Nixon and the reorientation of federal policies. On the supply side, the mix of questions, methods and conversations with neighboring scholars that had hitherto characterized urban economics was becoming an impediment. More generally, the 1970s were a period of consolidation for the economics profession. To be considered bona fide parts of the discipline, applied fields needed to reshape themselves around a theoretical core, usually a few general equilibrium micro-founded workhorse models. Some old fields (macro and public economics, for instance) as well as newer ones (health, education, household) developed such theoretical models. Others resisted, but could rely on separate funding streams and policy networks (development and agricultural economics). Urban economics was stuck.

Policy and business interest was directed toward topics like housing, public choice and transportation. Combined with the growing availability of new microdata, advances in micro-econometrics, and the subsequent spread of the personal computer, this resulted in an outpouring of applied research. Computable transportation models and real estate forecasting models were especially fashionable.

On the other hand, a theoretical unification was not in sight. Workhorse models of the price of amenities, the demand for housing, or suburban transportation were proposed by Sherwin Rosen, William Wheaton and Michelle White, among others. But explanations of the size, number, structure and growth of cities were now becoming contested. J. Vernon Henderson developed a general equilibrium theory of urban systems based on the trade-off between external economies and diseconomies of city size, but in it agglomeration effects did not rely on individual behavior. Isard’s former student Masahisa Fujita proposed a unified theory of urban land use and city size that combined externalities with the monopolistic competition framework pioneered by Dixit and Joseph Stiglitz, but without making his framework dynamic or relaxing the monocentric hypothesis. At a point when there was growing interest in the phenomenon of suburban business districts (or “Edge Cities,” as journalist Joel Garreau called them), this was considered a shortcoming by many economists. General equilibrium modelling was rejected by other contributors, including figures like Harry Richardson and a set of radical economists working with neo-Marxist ideas and moving closer to urban geographers such as David Harvey, Doreen Massey and Allen Scott.


In the 1990s, various trends aimed at explaining the number, size and evolution of cities matured and were confronted with one another. In work which he framed as contributing to the new field of “economic geography,” Krugman aimed to employ his core-periphery model to sustain a unified explanation for the agglomeration of economic activity in space. At Chicago, those economists who had spent most of the 1980s modeling how different types of externalities and increasing returns could help explain growth – among them Robert Lucas, José Scheinkman and his student Ed Glaeser – increasingly reflected on Jane Jacobs’ claim that cities exist because of the spillover of ideas across industries which they facilitate. Some of them found more empirical support for her claim than for the kind of within-industry knowledge spillovers Henderson was advocating.

Krugman soon worked with Fujita to build a model combining labour mobility with trade-offs between economies of scale at the plant level and transportation costs. He was adamant that their new framework be compared to Henderson’s general equilibrium model of systems of cities, claiming that it enabled the derivation of agglomeration from individual behavior and could explain not only city size and structure, but also location. In his review of Fujita and Krugman’s 1999 book with Venables, Glaeser praised the unification of urban, regional and international economics around the microfoundations of agglomeration theory. He contrasted Krugman’s emphasis upon transportation costs – which were then declining – with other frameworks focusing on the movement of people, and began to sketch out the research program focused on idea exchanges that he would develop in the next decades. He also insisted on the importance of working out empirically testable hypotheses.

The “New Economic Geography” was carried by a newly minted John Bates Clark medalist who had, from the outset, promised to lift regional, spatial and urban economics from their “peripheral” status through parsimonious, micro-founded, tractable and flexible models. It attracted a new generation of international scholars, for some of whom working on cities was a special case of contributing to spatial economics. In the process, however, older ties with geographers were severed, and questions closely associated with changing cities – the emergence of the digital age, congestion, inequalities in housing, segregation, the rise of crime and urban riots – became less central to the identity of the field, which lost some of its autonomy. Within our own maps, this can be seen from the contrast between the many disparate links which leading urban economists had to Alonso-Muth-Mills, and the distinct, interconnected (green) network in which figures like Fujita, Krugman, Henderson, Lucas and Glaeser are embedded.

Authors co-citation network 2005-2009 (15 or more links), the size of the nodes and links being proportional to the total number of co-citations.

Most recently, Glaeser’s insistence that urban models need to be judged by their empirical fit may again be transforming the identity of urban economics. The shift is already visible in the latest volume of the Handbook of Regional and Urban Economics series. Its editors (Gilles Duranton, Henderson and William Strange) explain that, while the previous volume (2004) was heavily focused on agglomeration theory, this one marks “a return to more traditional urban topics.” And the field is now characterised not in terms of a unified theoretical framework, but with reference to a shared empirical epistemology about how to develop causal inferences from spatial data. There is also growing evidence that students going on the US economics job market increasingly add “spatial economics” and/or “urban economics” to their field list.

Overall, the successive shifts in urban economists’ identity and autonomy which we describe here were sometimes prompted by external pressures (urban crises and policy responses) and sometimes by internal epistemological shifts about what counts as “good economic science.” A key development in the 1970s was the unification of applied fields around general equilibrium, micro-founded models. It is widely held that the profession is currently experiencing an “applied turn” or a “credibility revolution”, centered on the establishment of causal inference (gold) standards. How this will affect urban economics remains unclear.

Béatrice Cherrier is an associate professor at the Centre National de la Recherche Scientifique (CNRS). She documents the “applied turn” in the history of recent economics by chasing institutional artifacts like the JEL codes, researching the history of selected applied fields (urban, public, macro) and unpacking its gendered aspects.

Anthony Rebours is currently a graduate student at University Paris 8 and a young fellow of the Center for the History of Political Economy (CHOPE) at Duke University. His interests lie in the recent history of economics and its relations with geography, and in the use of sociological methods for quantitative history.

Other posts from the blogged conference:

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche

From Cities to Nations: Jane Jacobs’ Thinking about Economic Expansion by Cédric Philadelphe Divry

Economists in the City #1

Economists in the City: Reconsidering the History of Urban Policy Expertise

An introduction

When and why did the expertise associated with economics as an academic discipline become so highly valued in the world of public policy? 

We planned a workshop to explore this broad question through the more specific theme of policy-making in relation to cities, and the influence of agglomeration economics upon urban and government policy in countries like the US, France and the UK. Our aim was to examine, in particular, the increasing focus upon cities in the work of an important group of economists since the 1980s, and to explore some of the main lines of criticism of public policies that reflect the logic and value of agglomeration.

Detail from Booth’s Inquiry into the Life and Labour of the People in London: Map Description of London Poverty, 1898-9, West Central District. Public Domain.

We anticipated a rich conversation on these issues between historians of economics, economists, urban policy experts and social scientists. The embedding of agglomerationism within the thinking of policy-makers and governmental institutions provides a fascinating example of a broader shift towards the growing impact of economic expertise, and indeed of individual economists, on policy-making.

This focus sits within a wider field of study which is interested in the complex roles that economists have at times played – as public intellectuals, policy experts and academic specialists. How different kinds of analytical tools and a particular style of economic reasoning made their way into the world of elite decision-making is a major theme of interest for many historians and social scientists. So too is the related question of how quantification (testable theoretical hypotheses, measurement techniques and indicators, as well as decision-models) has over the last few decades gained ascendancy in policy circles.

As a result of the ongoing Covid-19 crisis, we have decided to convert this event into a blogged conference, publishing shortened online versions of a number of the papers that were due to be presented at the original event, and inviting comments and responses to them.

We will be publishing the first of these posts on Monday 18 May, and others will appear shortly afterwards. The conference will open with a contribution from Dr Béatrice Cherrier (CNRS, University of Cergy-Pontoise) and Anthony Rebours (University Paris 8), Cities and Space: Towards a History of ‘Urban Economics’, introducing readers to the pre-history of agglomeration economics, and offering reflections on how the field of urban economics in the US provided a crucible for its later development.

Our other contributors include, Professor Diane Coyle (University of Cambridge), Professor Ron Martin (University of Cambridge), Cedric Philadelphe Divry (University Paris 1 Panthéon-Sorbonne), Professor Denise Pumain (University Paris 1 Panthéon-Sorbonne) and Professor Philip McCann (University of Sheffield).

After we have published each of their contributions, we will invite other contributors to comment in response, and will offer our own reflections about some of the key debates and issues.

We would like to thank The Humanities and Social Change International Foundation for supporting the work of the ‘Expertise Under Pressure’ project at Cambridge, which hosts this particular project, as well as colleagues at CRASSH where the project is based, for their intellectual and logistical support for it.

Michael Kenny and Cléo Chassonnery-Zaïgouche

Cambridge, 15 May 2020.


PS: We chose as a visual a detail from a map made for Booth’s Inquiry into the Life and Labour of the People in London. It is part of this particular map: Map Description of London Poverty, 1898-9, West Central District, part of a larger enterprise to produce maps of London in which the levels of poverty and wealth are mapped out street by street.


Tackling the Problem of Online Hate Speech

06 May 2020

by Marcus Tomalin and Stefanie Ullmann

Source: shutterstock/asiandelight

In recent years, the automatic detection of online hate speech has become an active research topic in machine learning. This has been prompted by increasing anxieties about the prevalence of hate speech on social media, and the psychological and societal harms that offensive messages can cause. These anxieties have only increased in recent weeks as many countries have been in lockdown due to the Covid-19 pandemic (L1GHT Toxicity during Coronavirus report). António Guterres, the Secretary-General of the United Nations, has explicitly acknowledged that the ongoing crisis has caused a marked increase in hate speech.

Quarantining redirects control back into the hands of the user. No one should be at the mercy of someone else’s senseless hate and abuse, and quarantining protects users whilst managing the balancing act of ensuring free speech and avoiding censorship.

Stefanie Ullmann

Online hate speech presents particular problems, especially in modern liberal democracies, and dealing with it forces us to reflect carefully upon the tension between free speech (i.e., allowing people to say what they want) and protective censorship (i.e., safeguarding vulnerable groups from abusive or threatening language). Most social media sites have adopted self-imposed definitions, guidelines and policies for handling toxic messages, and human beings employed as content moderators determine whether or not certain posts are offensive and should be removed. However, this framework is unsustainable. For a start, offensive posts are only removed retrospectively, after the harm has already been caused. Further, there are far too many instances of hate speech for human moderators to assess them all. In addition, it is problematic that unelected corporations such as Facebook and Twitter should be the gate-keepers of free speech. Who are they to regulate our democracies by deciding what we can and can’t say?

Demonstration of app

Towards the end of 2019, two Cambridge-based researchers, Dr Marcus Tomalin and Dr Stefanie Ullmann, proposed a different approach. Their framework demonstrated how an automated hate speech detection system could be used to identify a message as being offensive before it was posted. The message would then be temporarily quarantined, and the intended recipient would receive a warning message, indicating the degree to which the quarantined message may be offensive. That person could then choose either to read the message, or else to prevent it appearing. This approach achieves an appropriate balance between libertarian and authoritarian tendencies: it allows people to write whatever they want, but recipients are also free to read only those messages they wish to read. Crucially, this framework obviates the need for corporations or national governments to make decisions about which messages are acceptable and which are not. As Dr Ullmann puts it, “quarantining redirects control back into the hands of the user. No one should be at the mercy of someone else’s senseless hate and abuse, and quarantining protects users whilst managing the balancing act of ensuring free speech and avoiding censorship.”
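The quarantining logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual system: the names (`deliver`, `Delivery`), the numeric threshold, and the hand-supplied toxicity score are all our own inventions; in the real framework the score would come from a trained hate speech detector.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Delivery:
    text: str            # what the recipient initially sees
    quarantined: bool
    warning: Optional[str] = None

def deliver(message: str, toxicity: float, threshold: float = 0.5) -> Delivery:
    """Quarantine rather than delete: above the threshold, the message is
    blurred and the recipient gets a warning indicating the estimated
    severity, with the choice to reveal the message or leave it hidden."""
    if toxicity >= threshold:
        return Delivery(
            text="■" * len(message),  # blurred until the user opts in
            quarantined=True,
            warning=f"Potentially offensive (estimated score {toxicity:.2f}). Tap to reveal.",
        )
    return Delivery(text=message, quarantined=False)

safe = deliver("Nice talk today!", toxicity=0.05)
held = deliver("some abusive message", toxicity=0.92)
```

Because delivery rather than publication is gated, the sender can still write whatever they want, while the recipient (not a platform or government) decides what to read; the threshold could in principle be tuned by each user.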


Nicholas Foong, a fourth-year student in the Engineering Department supervised by Dr Tomalin, has now developed both a state-of-the-art automatic hate speech detection system and an app that demonstrates how the system can be used to quarantine offensive messages by blurring them until the recipient actively chooses to read them. An Android version of the app is available, along with a short demo video of the app in action.

The state-of-the-art system is able to correctly identify up to 91% of offensive posts. In the app, it is used to automatically detect and quarantine hateful posts in a simulated social media feed in real time. The app demonstrates that the trained system can run locally on mobile phones, taking up just 9MB of space and requiring no internet connection to function.

Nicholas Foong, app developer, Department of Engineering Cambridge

Despite these promising developments, there is still a lot of work that needs to be done if the problem of online hate speech is going to be solved convincingly. The detection systems themselves need to be able to cope with different linguistic registers and styles (e.g., irony, satire), and the training data must be annotated accurately, to avoid introducing unwanted biases. In addition, since hate speech increasingly contains both words and images, the next generation of automated detection systems will need to handle multimodal input. Nonetheless, the quarantining framework offers an effective practical way of incorporating such technologies into our regular online interactions. And, as we adjust to life in lockdown, we can perhaps appreciate more than ever how quarantining can help to keep us all safe.

The SAGE we knew and the SAGE ‘everyone’ now knows and wants to scrutinise

by Hannah Baker

Public awareness of the Scientific Advisory Group for Emergencies (SAGE) – or, more specifically, of the scientists said to be guiding the Government’s COVID-19 decisions – is leading to increased scrutiny and calls for transparency.

Thinking back to January 2020, when we as UK residents were living our ‘normal’ everyday lives – going to pubs, travelling abroad, or even being within 2m of strangers, friends and family outside our household – very few people would have heard of Chris Whitty and Patrick Vallance. Both have now become ‘household names’ and even faces: Professor Whitty, England’s Chief Medical Officer, appears in TV advert breaks (shown in the video below) from the Government urging us to ‘Stay Home, Protect the NHS, Save Lives’, and both regularly take key roles in the BBC’s daily press briefing on COVID-19. In these briefings, the political figures, often situated at the middle podium, have frequently justified the decisions that have been made by saying that they have been guided by ‘the science’.

Professor Chris Whitty, England’s Chief Medical Officer, appearing in Government advert urging the public to stay at home

Many people will now know that there is a group of scientific advisors, with some knowing that they are called the Scientific Advisory Group for Emergencies (SAGE). As defined by the Government Office for Science, SAGE is ‘responsible for ensuring that timely and coordinated scientific advice is made available to decision makers to support UK cross-government decisions in the Cabinet Office Briefing Room (COBR)’.

SAGE has provided evidence for events since 2009 (and scientific evidence was also used in events preceding this such as the foot and mouth crisis). Events listed on SAGE’s website are: swine flu, volcanic ash emergency, Japan nuclear incident, winter flooding, Ebola outbreak, Nepal earthquake, Zika outbreak (precautionary), Toddbrook reservoir and of course, now COVID-19. 

The increase in media coverage, and therefore public awareness, of scientific advice and SAGE is undoubtedly because COVID-19 is an event affecting all UK residents’ lives (and of course lives throughout the world). Consequently, the situation described above in January 2020 seems like the distant past. This was spoken about by Sir Ian Boyd (former Chief Scientific Adviser at the Department for Environment, Food and Rural Affairs) in an interview with ITV News: “This is more than just an interesting thing going on, absolutely everybody in the country is affected by it… And therefore there’s a lot more interest in scrutiny in the underlying process.”

With this growing emphasis on SAGE in the media, we thought it would be useful to reflect on some of the questions Dr Emily So (Co-Investigator on the Expertise Under Pressure project) had after being called upon as an expert herself during the 2015 Nepal earthquake – questions posed in our previous blog post summarising our ‘Disaster Response | Knowledge Domains and Information Flows’ workshop in February this year.

What is the process of turning this information into decisions and action?

The job of SAGE is to pool together scientific expertise to answer questions posed by COBR. It is at COBR that the scientific evidence is considered, as well as other advice including evidence from the ‘economic, security, administrative and political spheres’. An analysis of the minutes of previous SAGE events shows that the number of meetings, the time frame and the number of people involved vary from event to event. These figures are summarised in the first table below. For the Nepal earthquake there was only one meeting, whilst for swine flu, a previous pandemic, there were 22 meetings spanning May 2009 to November 2010. The number of people attending the meetings also varied, and those involved differed depending on the expertise that was required at the time. On May 4th, a ‘list of participants of SAGE and related sub-groups’ for COVID-19 was released (a breakdown of the number of participants in each group is provided in the second table).

A letter (dated 4 April 2020) from Sir Patrick Vallance indicated that the frequency and timing of meetings was driven by current events and that “from 1 January to 31 March 2020 SAGE meetings were held on: • January: 28 • February: 3, 4, 6, 11, 13, 18, 20, 25, 27 • March: 3, 5, 10, 13, 16, 18, 23, 26, 29, 31. In addition, a precautionary SAGE meeting was held on 22 January 2020 to discuss scientific questions that were raised by COVID-19.” During April, reports suggest SAGE met twice a week.

Data source:

A critique within the media concerns whether there is enough breadth of expertise in the COVID-19 response, with some feeling there is an over-reliance on modelling the pandemic and a lack of public health experts. Another Guardian article suggests the Government has sent out requests to universities to expand the pool following this criticism. Putting that aside, as can be seen from the number of attendees and members of each group, many voices are put forward. However, these voices are not always in agreement, and SAGE meetings can include ‘heated and prolonged’ discussions.

A frustration that is becoming increasingly apparent in the media concerns what ‘the science’ the politicians refer to actually is, and how decisions are influenced by other considerations. In the Centre for Science and Policy’s (CSaP’s) podcast on ‘Science, Policy and Pandemics’ (episode 2), David Spiegelhalter says that we need to be clear that decisions are made by politicians taking into account the scientific advice. To give another of many examples, when asked about his thoughts on the exit strategy during a BBC interview with Victoria Derbyshire (20/04/2020), former Prime Minister Tony Blair highlighted how the easing of the lockdown will be based on scientific and medical advice, but that in the end it is a political judgement.

The reason this overlap between science and politics is causing frustration is that SAGE is only one strand of information feeding into COBR. In an interview with Times Higher Education, Professor James Wilsdon, who works on research policy at the University of Sheffield, is quoted as saying: “It is problematic if political choices are being made and then the science advice system has to front them up. There needs to be a clearer sense of where science advice ends and political judgement begins − and at times that has been quite blurred”.

Concerns about this blurring of boundaries between the scientific and political spheres were exacerbated further following a Guardian article revealing that Dominic Cummings and Ben Warner, both political figures, had attended SAGE meetings. In defence of this, a Downing Street advisor said that they were not members of SAGE and would only contribute if issues about Whitehall were raised. However, their involvement has led to a flurry of reports questioning the independence of SAGE from politics, one example being Bloomberg’s allegation that Cummings was more than a bystander.

Chris Tyler, Associate Professor in Science Policy and Knowledge Infrastructure at UCL, discusses this further, saying that it may be acceptable to let Cummings witness the debate, as there are huge areas of uncertainty which need to be understood. However, if, as the Guardian article suggests, he was able to ask questions, this may risk politicising the scientific evidence before it goes up to COBR.

Whether or not Cummings did have a role in the SAGE meetings, the coverage of this in the media has clearly led to even more scrutiny of SAGE, and to pressure for transparency about where the boundary between science and politics lies in the process of making decisions.

What happens to the gathered advice after it has been given or the event has taken place?

SAGE has been accused of keeping their advice behind closed doors, with the New York Times describing the operations as being within a ‘virtual black box’. I have briefly touched on the growing calls for transparency due to the increased public awareness and media interest in SAGE as well as the ongoing rhetoric by the politicians that decisions are following the science. Consequently, headlines such as ‘Case for transparency over SAGE has never been clearer’ have been circulated in the last few weeks.

As the pandemic has progressed, the reasons for the calls for transparency have varied. In the initial stages, members of the public wanted to know why the UK was not in lockdown when other European countries were, and what the science that was continuously being referred to actually was. This led to the release of the scientific evidence on 16 March. This release, although criticised by some as coming too late, was welcomed by several members of the scientific community, as it was thought to be vital for building trust with the public.

However, the calls for transparency continued. The requests then became about understanding what the lockdown exit strategy would be and who the experts actually are. In response to a request for these names by Rt Hon Greg Clark MP, Sir Patrick Vallance stated: ‘The decision to not disclose SAGE membership for the time being is based upon advice from the Centre for the Protection of National Infrastructure and is in line with the standard procedure for COBR meetings, to which SAGE gives advice. This contributes towards safeguarding individual members personal security and protects them from lobbying and other forms of unwanted influence which may hinder their ability to give impartial advice. Of course, we do not stop individuals from revealing that they have attended SAGE.’ (4 April 2020)

Since that response, the revelation that political figures had sat in on SAGE meetings increased pressure from the media to identify the experts. Reports, including one from the New Scientist (27 April), indicated that Patrick Vallance had said the list of names would be released after the experts had been given the option to opt out of being identified. The list was released on 4 May, followed by an additional release of evidence on 5 May.

Some calls for transparency go even further, with requests to release the minutes of SAGE meetings in order to understand why decisions were actually made. Sir David King, a former Chief Scientific Advisor, has also formed an ‘Independent SAGE’, with its first meeting (4 May) taking place via livestream on YouTube; the group has been called ‘a rival panel of experts’ in the media.

Transparency is not a new concept, and it is commonly referred to in the disaster management literature. Even in reference to SAGE, previous reports (pre-COVID-19) have called for a more transparent process. Transparency is sought to develop trust with the public, as well as to cast the net of expertise further by giving other scientists and the wider research community the opportunity to scrutinise the evidence before decisions are made; external scrutiny can help to avoid groupthink and identify blind spots. A counter-argument is that the decisions are being made in a time-constrained environment: is there time to hear all these voices? Furthermore, there are concerns about how to ensure that those voices are from those with the relevant expertise: “The world wants to know what the science is behind the decisions, but there is great danger of misinformation when media interest is amplifying the voices of scientists, but not necessarily those most qualified to comment.” (Gog, J. 2020)

It is abundantly clear from the newspaper headlines that there has been a growing critique of SAGE as ‘secret’, with decisions happening behind closed doors. Consequently, there have been calls for the evidence to be put in the public sphere, both to allow for scrutiny and to build trust with the public. This pressure from the media and public appears to have led to the release of the evidence and the experts’ names, with the Government Office for Science recognising that “In fast moving situations, transparency should be at the heart of what the government does”.

What if the experts are wrong?

It is far too soon to know what was the right or wrong action to take, and perhaps we will never know, as there are so many parameters to consider. When looking at this question, however, there are a few things that I want to discuss: uncertainty, consensus and blame, which are not mutually exclusive.

After her own involvement in the SAGE meetings for the Nepal earthquake, Dr So posed the question ‘What if the experts are wrong?’, knowing that there is uncertainty in the models that make casualty loss predictions for earthquakes. Obviously there is a great deal of uncertainty in this pandemic, be it in the spread of infection, death rates or the impact of interventions. To avoid being ‘wrong’, this uncertainty needs to be communicated to both the decision-makers and the wider public.

I have already referred to the heated discussions within SAGE. There have been reports, including one from BuzzFeed, that there was no consensus as to when the lockdown and social distancing should be implemented: some scientists argued that it needed to be introduced immediately to halt the spread of the virus and “pleaded with the government to change tack or face dire consequences”, whilst others felt that introducing social distancing measures at that point in time would be unsustainable and lead to a second wave of infection. In the same article Vallance is reported to have said, “If you think SAGE is a cosy consensus of agreeing, you’re very wrong indeed”. What is key is that these areas of agreement and disagreement (as well as the uncertainty) are passed up to COBR in the evidence being presented.

The most concerning aspect of whether experts are right or wrong is the ‘blame game’, as accusations have emerged that Boris Johnson’s team are using the scientists as ‘human shields’. We are now seeing experts expressing concern with the politicians’ language, which the public can misinterpret as meaning that decisions have been made by the scientists. In his interview with ITV, Sir Ian Boyd states that he always told ministers it is dangerous to say ‘I will follow the science’, as ‘essentially what they are doing is shifting the decision making role from them to the scientific advisors. And it would be better if they said “I will be strongly advised by the science” or something like that’. Another issue with this phrase is that it does not always acknowledge that, in reality, the science of this crisis has been “riddled with doubt, uncertainty, and debate” (Professor Robert Dingwall, a member of the New and Emerging Respiratory Virus Threats Advisory Group).

Many feel that a judicial review is inevitable following COVID-19. To avoid a blame game over who was right or wrong, it is important that coverage of SAGE and the evidence it presents acknowledges the uncertainty, and the fact that there is not always consensus. A previous post by EuP Research Associate Federico Brandmayr says more about these issues of responsibility by considering what the COVID-19 response could learn from the L’Aquila earthquake.

It seems clear that an increase in public awareness of scientific advice has led to increased scrutiny of SAGE and calls for transparency. Reflecting on the questions posed in our previous workshop has inevitably led to even more questions about the use of scientific expertise in emergency response situations, and I will conclude by highlighting three of them:

  • Decisions go beyond science. It needs to be clear where the boundary between the scientific, economic and political spheres lies, rather than repeatedly saying that decisions are following the science. Where is this line drawn? Is there even a line?
  • SAGE has received criticism for being secretive, which has led to numerous calls for transparency about both the scientific evidence and the names of the experts. This is considered important by many to develop and maintain trust with the public, and to allow scrutiny of the evidence by a ‘larger net of experts’. The delay in releasing the names was due to concerns that the experts might be put under undue influence. Perhaps a question for ‘next time’ is: how do we create an appropriate environment to allow for the open interrogation of evidence?
  • To avoid experts ‘being wrong’, the communication of uncertainty is vital, as is recognising that there might not actually be a right or wrong answer based on the evidence that experts have available at the time. In such an uncertain, time-pressured environment, disagreement is inevitable. These areas of consensus and disagreement need to be passed up to COBR and also communicated to the public for a truly transparent process. Has uncertainty been clearly communicated in the UK’s and the worldwide response to COVID-19? If not, what should have been done differently?

Throughout the next few months, the EuP team will continue to reflect upon questions which COVID-19 has raised for our project through our series of blog posts with the overarching aim being to develop our understanding of the role of experts in bringing about social change.

Text by Hannah Baker (published 5/05/2020)

Chris Whitty image source (thumbnail): Unknown photographer / OGL 3


Cultures of expertise and politics of behavioral science: A conversation with Erik Angner

Erik Angner
Photo credit: Niklas Björling/ Stockholm University

As part of our new series on expertise and COVID-19, Mike Kenny and Anna Alexandrova interview Professor Erik Angner of Stockholm University. Erik is a philosopher and an economist writing on behavioral economics, economists as experts, measurement of happiness and wellbeing, Hayek, and the nature of preferences among other topics. Recently he has commented on the need for epistemic humility and the uniqueness of the Swedish response to the pandemic. In the podcast we discuss cultures of expertise, contestation, politics of behavioral science, and the relation of all three to the current crisis:

Listen to the segment on comparative cultures of expertise in Sweden and the UK, starting from 1:54

Responsibilisation of experts and disagreement between them, starting from 8:00

Who gets included in powerful expert groups, who gets sidelined and epidemiology as the current queen of sciences, starting from 16:11

Trust in epistocracies and its fragility, starting from 23:30

Value judgments in expert advice and how to make them responsibly, starting from 26:10

Discomfort of uncertainty, starting from 31:50

Behavioral science, nudge politics, absence of social science in all this, starting from 41:45

Text by Anna Alexandrova (30/04/2020)


A disaster researcher’s views on knowledge domains and information flows

Dr Emma Doyle

Dr Emma Hudson-Doyle is based at the Joint Centre for Disaster Research at Massey University/GNS Science, Wellington, New Zealand. Her interests lie at the interface between physical science and critical decision makers, with a primary focus on the communication of science advice during natural hazard events. Her current research focuses on the communication of forecast and model uncertainty. Other research areas and supervision topics include community resilience, social media in disasters, citizen science, aftershock forecasts, early warnings, motivations to prepare, communicating probabilities, low-likelihood risk and visual uncertainty, and exploring the use of science advice through table-top emergency management exercises.

We contacted Emma before our ‘Disaster Response | Knowledge Domains and Information Flows‘ workshop for her viewpoints on the questions that were being addressed throughout the day and to act as a foundation for conversation in the focus group discussions. We would like to take this opportunity to thank Emma for giving up her time to provide us with her answers, which can be seen below.

What type of knowledge is and should be used?

This depends upon the problems and issues you are trying to address. Adopting a problem/solution focus – rather than a ‘knowledge’ focus, and thinking about the needs and concerns of the communities, organisations, and others affected by a disaster will lead you to understand which knowledge is needed. We must move from a ‘deficit’ model of communication and knowledge sharing where we assume we know what others need, towards a two-way partnership model. Accordingly, scientific knowledge may then work in partnership with (for example) indigenous knowledge, complementing each other to address the issue at hand.

What constitutes an expert?

Similar to above, there are many different experts. We need to move away from a generic label of “the expert” to more specific labels “the landslide expert”, “the psychology expert” etc. This respects more equally the different disciplines and epistemologies at the table, as it recognises each has expertise. Thus, with this in mind, I envision an expert as someone with extensive experience and training in a field or topic, able to make informed judgments by drawing on the evidence available. This training may not be formal/university, and thus an expert can include local community experts, etc.

How is and should uncertainty be factored into decisions and communicated?

There are a range of different academic views on this. On the one hand, people advocate for not communicating the uncertainty at all, as it can cause people to mistrust or deny the message, view the communicator as incompetent, allow people to interpret the information towards one end-member state of uncertainty, or even cause harm if the decision maker does not take a safety action because of uncertainty. On the other hand, people advocate for communicating all uncertainties, as it is ethically and morally the correct action, the communicator is more honest and trustworthy, it enables decision makers to plan alternative courses of action, and it is the true state of the knowledge.

I think there is a happy medium somewhere in the middle: if we were to communicate all the uncertainties it would be overwhelming during short-time, high-pressure decision-making situations. However, it is important to communicate them to enable decision makers to make the best decisions with alternative actions. To assess what to communicate, we must thus first identify the decision-makers’ needs – through relationship-building activities – so that we understand which uncertainties are relevant to them and their decisions, and which are not. Ideally this should be identified in pre-event planning, so that during an event scientists and others provide a ‘targeted’ supply of information to meet the decision-makers’ needs, communicating only the decision-relevant uncertainties; communicating all uncertainties could overwhelm a decision-maker with information (causing cognitive overload). Please see the attached for more discussion of this…

What happens to, and should happen to, knowledge after it is produced and the event has taken place?

This is a tricky one: it depends whose knowledge it is. If it is publicly funded, then it should be publicly available to aid other communities, and future resilience building. If it is privately funded, one would hope it could be publicly available, but there may be company rights that mean it can not be. If it is indigenous knowledge, then (depending on the customs of the indigenous people of the region), it is for the indigenous owners of that knowledge to decide. 

Thank you again to Emma for these responses.

Responses received on the 6th February 2020.


Disaster Response | Knowledge Domains and Information Flows

An Expertise Under Pressure Workshop

11 February 2020

Cripps Court, Magdalene College, Cambridge

Organised by Hannah Baker, Rob Doubleday and Emily So

Cripps Court, Magdalene College

The ‘Disaster Response | Knowledge Domains and Information Flows’ workshop on the 11 February 2020 formed part of the Expertise Under Pressure Project (EuP), specifically the Rapid Decisions Under Risk case study.

The aim of this event was to explore the different knowledge domains and information flows in the context of disaster response situations, such as the immediate aftermath of an earthquake or volcano, or the ongoing response to the coronavirus, which has since become the global pandemic known as COVID-19. A reflection in light of this is provided at the end of this blog following a description of the workshop’s proceedings. Attendees were from a range of disciplines, including representatives from the Centre for Science and Policy (CSaP) who have written their own summary of the day.

Workshop questions:

In the context of disaster response:

  1. What type of knowledge is and should be used?
  2. What constitutes an expert?
  3. How is and should uncertainty be factored into decisions and communicated?
  4. What happens to, and should happen to, knowledge after it is produced and the event has taken place?

Speaker Session 1

Hannah Baker


Hannah Baker is a Research Associate at the Centre for Research in the Arts, Social Sciences, and Humanities (CRASSH) at the University of Cambridge. She opened the day by providing an overview of the EuP project. The relevance of the topic was conveyed through a display of screenshots of multiple newspaper headlines referring to the use of experts in dealing with the coronavirus outbreak, at the time (February 2020) centred on Wuhan, China. The headlines were also used to highlight that these experts are not always in agreement with one another, one example being predictions of when the peak of the infection would occur.

A theoretical context for disaster management arguing that there are no ‘natural disasters’ was then provided. There are natural events, such as a volcanic eruption, but these turn into disasters due to social factors that increase the vulnerability of a population. Within the disaster management cycle there are four stages: prevention and mitigation, preparedness, response, and rehabilitation and recovery. Although this workshop focused on the response, the other stages are not mutually exclusive. In the response stage, the decision-making environment is uncertain, under time-pressure and can result in high impacts (Doyle, 2012).

The reasons for referring to 1) ‘Knowledge Domains’ and 2) ‘Information Flows’ in the workshop’s title were then outlined. On the first, disaster research regularly discusses the use of scientific expertise in decision-making; however, it is also recognised that information can come from elsewhere. For example, Hannah displayed a newspaper headline referring to the use of ‘indigenous expertise’ in combating the recent Australian bushfires.

In the disaster management literature there is also an emphasis on the need to create networks before an event takes place, to establish trust and facilitate the flow of information when that event happens. The concept of information flows can be extended further, as knowledge is communicated not only to and between decision-makers but also to the wider public. An issue here is ‘fake news’: as the Director-General of the World Health Organisation put it in response to misinformation about the coronavirus, now is the time for “facts, not rumours”.

Emily So

Is knowledge driving advice or vice versa in the field of natural disaster management?

Emily So is the project lead for the ‘Rapid Decisions Under Risk’ case study, a Reader in the Department of Architecture at the University of Cambridge and a chartered civil engineer. Following the 2015 Nepal Earthquake, Emily was invited by the UK’s Scientific Advisory Group for Emergencies (SAGE) to contribute her expertise on earthquake casualty modelling and loss estimations. SAGE provides scientific and technical advice to decision-makers during emergencies in the Cabinet Office Briefing Room (COBR).

Although casualty models take into account structural vulnerability, seismic hazards and social resilience, Emily highlighted that the interpretation of these factors and their use in loss estimations are often based on knowledge and experience. She emphasised that those making these interpretations will not always have experience of the country in which the earthquake has occurred.

Emily’s participation in SAGE led her to question 1) What happens to the gathered advice? 2) What is the process of turning this information into decisions and actions? 3) What if we are wrong? These questions formed the origins of the EuP case study and use of experts in disaster response situations.

Amy Donovan

Thinking holistically about risk and uncertainty

Amy Donovan is a multi-disciplinary geographer, volcanologist and lecturer at the University of Cambridge. During the workshop she presented arguments from her recent paper, ‘Critical Volcanology? Thinking holistically about risk and uncertainty’. She reiterated that there are no natural disasters and then moved on to question what creates good knowledge, emphasising that risk in itself is incomplete. For instance, in her paper she states:

 ‘the challenge of volcanic crises increasingly tends to drag scientists beyond Popperian science into subjective probability’ – Donovan (2019, p.20)

Historically, the physical sciences have been better accepted as they can be modelled, whilst the social sciences are difficult to measure as they are people studying people and therefore subjective. However, Amy argued that risk is a social construction in itself and that datasets can be interpreted in different ways due to people’s experiences working in different locations around the world. She also affirmed that the social sciences need to be brought in at the start of the decision-making process, rather than at the end (which is commonly the case now and often only for communication purposes). A dialogue needs to be happening before a disaster even happens.

Amy also discussed the impact on the people consulted as experts in disaster response situations, who can themselves be affected because the advice they give can affect others’ lives. This is why the transfer of knowledge is important: it is often difficult for scientists to control which parts of their knowledge are taken forward and how they are communicated.

Speaker Session 2

Robert Evans

Nature and use of scientific expertise

Robert Evans is a Professor in Sociology at the Cardiff University School of Social Sciences, specialising in science and technology studies. His presentation built upon previous work on the ‘Third Wave of Science Studies: Studies of Expertise and Experience’. Two key concepts within that paper are the notions of contributory and interactional expertise. The former describes the accomplished practitioner who can perform practical tasks; the latter is a newer idea based on linguistic socialisation, whereby the expert is able to communicate and speak fluently about practical tasks.

As no one can be an expert in everything, in their paper Collins and Evans state:

‘The job, as we have indicated, is to start to think about how different kinds of expertise should be combined to make decisions in different kinds of science and in different kinds of cultural enterprise’ – Collins & Evans (2002, p.271)

Robert also spoke about legitimacy and extension: legitimacy can increase as more voices are included, yet he posed the question of whether the quality of technical advice decreases if ‘non-expert’ inputs are given too much weight. This ties in with the concept of robust evidence. Although Robert spoke before COVID-19 became a global pandemic, his questions about how we handle controversial advice, and his point that scientific experts will not always reach a consensus, are now as relevant as ever.

Dorothea Hilhorst

Social Domains of disaster knowledge and action

Dorothea Hilhorst is a Professor of Humanitarian Aid & Reconstruction at the International Institute of Social Studies (ISS) of Erasmus University Rotterdam. Dorothea’s paper, published in 2003, ‘Responding to Disasters: Diversity of Bureaucrats, Technocrats and Local People’, led us to think about the use of different knowledge domains in disaster response situations. Dorothea reflected upon this by linking the domains of knowledge to power, and on how we see a disaster as alliances between the domains, such as science, political authorities, civil society and community groups. Within her paper, she states:

‘Instead of assuming that scientific knowledge is superior to local knowledge, or the other way around, a more open and critical eye needs to be cast on each approach… disaster responses come about through the interaction of science, governance and local practices and they are defined and defended in relation to one another’ – Hilhorst (2003, p.51)

Like Amy Donovan, Dorothea emphasised that there are ‘no natural disasters’ by referring to the change in disaster paradigms through time and by speaking about the concept of Disaster Risk Creation (DRC). This shift went from a focus on behavioural studies in the 1950s, to the entry of vulnerability and then community into the paradigm in the 1980s, to a focus on climate change in the 1990s, and then to the concept of resilience in the 2000s. Thea noted that her own work focuses on situations where disasters and conflict happen at the same time, in which governance is even more complex.

Speaker Session 3

Ben Taylor


Disasters, Evidence and experts: A case Study from Evidence Aid

Benjamin Heaven Taylor is the Chief Executive Officer of Evidence Aid, an international NGO that works to enable evidence-based decision-making in humanitarian settings. Ben opened the discussion by describing the humanitarian ecosystem, which includes (but is not limited to) the UN, international NGOs and research bodies, as well as local civil society, the private sector and individuals. He showed a pyramid reflecting the hierarchy of evidence. Due to the time constraints of a disaster response situation, expert evidence is frequently used, but there is often a weak research-evidence base, meaning that there is little basis for challenging experts’ views. The research-evidence base is often weak because it is inaccessible or ‘patchy’, and because of political barriers. However, Ben emphasised that the use of experts isn’t necessarily a bad thing and that…

‘When used properly experts can be a vital mediator between evidence (which can be a blunt instrument) and practice. But experts (including scientists) can be influenced by bias, just like anyone’ – Taylor (2020), workshop presentation

The presentation concluded by referring to Evidence Aid’s theory of change with the overarching idea being that before, during and after disasters, the best available evidence is used to design interventions, strategies and policies to assist those affected or at risk.

Focus Group Discussions

As part of the day, we had three separate focus group discussions. The facilitator for each group opened with some thoughts provided by Emma Doyle, a Senior Lecturer at the Joint Centre for Disaster Research at Massey University, New Zealand (Emma’s answers are provided in a separate blog post). Each group then built upon these initial thoughts and discussed the question. Summaries of discussion topics are provided below.

In the context of disaster response…

What type of knowledge is and should be used?

Initially the conversation separated knowledge domains into two streams – science and indigenous knowledge. However, this separation was critiqued and considered a reductionist way of thinking. Although it was acknowledged that it is important to be clear where knowledge has come from and the conditions under which it was created, it was suggested that it may be more useful to think about knowledge as a network of clusters that may or may not be talking to one another.

Disaster response is an integrated problem and in a time constrained environment, it’s someone’s job to bring this separate and sometimes conflicting information together. As part of this role, the framing of the initial questions is vital in determining what knowledge is collected. A key issue with the collection of knowledge is credibility and the need to demonstrate trustworthiness. For one engineer in the group, model makers often do not have the ‘luxury’ of choosing data, and if they do, the determination of reliability is often subjective and determined by expert judgment.

How is and should uncertainty be factored into decisions and communicated?

The way uncertainty is communicated affects the confidence that the public have in decision-makers and, consequently, the level of trust. The questions of how much uncertainty to communicate, and how, were raised. For example, whether uncertainty is presented as a number or through graphics may depend on the type of event and the cultural context. Perhaps there is also a balance to be struck between communicating the full range of possibilities, for transparency, and not supplying too much information, which can cause cognitive overload.

Examples were given of model makers who are keen to communicate all uncertainties rather than make the decision themselves. Another example is that once a decision is made, if an immediate response is required, people ‘on the ground’ may simply prefer to be told what to do and given instructions. A potential way to balance the communication of uncertainty is a layered approach, which is sometimes used in healthcare: highlighting the information that needs to be known, but then allowing access to more detailed information if a patient wants to see the same level of detail as their clinician. However, it was recognised that in a time-pressured situation such as disaster response, this will be more difficult to formulate. Fundamentally, the question of communicating uncertainty was described as a moral and ethical judgment.

What happens to, and should happen to, knowledge after it is produced and the event has taken place?

This focus group began by discussing the initial collection of knowledge and how this is often based on visibility of, or access to, data or individuals. In some cases, ‘experts’ might be selected because of the institution they are from or their willingness to interact with the media, but this may not make them the most appropriate person to answer the questions at hand. In any case, wherever the knowledge has come from, transparency is key, and the group felt that the general public can act out of panic if they do not feel informed. If the release of information to the public is staggered, this can lead to a loss of empowerment. However, this must be balanced against communicating only what is necessary. In many cases, scientific experts should not be expected to communicate directly with the public; often this requires a mediator. If there is not a clear and hard line from the government, fake news and rumours are likely to be a major issue.

Reflections in light of COVID-19 being declared a global pandemic

Clearly the topics discussed in the workshop are highly relevant to the ongoing COVID-19 pandemic. In the UK, COVID-19 has been declared a national emergency, and at the time of writing I am social distancing under the new, strict governmental measures and working from home. COVID-19 is relevant to all the questions posed in our workshop, and to the content of the presentations and focus groups. I will now draw some links with the talks given by each guest speaker, while recognising that there are many more!

Amy talked about the transfer of knowledge and how that can then be out of the expert’s control once imparted. Due to the popularity of social media, there have been widespread issues of miscommunication with platforms such as Twitter trying to direct people towards official information sources and the Government now hosting a daily press briefing.

Robert questioned how we handle controversial advice, noting that scientific experts are unlikely to reach consensus and that the final say rests with political actors. Repeatedly, we have heard Boris Johnson and other political actors saying that the decisions are being driven by the science. An important point to make here, which was raised by Professor David Spiegelhalter in the Centre for Science and Policy’s (CSaP) ‘Science, Policy & Pandemics’ Podcast (Episode 2: Communicating Evidence and Uncertainty), is that SAGE is not the decision-maker; it provides the evidence to inform decisions made by politicians. After calls for transparency, SAGE released the evidence guiding decisions and identified the core expert groups being consulted. As far as we are aware, this has not happened in such a short time frame for other events on which SAGE has provided advice, but perhaps this is because COVID-19 is affecting us all rather than a specific geographic location.

One point in Thea’s presentation that stands out is that information/evidence sometimes needs to be simplified for people to understand and act upon. I would be surprised if people in the UK had not now heard the line ‘Stay at home, protect the NHS, save lives’, which is also on the front page of a booklet circulated nationwide summarising the action that needs to be taken by individuals and includes illustrations on the correct way to wash hands (see figure below).

Over the past few weeks Evidence Aid has been preparing collections of relevant evidence for COVID-19. Their aim has been to provide the best available evidence to help with the response, supporting Ben’s proposal that there needs to be research-evidence based decision-making in disaster response situations.

There is clearly a lot of uncertainty about COVID-19 and, as the situation is changing day by day, it is impossible to say what the right or wrong approach is; approaches have also differed from country to country. One of the aims of our project is now to establish what evidence and types of expert different countries have relied upon and why the interventions have differed.

Members of the EuP team have also started a blog with opinion pieces about the pandemic including: ‘Are the experts responsible for bad disaster response?‘ and ‘Reading Elizabeth Anderson in the time of COVID-19’.

Text by Hannah Baker (published 23/04/2020)

Photographs within text by Hannah Baker, Cléo Chassonnery-Zaïgouche & Judith Weik

Thumbnail image by JohannHelgason/


Are the experts responsible for bad disaster response?

A few lessons for the coronavirus outbreak from L’Aquila

~ ~ ~

A few weeks ago, a Facebook group called 3e32 and based in the Italian city of L’Aquila posted a message stating: “whether it is a virus or lack of prevention, science should always protect its independence from the power of those who guarantee the interests of the few at the expense of the many”. The statement was followed by a picture of a rally, showing people marching and carrying a banner which read: “POWER DICTATES, ‘SCIENCE’ OBEYS, JUSTICE ABSOLVES”.

What was that all about? “3e32” refers to the moment in which a deadly earthquake struck L’Aquila on April 6th 2009 (at 3:32 in the morning). It is now the name of a collective founded shortly after the disaster. The picture was taken on November 13th 2014: a few days earlier, a court of appeals had acquitted six earth scientists of charges of negligence and manslaughter, for which they had previously been sentenced to six years in prison.

Even today, many people believe that scientists were prosecuted and convicted in L’Aquila “for failing to predict an earthquake”, as a commentator put it in 2012. If this were the case, it would be shocking indeed: earthquake prediction is seen by most seismologists as a hopeless endeavour (to the point that there is a stigma associated with it in the community), and the probabilistic concept of forecast is preferred instead. But, in fact, things are more complicated, as I and others have shown. What prosecutors and plaintiffs claimed was that in a city that had been rattled for months by tremors, where cracks had started to appear on many buildings, where people were frightened and some had started to sleep in their cars, a group of scientists had come to L’Aquila to say that there was no danger and that a strong earthquake was highly unlikely. Prosecutors attributed to the group of experts, some of whom were part of an official body called the National Commission for the Forecast and Prevention of Major Risks (CMR), a negative prediction; in other terms, they claimed that the experts had inferred “evidence of absence” from “absence of evidence”. This gross mistake was considered a result of the experts submitting to the injunctions of the chief of the civil protection service, Guido Bertolaso, who wanted Aquilani to keep calm and carry on, instead of following the best scientific evidence available. Less than a week after the highly publicised expert meeting, a 6.3 magnitude quake struck the city, killing more than 300 people.

The Facebook post, published at the end of March, suggests a link between the management of disaster in L’Aquila and the response to the covid-19 outbreak. The parallel was made all the starker by the fact that, just a couple of weeks before the post, Bertolaso had come once again to the forefront of Italian public life, this time not as chief of the civil protection service but as special advisor to the president of the Lombardy region to fight covid-19. But the analogies are deeper than the simple reappearance of the same characters. As during and after all disasters, attributions of blame are today ubiquitous. Scientists and experts are under the spotlight as they were in L’Aquila. Policymakers and the public expect highly accurate predictions and want them quickly. Depending on how a country is doing in containing the virus, experts will be praised or blamed, sometimes as much as elected representatives.

In Italy, for example, many now ask why the province of Bergamo was not declared a “red zone”, meaning that unessential companies were not closed down, in late February, despite clear evidence of uncontrolled outbreaks in several towns in the area (various other towns in Italy had been declared “red zones” since February 23rd). Only on March 8th did the national government lock down the whole region of Lombardy, extending the lockdown to the rest of the country two days later. The UK government has been similarly accused of complacency in delaying school closures and bans on mass gatherings. Public accusations voiced by journalists, researchers, and members of the public provoked blame games between state agencies, levels of government, elected representatives, and expert advisors. In Italy, following extensive media coverage of public officials’ omissions and commissions in the crucial weeks between February 21st and March 8th, regional authorities and the national government now blame each other for the delay. In a similar way, the UK government and the Mayor of London have pointed fingers at each other after photos taken during the lockdown showed overcrowded Tube trains in London.

It would be easy to argue, with the benefit of hindsight, that more should have been done, and more promptly, to stop the virus, and not only in terms of long-term prevention or preparedness, but also in terms of immediate response. Immediate response to disaster includes such decisions as country-wide lockdowns to block the spread of a virus (like we are witnessing now), the evacuation of populations from unsafe areas (such as the 1976 Guadeloupe evacuation), the halting of operations at an industrial facility or transport system (such as the airspace closure in Northern Europe after the Eyjafjallajökull eruption in 2010), or the confinement of hazardous materials (such as the removal of radioactive debris during the Chernobyl disaster). Focusing on this kind of immediate response, I offer three insights from L’Aquila that seem relevant to understanding the pressures expert advisors dealing with covid-19 are facing today in Britain.

Experts go back to being scientists when things get messy

When decisions informed by scientific experts turn out to be mistaken, experts tend to defend themselves by drawing a thick boundary between science and policy, the same boundary that they eagerly cross in times of plenty to seize the opportunities of being in the situation room. Falling back into the role of scientists, they emphasise the uncertainties and controversies that inevitably affect scientific research.

Although most of the CMR experts in L’Aquila denied that they had made reassuring statements or that they had made a “negative prediction”, after the earthquake, they still had to explain why they were not responsible for what had happened. This was done in several ways. First, the draft minutes of the meeting were revised after the earthquake so as to make the statements less categorical and more probabilistic. Secondly, they emphasised the highly uncertain and tentative nature of seismological knowledge, arguing for example that “at the present stage of our knowledge,” nothing allows us to consider seismic swarms (like the one that was ongoing in L’Aquila before April 6th 2009) as precursors of strong earthquakes, a claim which is disputed within seismology. Finally, the defendants argued that the meeting was not addressed to the population and local authorities of L’Aquila (as several announcements of the civil protection service suggested), but rather to the civil protection service only, which then had to take the appropriate measures autonomously. They claimed that scientists only provide advice, and that it is public officials and elected representatives who bear responsibility for any decision taken. This was part of a broader strategy to frame the meeting as a meeting of scientists, while the prosecution tried to frame it as a meeting of civil servants.

In Britain, the main expert body that has provided advice to the government is SAGE (Scientific Advisory Group for Emergencies), formed by various subcommittees, such as NERVTAG (New and Emerging Respiratory Virus Threats Advisory Group). These groups, along with the chief scientific adviser, Sir Patrick Vallance, have been under intense scrutiny over the past weeks. Questioned by Reuters about why the covid-19 threat level was not increased from “moderate” to “high” at the end of February, when the virus was spreading rapidly and lethally in Italy, a SAGE spokesperson responded that “SAGE and advisers provide advice, while Ministers and the Government make decisions”. When challenged about their advice, British experts also emphasised the uncertainty they faced. They depicted their meetings not as ceremonies in which the scientific solution to the covid-19 problem was revealed to the government, but rather as heated deliberations in which fresh and conflicting information about the virus was constantly being discussed: what Bruno Latour calls “science in the making”, and not what he calls “ready-made science”. For example, on March 17th Vallance stated before the Health and Social Care Select Committee that “If you think SAGE is a cosy consensus of agreeing, you’re very wrong indeed”.

Italian sociologist Luigi Pellizzoni has similarly pointed out an oscillation between the role of the expert demanding full trust from the public and the role of the scientist who, when things go wrong, blames citizens for their pretence of certainty. The result is confusion and suspicion among the public, and a reinforcement of conspiratorial beliefs according to which scientists are hired guns of powerful interests and that science is merely a continuation of politics by other means. In this way, the gulf between those who decry a populist aversion to science, and those who denounce its technocratic perversion cannot but widen, as I suggested in a recent paper.

Epidemiological (like geophysical) expert advice contains sociological and normative assumptions

Expert advice about how to respond to a natural phenomenon, like intense seismic activity or a rapidly spreading virus, will inevitably contain sociological assumptions, i.e. assumptions about how people will behave in relation to the natural phenomenon itself and in relation to what public authorities (and their law enforcers) will do. They also contain normative (or moral) assumptions, about what is the legitimate course of action in response to a disaster. In most cases, these assumptions remain implicit, which can create various problems: certain options that might be valuable are not even considered and the whole process is less transparent, potentially fostering distrust.

In the L’Aquila case, the idea of evacuating the town or of advising the inhabitants to temporarily leave their homes if these had not been retrofitted was simply out of the question. The mayor closed the schools for two days in late March, but most of the experts and decisionmakers involved, especially those who worked at the national level and were not residing in L’Aquila, believed that doing anything more radical would have been utterly excessive at the time. A newspaper condensed the opinion of US seismologist Richard Allen the day after the quake by writing that “it is not possible to evacuate whole cities without precise data” about where and when an earthquake is going to hit. The interview suggested that this impossibility stems from our lack of seismological predictive power, but in fact it is either a normative judgment based on the idea that too much time, money, and wellbeing would be dissipated without clear benefits, or a sociological judgment based on the idea that people would resist evacuation.

The important issue here is not whether a certain form of disaster response is a good or a bad idea, but that judgments of the sort “it is impossible to respond in this way” very often neglect to acknowledge the standards and information on which they are based. And there are good reasons to believe that this rhetorical loophole is especially true of judgments that, by decrying certain measures as impossible, simply ratify the status quo and “business as usual”. Our societies rest on a deeply ingrained assumption that “the show must go on”, so that reassuring people is much less problematic than alarming them that something terrible is going to happen. Antonello Ciccozzi, an anthropologist who testified as an expert witness in the L’Aquila trial, expressed this idea by arguing that while the concepts of alarmism and false alarm are well established in ordinary language (and also have a distinctive legal existence, as in article 658 of the Italian criminal code, which expressly proscribes and punishes false alarm [procurato allarme]), their opposites have no real semantic existence, occupying instead a “symbolic void”. This is why he coined a new term, “reassurism” (rassicurazionismo), to mean a disastrous and negligent reassurance, which he used to interpret the rhetoric of earth scientists and public authorities in 2009 and which he has applied to the current management of the covid-19 crisis.

Pushing the earthquake-virus analogy further, several clues suggest that the scientists that provided advice on covid-19 in Britain limited the range of possible options by a great deal because they were making sociological and normative assumptions. According to Reuters, “the scientific committees that advised Johnson didn’t study, until mid-March, the option of the kind of stringent lockdown adopted early on in China”, on the grounds that Britons would not accept such restrictions. This of course contained all sorts of sociological and moral assumptions about Britain, China, about democracies and autocracies, about political legitimacy and institutional trust. It is hard to establish whether the government explicitly delimited the range of possible policies on which expert advice was required, whether experts shared these assumptions anyway, or whether experts actually influenced the government by excluding certain options from the start. But by and large, these assumptions remained implicit. They were properly questioned only after several European countries started to adopt stringent counter-measures to stop the virus and new studies predicted up to half a million deaths in Britain, forcing the government to reconsider what had previously been deemed a sociological or normative impossibility.

It is true that, in stark contrast to the CMR in L’Aquila, where social science was not represented at all, SAGE has activated its subsection of behavioural science, called SPI-B (Scientific Pandemic Influenza Advisory Committee – Behaviour). Several commentators have argued that this section, by advancing ideas that resonated with broader libertarian paternalistic sensibilities among elite advisors and policymakers, had a significant influence in the early stage of the UK response to covid-19. There is certainly some truth to that, but my bet is that the implicit assumptions of policymakers and epidemiologists were much more decisive. Briefs of SPI-B meetings in February and March reveal concerns about the unintended consequences of, and social resistance to, measures such as school closures and the isolation of the elderly, but they are far from containing a full-fledged defence of a “laissez faire” approach. The statements reported in the minutes are striking for their prudence, emphasising the uncertainties and even disagreements among members of the section. This leads us to consider a third point, i.e. the degree to which experts, along with their implicit or explicit assumptions, managed to exert an influence over policymakers and were able to confront them when they had reasons to do so.

Speaking truth to power or speaking power to truth?

Scientists gain much from being appointed to expert committees: prestige; the prospect of influencing policy; better working conditions; less frequently, financial incentives. Politicians also gain something: better, more rational decisions that boost their legitimacy; the possibility of justifying predetermined policies on a-political, objective grounds; a scapegoat that they can use in case things go wrong; an easy way to make allies and expand one’s network by distributing benefits. But although both sides gain, they are far from being on an equal footing: expert commissions and groups are established by ministers, not the other way around. This platitude testifies to the deep asymmetry between experts and policymakers. We have good reasons to think that, under certain circumstances, such an asymmetric relation prevents scientific experts from fully voicing their opinions on the one hand, and emboldens policymakers into thinking that they should not be given lessons by their subordinates on the other. Thanks to the high popularity of the 2019 television series Chernobyl, many now find the best exemplification of such arrogance and lack of criticism in how the Ukrainian nuclear disaster was managed by both engineers and public officials.

There is little doubt that something of the sort occurred in L’Aquila. Several pieces of evidence show that Bertolaso did not summon the CMR meeting to get a better picture of the earthquake swarm that was occurring in the region. In his own words, the meeting was meant as a “media ploy” to reassure the Aquilani. But how could he be so sure that the situation in L’Aquila did not require his attention? It seems that one of the main reasons is that he had his own seismological theory to make sense of what was going on. Bertolaso believed that seismic swarms do not increase the odds of a strong earthquake, but on the contrary that they decrease such odds because small shocks discharge the total amount of energy contained in the earth. Most seismologists would disagree with this claim: low-intensity tremors technically release energy, but this does not amount to a favourable discharge of energy that decreases the odds of a big quake because magnitudes are based on a logarithmic scale, and a magnitude 4 earthquake releases a negligible quantity of energy compared to that released by a magnitude 6 earthquake (and, more generally, to the energy stored in an active fault zone). But scientists appear to have been much too cautious in confronting him and criticising his flawed theory. Bertolaso testified in court that in the course of a decade he had mentioned the theory of the favourable discharge of energy “dozens of times” to various earth scientists (including some of the defendants) and that “nobody ever raised any objection about that”. Moreover, both Bertolaso’s deputy and a volcanologist who was the most senior member of the CMR alluded to the theory during the meeting and in interviews given to local media in L’Aquila. A seismologist testified that he did not feel like contradicting another member of the commission (and a more senior one at that) in front of an unqualified public and so decided to change the topic instead. 
Such missed objections created the conditions under which the “discharge of energy” as a “positive phenomenon” became a comforting refrain that circulated first among civil protection officials and policymakers, and then among the Aquilani as well.
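The arithmetic behind the seismologists’ objection can be sketched with the standard Gutenberg–Richter energy relation (a rough illustration of the logarithmic-scale point, not evidence from the trial):

```python
# Approximate radiated seismic energy via the Gutenberg-Richter relation:
# log10(E) = 1.5 * M + 4.8, with E in joules and M the magnitude.
# Because magnitude is logarithmic, each unit of magnitude multiplies
# the released energy by roughly 10**1.5 (about 32 times).

def quake_energy_joules(magnitude: float) -> float:
    """Approximate radiated energy (joules) for a given magnitude."""
    return 10 ** (1.5 * magnitude + 4.8)

# Energy ratio between a magnitude 6 and a magnitude 4 earthquake:
ratio = quake_energy_joules(6.0) / quake_energy_joules(4.0)
print(f"A magnitude 6 quake releases roughly {ratio:.0f}x "
      "the energy of a magnitude 4 quake")
```

A magnitude 6 event thus releases about a thousand times the energy of a magnitude 4 event, which is why a swarm of small tremors cannot meaningfully “discharge” the energy stored in an active fault.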

Has something similar occurred in the management of the covid-19 crisis in Britain? As no judicial inquiry has taken place there, the limited evidence available allows nothing more than speculative conjecture. However, there are two main candidate theories that, although lacking proper scientific support, might have guided the actions of the government thanks to their allure of scientificity: “behavioural fatigue” and “herd immunity”. As mentioned above, many think that behavioural fatigue, according to which people would not comply with lockdown restrictions after a certain period of time so that strict measures could be useless or even detrimental, has been the sociological justification of a laissez faire (if not social Darwinist) attitude to the virus. But this account seems to give too much leverage to behavioural scientists who, for the most part, were cautious and divided on the social consequences of a lockdown. This also finds support in the fact that no public official to my knowledge referred to “behavioural fatigue” but rather simply to “fatigue”, without explicit reference to an expert report or an authoritative study (as a matter of fact, none of the SPI-B documents ever mentions “fatigue”). I’d like to propose a different interpretation: instead of being a scientific theory approved by behavioural experts, it was rather a storytelling device with a common-sense allure that allowed it to take on a life of its own among policy circles, ending up in official speeches and interviews. The vague notion of “fatigue”, which reassuringly suggested that the country and the economy could go on as usual, might have ended up being accepted with little suspicion by many experts as well, especially those of the non-behavioural kind. The concept could have served both as a reassuring belief for public officials and as an argument that could be used to justify delaying (or avoiding) a lockdown.
The circulation of “herd immunity” might have followed a similar pattern. Although a scientifically legitimate concept, there is evidence that, along with similar formulations such as “building some immunity”, it was never a core strategy of the government, but rather part of a communicative repertoire that could be invoked to justify a delay of the lockdown as well as measures directed only at certain sections of the population, such as the elderly. Only on 23 March did the government change track and abandon these concepts altogether, taking measures similar to those of other European countries.

~ ~ ~

The analogy between how Italian civil protection authorities managed an earthquake swarm in L’Aquila and how the British government responded to covid-19 cannot be pushed too far. Earthquakes and epidemics have different temporalities (a disruptive event limited in space and time on the one hand, a long-lasting process with no strict geographical limits on the other), are subject to different predictive techniques, and demand highly different responses. While a large proportion of Aquilani blamed civil protection authorities immediately after the earthquake, Boris Johnson’s approval rating has improved from March to April 2020. However, what happened in L’Aquila remains, to paraphrase Charles Perrow, a textbook case of a “normal accident” of expertise, i.e. a situation in which expert advice ended up being catastrophically bad for systemic reasons, and notably for how the science-policy interface had developed in the Italian civil protection service. As such, there is much that expert advisors and policymakers can learn from it, whether they are giving advice and responding to earthquakes, nuclear accidents, terrorism, or a global pandemic.

Federico Brandmayr