Expert Bites with Arsenii Khitrov

Federico Brandmayr

21 November 2020

A sociologist and philosopher of wide-ranging interests, Arsenii Khitrov is currently writing up his doctoral dissertation at the University of Cambridge on how politically themed Hollywood television series are made and what role political and social experts play in their production. He has recently published a brilliant article in which he reconstructs the relationships of conflict, competition and collaboration between the film industry, state agencies, research organisations, social movements, and independent experts. We had the chance to meet Arsenii in person before the coronavirus pandemic started and asked him a few questions about what is distinctive about being an expert for the film industry.

Considering your research, what makes a good expert? 

In the domain I study, which is the field of television production in America today, I focus on a very particular type of expertise. Within this project, I call ‘experts’ the people who come to the entertainment industry from the outside and bring their knowledge to the writers, producers, and actors. These ‘experts’ most commonly come from the government, social movements, universities, think tanks, and the medical, media, military, intelligence, and law enforcement communities. Some of them represent these organisations and communities and lobby on their behalf; others are private expertise entrepreneurs exchanging their experiences and knowledge for success in Hollywood.

In both cases, Hollywood professionals have the power to define the value of these experts, and the industry defines their value depending on how much they can offer in terms of what Hollywood specifically needs. Experts’ formal qualifications, past and present affiliations, access, and experiences are important, but what is more important is how well they can recognise a particular set of expectations and dispositions that Hollywood professionals share, which I call the ‘Hollywood habitus’, and how well they can perform it. In other words, good experts are the experts that Hollywood professionals see as good: whoever plays the Hollywood game well.

What are the pressures experts face in your field?

The main pressures experts experience in Hollywood are the explicit and tacit requirements, expectations, and hierarchies that define both the creative and the management sides of the production process. Television series in the USA are commonly written by a group of writers working on a very tight schedule. Shooting often starts when the writing is still taking place. Experts can be invited to writers’ rooms and on set, and what the writers and producers expect from them is to be as quick, open-minded, and inventive as possible. Hollywood professionals do not expect the experts to criticise what they write or shoot: they don’t want to hear that what they are doing is not possible or unrealistic. Rather, they want experts to ‘pitch solutions’, as many of my research participants told me. However realistic television makers want their products to be, if realism prevents them from creating a good drama, realism must go. If an expert is too insistent on just one version of realism, s/he must make way for a more creative expert.

Taking a step away from the specific case I am studying, I would consider the relationality of experts. By this, I mean the question of whether experts are not actually entities in themselves, but rather intermediaries between two spheres: the sphere that accumulates knowledge and the sphere that receives it. If this is indeed the case, then it is worth thinking about how much the intermediary position the expert occupies requires the expert to adapt to the requirements and expectations of expertise recipients. In other words, how much the type of knowledge experts can provide is defined not only by the knowledge they possess, but also by the receivers’ expectations and the experts’ intermediary position.

Have you observed any significant changes occurring in recent times in the way experts operate?

I can answer this question in relation to the field I study. The biggest change that I know of in the way experts operate in Hollywood is that their work has become more institutionalised and routinised, especially when it comes to experts representing social movements. In the late 1960s, social movements gained momentum in relation to Hollywood. They bombarded Hollywood with criticism, boycotted films and television programmes, wrote letters and sent petitions to networks. They became a real force the industry had to reckon with, and the industry sought ways to lessen their pressure. Various intermediary institutions and mediating procedures started emerging in the 1970s and continue emerging today, and these incorporate social movements in the industry. One of the ways the pressure of social movements was channelled into less acute forms of power struggle was through the work of technical advisors and consultants specialising in pressing social and political issues. The way these experts work with and in Hollywood has become institutionalised and routinised, which slightly decreased the pressure of social movements on Hollywood and simultaneously made Hollywood more accessible to social movements.

Do you envision any changes in the role of experts in the future?

If I step away from my research project and speculate about the role of experts in Western societies at large, I would say that it is important to address what many have called the ‘crisis of expertise’, i.e. mistrust of experts and expertise on the part of some groups of the population, media outlets, and some public officials. If we look at the social world as an arena where various groups fight for resources and power, it is not surprising that someone is under attack, that someone is blamed for alleged troubles. This is simply how power struggles unfold: any method is acceptable, and if blaming experts works (i.e. if it mobilises political power), then why not take this route. Yet why does accusing experts help mobilise power?

Perhaps this is due to a unique breakthrough in the way information is performed, practised, stored, and accessed, which brings four distinct processes together. First, the amount of information available in the world, scientific and otherwise, is increasing at a terribly fast pace. Second, experts have to specialise ever more narrowly to be able to know at least something with certainty. Third, online search engines, databases, and online media make this plethora of information easily accessible to almost anyone for free. Fourth, unequally developed and unequally accessible educational systems help some adapt to these changes faster, while making others lag behind. The way these four processes continue to unfold will define the role of experts.



Trusting the experts takes more than belief

Matt Bennett

As part of a series on expertise and COVID-19, the Expertise Under Pressure team asked Matt Bennett, currently a lecturer in Philosophy at the University of Cambridge, to write a piece based on his new article “Should I do as I’m told? Trust, Experts, and COVID-19” forthcoming in the Kennedy Institute of Ethics Journal.

Trusting the science

Radical public health responses to the pandemic around the world have asked us to make unprecedented changes to our daily lives. Social distancing measures require compliance with recommendations, instructions, and legal orders that come with undeniable sacrifices for almost all of us (though these sacrifices are far from equally distributed). These extreme public measures depend for their success on public trust.

Trust in these measures is both a necessary and desirable feature of almost all of the public health strategies currently in place. Necessary, because it seems fair to assume that such extreme measures cannot be effectively introduced, much less maintained, solely through policing or other forms of direct state coercion. These measures require a significant degree of voluntary compliance if they are to work. And desirable, because even if totalitarian policing of pandemic lockdown were viable, it also seems fair to assume that most of us would prefer not to depend on a heavily policed public health strategy.

The same kind of trust is necessary for many kinds of policy, particularly where that policy requires citizens to comply with rules that come at significant cost, and coercion alone would be ineffective. But what is distinctive about our pandemic policies is that they depend not just on public trust in policy, but public trust in the science that we are told informs that policy.

When governments follow the science, their response to the pandemic requires public trust in experts, raising questions about how we might develop measures not just to control the spread of the virus, but to maintain public confidence in the scientific recommendations that support these measures. I address some of these questions in this post (I have also addressed these same questions at greater length elsewhere).

My main point in what follows is that when public policy claims to follow the science, citizens are asked not just to believe what they are told by experts, but to follow expert recommendations. And when this is the case, it can be perfectly reasonable for a well-informed citizen to defer to experts on the relevant science, but nonetheless disagree with policy recommendations based on that science. Until we appreciate this, we will struggle to generate the public support for science-led policy that some of our most urgent political challenges demand.

Following the science?

Before I get to questions about the kind of public trust required by science-led policy, we need to first address the extent to which pandemic responses have indeed been led by experts. In the UK, the government’s publicly visible response to the pandemic began with repeated claims that ministers were “following the science”. 10 Downing Street began daily press conferences in March, addresses to journalists and the public in which the Prime Minister was often accompanied by the government’s Chief Medical Officer, Chris Whitty, or Chief Scientific Advisor, Patrick Vallance (sometimes both). In the months that followed Vallance and Whitty played a prominent role in communicating the government’s public health strategy, standing alongside government ministers in many more press conferences and appearing on television, on radio, and in print.

But have ministers in fact followed the science? There are reasons to be sceptical. One thing to consider is whether a reductive appeal to “the science” hides a partial or selective perspective informing government decisions. There is of course no one “science” of the pandemic. Different disciplines contribute different kinds of relevant information, and within disciplines experts disagree.

And some have observed that the range of disciplines informing the UK government in the early spring was inexplicably narrow. The Scientific Advisory Group for Emergencies (SAGE) evidence cited by government in March included advice from epidemiologists, virologists, and behavioural scientists. But some disciplines were conspicuous in their absence among the government’s experts: economists, sociologists, and psychologists, for example, can provide important insights into the economic and social effects of lockdown that ought to be considered by any genuinely evidence-based policy.

Another reason to be sceptical about the UK government’s claim to follow the science is that several SAGE meetings included Boris Johnson’s infamous Chief Advisor Dominic Cummings, with some members of SAGE stating they were worried about undue influence from Cummings. The problem is not just that “the science” government claimed to follow was incomplete, but also that the government could well have been directing the advice it was claiming to follow.

And perhaps the claim to “follow the science” has been exaggerated. While ministers defer to scientists, those same scientists have been eager to point out that their role is exclusively advisory. Government experts have consistently cleaved to a division of labour that is a cornerstone of so-called “evidence-based policy”: experts provide facts, politicians make decisions.  

Nonetheless, it has been clear throughout the UK’s response to the pandemic that government has seen fit to communicate its policy as if it were unequivocally following expert recommendations. Daily 10 Downing St press conferences, running from March to June, began with Boris Johnson at the podium flanked on either side by the government’s Chief Medical Officer and Chief Scientific Advisor. The optics of this are not hard to read.

And even in recent weeks, in which the government has stopped its daily press conferences and dialled down the rhetoric of its science-led policy, ministers still claim that they defer to expert advice, despite those same experts repeatedly distancing themselves from government decision making. As recently as the end of July, in announcing the reintroduction of stricter lockdown measures in parts of Greater Manchester, Lancashire, and Yorkshire, Matt Hancock repeatedly deferred to evidence that the rise in infections in these areas was caused not by a return to work, or the reopening of pubs, but by people visiting each other in their homes.

We are still being asked by the government to trust in recommendations provided by experts, even if the government is not being led by evidence in the way it would have us believe. The communications strategy may not be honest, but it has been consistent, and because the government is inviting the public to think of its policy as science-led, its public health strategy still depends on public trust in science. We are asked to accept that government is following the recommendations of experts, and that we must follow suit.

Photo by Belinda Fewings on Unsplash

Believing what we are told

I have said above that public trust in science is both a necessary and desirable feature of an effective public health response to the pandemic. But it is desirable only insofar as it is well placed trust. I presume we don’t want the public to put their faith in just any self-identified expert, regardless of their merits and the level of their expertise. We want the public to trust experts, but only where they have good reason to do so. One important question this raises is what makes trust in experts reasonable, when it is. A second important question is what we can do to ensure that the conditions for reasonable trust in experts are indeed in place.

Philosophers of science and social epistemologists have had a lot to say about when and why it is reasonable to trust experts. The anxiety that many philosophers of epistemic trust respond to is a perceived threat to knowledge about a range of basic facts that most of us don’t have the resources to check for ourselves. Do I know whether the Earth is flat without travelling? Should I believe that penicillin can be used to treat an infection without first studying biochemistry? Though it’s important that we know such things, knowledge of this kind doesn’t meet the same evidence requirements that apply to beliefs about, say, where I left my house keys.

Thankfully, there is an influential way of rescuing knowledge about scientific matters that most of us aren’t able to verify for ourselves. In the 1980s philosopher of science John Hardwig proposed a principle that, if true, rescues the rationality of the beliefs that we hold due to our deference to experts.

Hardwig maintained that if an expert tells me that something is the case this is enough reason for me to believe it too, provided that I have good reason to think that the expert in question has good reason to believe what they tell me. Say that I have a doctor who I see regularly, and I have plenty of evidence to believe that they are competent, well-informed, and sincere. On this basis I have good reason to think that my doctor understands, for example, how to interpret my blood test results, and will not distort the truth when they discuss the results with me. I thus have good reason to think that the doctor has good reason to believe what they tell me about my test results. This is enough, Hardwig maintains, for me to form my own beliefs based on what they tell me, and my epistemic trust has good grounds.
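Hardwig’s principle can be put schematically (my gloss, not Hardwig’s own notation). Writing R_S(p) for ‘S has good reason to believe p’, the principle says:

\[
R_{\text{me}}\bigl(R_{\text{expert}}(p)\bigr) \;\Rightarrow\; R_{\text{me}}(p).
\]

That is, my good reason to think the expert has good reason to believe p is itself good reason for me to believe p.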

Doing what we are told

But can the same be said when the expert isn’t just asking me to believe something, but is recommending that I do something? Is it still reasonable to trust science when it doesn’t just provide policy-relevant facts, but leads the policy itself?

Consider an elaboration of the doctor example. Say I consult my trusted doctor to discuss the option of a Do Not Attempt CPR (DNACPR) instruction. My doctor is as helpful as always, and provides me with a range of information relevant to the decision. In light of my confidence in the doctor’s professionalism, and if we accept Hardwig’s principle, we can say that I have good reason to believe the information my doctor gives me.

Now consider how I should respond if my doctor were to tell me that, in light of facts about CPR’s success and about my health, I should sign a DNACPR (and set aside the very worrying medical ethics violation involved in a doctor directing a patient in this way regarding life-sustaining treatment). I have good reason to believe the facts they have given me relevant to a DNACPR. Do I also have good reason to follow their advice on whether I should sign? Not necessarily.

For one thing, my doctor’s knowledge regarding the relevant facts might not reliably indicate their ability to reason well about what to do in light of the facts. My doctor might know everything there is to know about the risks, but also be a dangerously impulsive person, or conversely an excessively cautious, risk-averse person.

And even if I think my doctor probably has good reason to think I should sign – I believe they are as wise as they are knowledgeable – their good reason to think I should sign is not thereby a good reason for me. The doctor may have, for instance, some sort of perverse administrative incentive that encourages them to increase the number of signed DNACPRs. Or, more innocently, they may have seen too many patients and families suffer the indignity of a failed CPR attempt at the end of life, or survive CPR only to live for just two or three more days with broken ribs and severe pain. And maybe I have a deeply held conviction in the value of life, or in the purpose of medicine to preserve life at all costs, and I might think this trumps any purported value of a dignified end of life. I may not even agree with the value of dignity at the end of life in the first place.

Well-placed trust in the recommendation of an expert is more demanding than well-placed trust in their factual testimony. A good reason for an expert to believe something factual is thereby a good reason for me to believe it too. But a good reason for an expert to think I should do something is not necessarily a good reason for me to do it. And this is because what I value and what the expert values can diverge without either of us being in any way mistaken about the facts of our situation. I can come to believe everything my doctor tells me about the facts concerning CPR, but still have very good reason to think that I should not do what they are telling me to do.

Something additional is needed for me to have well-placed trust in expert recommendations. When an expert tells me not just what to believe, but what I should do, I need assurance that the expert understands what is in my interest, and that they make recommendations on this basis. An expert might make a recommendation that accords with the values that I happen to have (“want to save the NHS? Wear a face covering in public”) or a recommendation that is in my interest despite my occurrent desires (“smoking is bad for you; stop it”).

If I have good reason to think that my doctor, or my plumber, or, indeed, the state epidemiologist, has a good grasp of what is in my interest, and that their recommendations are based on this, then I am in a position to have well-placed trust in their advice. But without this assurance, I may quite reasonably distrust or disagree with expert recommendations, and not simply out of ignorance or some vague “post-truth” distrust of science in general.

Photo by Nick Fewings on Unsplash

Cultivating trust

This demandingness of well-placed trust in expert recommendations, as opposed to expert information, has ramifications for what we can do to cultivate public trust in (at least purportedly) expert-led policy.

Consider a measure sometimes suggested to increase levels of public trust in science: increased transparency. Transparency can help build confidence in the sincerity of scientists, a crucial requirement for public trust. It can also, of course, help us to see when politics is at greater risk of distorting the science (e.g. Dominic Cummings attending SAGE meetings), and allows us to be more discerning with where we place our trust.

Transparency can also mitigate tendencies to think that anything less than a completely value-free science is invalidated by bias and prejudice. When adjudicating on matters of fact, we can distinguish good from bad use of values in science, depending on whether they play a direct or indirect role in arriving at factual conclusions. Thus we might for instance allow values to determine what we consider an acceptable level of risk of false positives for a coronavirus test, but we would not want values to determine how we interpret the results of an individual instance of the test (“I don’t want to have coronavirus – let’s run the test again”). Transparency can help us be more nuanced in our evaluation of whether a given scientific conclusion has depended on value-judgements in a legitimate way.

But it seems to me that transparency is not so effective when we are being asked to trust in the recommendations of experts. One reason for this is that it is far less easy for us to distinguish good and bad use of values in expert advice. This is because values must always play a direct role in recommendations about what to do. The obstacle to public trust in science-led policy is not, as with public trust in scientific fact, the potential for values to overreach. The challenge is instead to give the public good reason to think that the values that inform expert recommendations align with the values of those to whom the advice is issued.

There are more direct means of achieving this than transparency. I will end with two such means, both of which can be understood as ways to democratise expert-led policy.

One helpful measure to show the public that a policy does align with their interest is what is sometimes called expressive overdetermination: investing policy with multiple meanings such that it can be accepted from diverse political perspectives. Reform to French abortion law is sometimes cited as an example of this. After decades of disagreement, France adopted a law that made abortion permissible provided the individual has been granted an unreviewable certification of personal emergency. This new policy was sufficiently polyvalent to be acceptable to the most important parties to the debate; religious conservatives understood the certification to be protecting life, while pro-choice advocates saw the unreviewable nature of the certification as protection for the autonomy of women. The point was to find a way of showing that the same policy can align with the interests of multiple conflicting political groups, rather than to ask groups to either set aside, alter, or compromise on their values.

A second helpful measure, which complements expressive overdetermination, is to recruit spokespersons who are identifiable to diverse groups as similar to them in political outlook. This is sometimes called identity vouching. The strategy is to convince citizens that the relevant scientific advice, and the policy that follows that advice, is likely not to be a threat to their interests because that same consensus is accepted by those with similar values. Barack Obama attempted such a measure when he established links with Evangelical Christians such as Rick Warren, one of the 86 evangelical leaders who had signed the Evangelical Climate Initiative two years before the beginning of Obama’s presidency. The move may have had multiple intentions, but one of them is likely to have been an attempt to win over conservative Christians to Obama’s climate-change policy.

Expressive overdetermination and identity vouching are ways of showing the public that a policy is in their interests. Whether they really are successful at building public trust in policy, and more specifically in science-led policy, is a question that needs an empirical answer. What I have tried to show here is that we have good theoretical reasons to think that such additional measures are needed when we are asking the public not just to believe what scientists tell us is the case, but to comply with policy that is led by the best science.

Public trust in science comes in at least two very different forms: believing expert testimony, and following expert recommendations. Efforts to build trust in experts would do well to be sensitive to this difference.

§§§

About Matt Bennett

Matt Bennett is a lecturer with the Faculty of Philosophy at the University of Cambridge.  His research and teaching cover topics in ethics (theoretical and applied), political philosophy, and moral psychology, as well as historical study of philosophical work in these areas in the post-Kantian tradition. Much of his research focuses on ethical and political phenomena that are not well understood in narrowly moral terms, and he has written about non-moral forms of trust, agency, and responsibility. From October 2020 Matt will be a postdoctoral researcher with the Leverhulme Competition and Competitiveness project at the University of Essex, where he will study different forms of competition and competitiveness and the role they play in a wide range of social practices and institutions, including markets, the arts, sciences, and sports.


Economists in the City #7

Regions and Cities: Policy Narratives and Policy Challenges in the UK

by Philip McCann

Many of the narratives that now dominate policy debates in the United Kingdom and Europe regarding questions of interregional convergence and divergence are derived from observations overwhelmingly based on the experience of the United States, and to a much smaller extent Canada and Australia (Sandbu 2020). These narratives often focus on the supposed ‘Triumph of the City’ (Glaeser 2011) and the problems of ‘left behind’ small towns and rural areas. Moreover, many debates about ‘the city’ – which immediately jump to discussions of London, New York, San Francisco, Los Angeles, Paris, Tokyo, etc. – often have very little relevance for thinking about how the vast majority of urban dwellers live and work, in most parts of the world.

Unfortunately, however, the empirical evidence suggests that many of these narratives only have very limited applicability to the European context (Dijkstra et al. 2015). The European context is a patchwork of quite differing national experiences, and these types of US-borrowed narratives only reflect the urban and rural growth experiences of a few western European countries such as France, plus the central European former-transition economies (Dijkstra et al. 2013; McCann 2015).

Interregional divergence has indeed been a feature of most countries since the 2008 crisis and this is also likely to increase in the wake of the Covid-19 crisis, but not necessarily in the way that these US narratives suggest. Indeed, it is important to consider these issues in detail because interregional inequality has deep and pernicious social consequences (Collier 2018), without necessarily playing any positive role in economic growth. Across the industrialised world there is no relationship between national economic growth and interregional inequality (Carrascal-Incera et al. 2020), and more centralised states tend to be more interregionally unequal and to have much larger primate cities. In the case of the UK, very high interregional inequality and an over-dominance by London has been achieved with no national growth advantage whatsoever over competitor countries.

Blackpool, Lancashire, England, UK. (Photo by Michael D Beckwith, wikicommons)

The problems associated with narrative-transfer leading to policy-transfer are greatly magnified in the UK due to our poor language skills, whereby UK media, think-tanks and ministries are only really able to benchmark the UK experience against the experiences of other English-speaking countries such as the USA, Canada and Australia. Yet, when it comes to urban and regional issues this particular grouping of countries in reality represents just about the least applicable comparator grouping possible. These countries are each larger than the whole of Europe, they have highly polycentric national spatial structures, and they are federal countries, whereas the UK is smaller than Wyoming, is almost entirely monocentric, and is an ultra-centralised top-down unitary state with levels of local government autonomy akin to Albania or Moldova (OECD 2019).

These problems are now evident again in the UK in the debates regarding ‘levelling up’. When we think about the role of cities and regions in our national growth story, in the case of the UK it is very difficult to translate many of the ideas currently popular in the North American urban economics arena to the specifics of the UK context. The literature on agglomeration economies, along with the widespread international empirical evidence, confirms that cities are key drivers of national economic growth, and evidence from certain countries suggests that nowadays there are large and growing productivity differences between urban and rural regions.

Yet, in the UK case the evidence suggests that these patterns are only partially correct. Some very prosperous urban areas such as London, Edinburgh, Oxford, Bristol, Reading and Milton Keynes contribute heavily to the national economic growth story. On the other hand, many of the UK’s large cities located outside of the South of England underperform by both national and international standards and contribute much less to economic performance than might otherwise be expected on the basis of international comparators (McCann 2016).

What are the features of this under-performance? Firstly, in the UK there is almost no relationship between localised productivity and the size of the urban area (Ahrend et al. 2015; OECD 2015), especially once London is removed from the analysis, whereas positive city size-productivity relationships are widely observed in many countries. Secondly, there are only very small productivity differences between the performance of large cities and urban areas, between small cities and towns, and between urban areas in general and rural areas (ONS 2017). Indeed, many of the UK’s most prosperous places are small towns and rural areas, while some of the poorest places in the UK are large cities. Thirdly, sectoral explanations play an ever-decreasing role (Martin et al. 2018) and interregional migration has remained largely unchanged for four decades (McCann 2016). Fourthly, a simple and mechanistic reading of Zipf’s Law tells us little or nothing about urban growth or productivity challenges in the UK. As such, many simple urban economic textbook-type analyses are of little or no use for understanding the UK regional and urban context, as are stylised discussions about so-called MAR-vs-Jacobs externalities, spatial sorting, or ‘people-based versus place-based’ policies.

The UK economy is one of the world’s most interregionally unbalanced industrialised economies (McCann 2016, 2019, 2020; Carrascal-Incera et al. 2020), characterised as it is by an enormous core-periphery structure. UK inequalities between regions are very high by international standards, and inequalities between its cities are also quite high by international standards, but less so than for regions. This is because of the regional spatial clustering, partitioning and segregation of groups of prosperous cities, towns and rural areas into certain regions (broadly the South and Scotland), and of groups of low-prosperity cities, towns and rural areas into others (the Midlands, the North, Wales, Northern Ireland).

In particular, the differential long-run performance of UK cities by regional location is very stark. Obviously, there are low prosperity places in the South (Hastings, Clacton, Tilbury, etc.) and prosperous places elsewhere (Ripon, Chester, Warwick, etc.), but what is remarkable is the extent to which these exceptions are almost entirely towns. Indeed, many of the most prosperous places in the Midlands and the North are also towns, while the South also accounts for huge numbers of very prosperous small towns and villages. Unless our policy-narratives closely reflect the realities of the urban and regional challenges facing the UK it is unlikely that policy actions will be effective, and narrative-transfer from the US to the UK is often very unhelpful.

In the case of the current ‘levelling up’ debates these issues are especially important. Given the seriousness and the scale of the situation that we are in, our policy narratives should be led by a careful reading of the literature and a detailed examination of the data on cities (Martin et al. 2018), trade (Chen et al. 2018), connectivity and spatial structures (Arbabi et al. 2019; 2020) in the context of widespread consultation (UK2070 2020), and not by the skills of speechwriters or ideologically-led partisan politics. Brexit alone will almost certainly lead to greater long-term interregional inequalities (Billing et al. 2019; McCann and Ortega-Argilés 2020a,b), and Covid-19 is likely to further exacerbate these inequalities.

Our hyper-centralised governance set-up is almost uniquely ill-equipped to address these challenges, and while the setting up of the three Devolved Administrations and the recent movement towards City-Region Combined Authorities are steps in the right direction, a much more fundamental reform of our governance systems is required in order to address these challenges. These devolution (not decentralisation!) issues are the difficult institutional challenges that must be focussed on in order to foster the types of agglomeration spillovers and linkages that we would want to see across the country, whereby cities can underpin the economic buoyancy of their regional, small-town and rural hinterlands.

A key test of this will be the design of the new ‘Shared Prosperity Fund’, the replacement for EU Cohesion Policy, which for many years had played such an important role in the economic development of the weaker regions of the UK. If the Shared Prosperity Fund programme and processes are devolved, cross-cutting in their focus, allow for specific and significant local tailoring, and are also long-term in nature, then this will be an indication that institutional change is moving in the right direction. But if this Fund is organised in a largely top-down, sectoral, centrally-designed and orchestrated fashion, and is also competitive in nature, then this will clearly indicate otherwise… Let’s see.

References

Ahrend, R., Farchy, E., Kaplanis, I., and Lembcke, A., 2014, “What Makes Cities More Productive? Evidence on the Role of Urban Governance from Five OECD Countries”, OECD Regional Development Working Papers 2014/05, Organisation for Economic Cooperation and Development, Paris

Arbabi, H., Mayfield, M., and McCann, P., 2019, “On the Development Logic of City-Regions: Inter- Versus Intra-City Mobility in England and Wales”, Spatial Economic Analysis, 14.3, 301-320

Arbabi, H., Mayfield, M., and McCann, P., 2020, “Productivity, Infrastructure, and Urban Density: An Allometric Comparison of Three European City-Regions across Scales”, Journal of the Royal Statistical Society: Series A, 183.1, 211-228

Billing, C., McCann, P., and Ortega-Argilés, R., 2019, “Interregional Inequalities and UK Sub-National Governance Responses to Brexit”, Regional Studies, 53.5, 741-760

Carrascal-Incera, A., McCann, P., Ortega-Argilés, R., and Rodríguez-Pose, A., 2020, “UK Interregional Inequality in a Historical and International Comparative Context”, National Institute Economic Review, Forthcoming

Chen, W., Los, B., McCann, P., Ortega-Argilés, R., Thissen, M., van Oort, F., 2018, “The Continental Divide? Economic Exposure to Brexit in Regions and Countries on Both Sides of the Channel”, Papers in Regional Science, 97.1, 25-54

Collier, P., 2018, The Future of Capitalism: Facing the Anxieties, Penguin Books, London

Dijkstra, L., Garcilazo, E., and McCann, P., 2013, “The Economic Performance of European Cities and City-Regions: Myths and Realities”, European Planning Studies, 21.3, 334-354

Dijkstra, L., Garcilazo, E., and McCann, P., 2015, “The Effects of the Global Financial Crisis on European Regions and Cities”, Journal of Economic Geography, 15.5, 935-949

Glaeser, E.L., 2011, Triumph of the City: How Our Greatest Invention Makes Us Richer, Smarter, Greener, Healthier, and Happier, Penguin Press, New York

Martin, R., Sunley, P., Gardiner, B., Evenhuis, E., and Tyler, P., 2018, “The City Dimension of the Productivity Problem: The Relative Role of Structural Change and Within-Sector Slowdown”, Journal of Economic Geography, 18.3, 539-570

McCann, P., 2015, The Regional and Urban Policy of the European Union: Cohesion, Results-Orientation and Smart Specialisation, Edward Elgar, Cheltenham

McCann, P., 2016, The UK Regional-National Economic Problem: Geography, Globalisation and Governance, Routledge, London

McCann, P., 2019, “Perceptions of Regional Inequality and the Geography of Discontent: Insights from the UK”, Regional Studies, 53.5, 741–760

McCann, P., 2020, “Productivity Perspectives: Observations from the UK and the International Arena”, in McCann, P., and Vorley, T., (eds.), Productivity Perspectives, Edward Elgar, Cheltenham

McCann, P., and Ortega-Argilés, R., 2020a, “Regional Inequality”, in Menon, A., (ed.), Brexit: What Next?, UK in a Changing Europe.

McCann, P., and Ortega-Argilés, R., 2020b, “Levelling Up, Rebalancing and Brexit?”, in McCabe, S., and Neilsen, B., (eds.), English Regions After Brexit, Bitesize Books, London

OECD, 2015, The Metropolitan Century: Understanding Urbanisation and its Consequences, Organisation for Economic Cooperation and Development, Paris

OECD, 2019, Making Decentralisation Work, Organisation for Economic Cooperation and Development, Paris

ONS, 2017, “Exploring Labour Productivity in Rural and Urban Areas in Great Britain: 2014”, UK Office for National Statistics.

Sandbu, M., 2020, The Economics of Belonging, A Radical Plan to Win Back the Left Behind and Achieve Prosperity for All, Princeton University Press, Princeton NJ

UK2070, 2020, Make No Little Plans: Acting At Scale For A Fairer And Stronger Future, UK2070 Commission Final Report


Philip McCann is Professor of Urban and Regional Economics in the University of Sheffield Management School.


Other posts from the blogged conference:

Technology as a Driver of Agglomeration by Diane Coyle

Urban Agglomeration, City Size and Productivity: Are Bigger, More Dense Cities Necessarily More Productive? by Ron Martin

The Institutionalization of Regional Science in the Shadow of Economics by Anthony Rebours

Cities and Space: Towards a History of ‘Urban Economics’, by Beatrice Cherrier & Anthony Rebours

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche


Economists in the City #6

Technology as a Driver of Agglomeration

by Diane Coyle

Urbanisation has for centuries been a marker of economic development, and it was Alfred Marshall who provided the basic economic analysis of the forces of agglomeration in his 1890 Principles of Economics. Yet economic research into cities and agglomeration – into the geography of the economy – has revived significantly since the late 1990s, including work by prominent economists such as Ed Glaeser, Paul Krugman and Tony Venables.

For most of the mid-20th century economics largely lived up to its caricature as a discipline analysing the world in terms of atomistic optimising individuals in linear models, and paid decreasing attention to the specifics of history or geography. The profession rewarded the ability to manipulate mathematical models while steadily dropping from the curriculum the requirement to study the world in all its untidy detail. So what was the reason for the 1990s renewal of interest in agglomeration, the spatial distribution of economic activity? Digitalisation was starting to change the dynamics of the economy in the 1990s. When I started writing about the economic and social effects of digital technologies around the same time, it seemed clear to me that the forces driving urbanisation would intensify (although others predicted the opposite effect, the loosening of geographical ties or ‘death of distance’). Marshall’s original explanations for the concentration of activity in the same places – closeness to market, depth of the labour market and proximity to ideas – still stood but the importance of exchanging ideas was growing as the role of high value added services and intangible (‘weightless’) activities in the economy expanded.

In these ideas-driven activities, Michael Polanyi’s tacit knowledge looms large. In any new domain of activity it anyway takes some time for the information needed to operate a new machine or process, say, to become systematic enough to be codified – written down in instructions that someone else can follow – as James Bessen describes in his outstanding book Learning By Doing. This is the situation now with areas such as AI and big data; although the computational and data handling processes are central, operating them is still a craft skill, passed on between practitioners. Moreover, when it comes to ideas-based work in general, it is difficult-to-impossible to pass on know-how without conversation – although the pandemic is an enforced test of whether improved videoconferencing can finally substitute for face-to-face contact (as Richard Baldwin predicts).

Source: Unsplash, Unsplash License.

As economists rediscovered the importance of place, the importance of history also re-emerged, again prompted by the arrival of the new technologies. History is the source of evidence about the economic impact of the periodic arrival of general-purpose technologies with wide application such as digital or AI, so researchers started looking back to the Industrial Revolution or even the printing press. An influential example is Paul David explicitly comparing the diffusion and productivity effects of electrification and computerisation. More recent work has specifically highlighted the role of another 19th century technology, the steam train, in driving substantial urban agglomeration. And the role of ideas and technology in economic growth gained broader traction through Paul Romer’s endogenous growth theory.

It has taken some time, however, for the full implications of geographic agglomeration to filter through both economic research and particularly economic policy. The role of both historical and geographical context, of path-dependence in economic trajectories, of the dynamics of self-fulfilling processes, stand in contrast to the (mainly) linear and context-free tradition of economics for much of its 20th century practice. This has recently started to change, driven perhaps by growing understanding of digital dynamics (or by the overlap with the dynamics of natural systems in environmental economics) with a new focus in economic research on increasing returns, network effects and tipping points.

Evidence has also underlined the need to take agglomeration seriously. The growth of global cities has been obvious. Patricia Melo finds that productivity drops off with distance from city centres. Influential research by Enrico Moretti and David Autor among others indicates that in the US the big city lead is accelerating: the often-observed occupational and income polarization has a geography.

However, the policy implications of the polarising, snowball-type dynamics of an increasingly digital economy have taken some time to become clear. The first iteration in policy was probably the desire many city authorities had to become a locus for the ‘creative classes’, focusing on amenities and culture, or alternatively their competing bids to have science campuses or other high-skill magnets. A second reaction was the argument that it was pointless to resist the market dynamics, which would self-equilibrate by pushing up prices of housing and creating congestion in the attractive places.

Source: Unsplash, Unsplash License.

Both have some truth. Amenities and physical facilities can act as magnets. Markets will bring about some adjustments in behaviour. Neither is an adequate approach to policy in contexts where the big city/small town & rural divide is starting to have significant political consequences – for recent voting trends in many countries reflect to some extent the geography of economic division. To make matters even more urgent, the covid-19 pandemic is clearly amplifying existing societal inequalities of all kinds.

Yet there are no simple policy recipes in contexts of hard-to-predict non-linear and path-dependent dynamics. History casts a long shadow and points of inflexion depend on the interactions of many variables in a complex system. Changing the dynamics will require the alignment of a number of different policy interventions, just as a key needs to align all the tumblers in a lock before the door will open. One area of policy I have explored, with Marianne Sensier, is the use of cost-benefit analysis (CBA) for the appraisal of public transport investments. This policy tool, frequently used in the UK, uses place-specific land values and productivity measures to determine whether an investment is worthwhile, resulting in a strong bias towards approving investments in the already most productive places. The rationale is the wish to contribute as much as possible to national productivity, but of course it plays to the idea that agglomeration is a natural result of the way markets operate, and it reinforces spatial inequality within the nation. Nor can CBA methods take into account the counterfactual returns to an investment if other policies were put into effect at the same time: housebuilding, investment in amenities, and training alongside an upgraded commuter line, for example. Thinking about policies one by one rather than as a suite aiming to shift a system outcome cannot overcome the powerful technology-driven dynamics.

At a minimum, policymakers need a far more granular understanding of places other than the handful of high skill global cities. In the super-centralised UK, the availability of data at sub-national level has improved dramatically in recent years but we still know too little about the geography of supply chains or skills, although complexity theory and other innovative approaches are providing new insights.

The combination of the ‘levelling up’ policy agenda and behaviour change following the pandemic will surely make prospects outside the big urban agglomerations the focus of policy in the near future. But unless there is a reversal of the historical complementarity between technologies of communication and face-to-face contact, human proximity in major cities will continue to be the engine of economic growth. That means finding a way to make the forces of agglomeration deliver prosperity without polarisation will continue to be the analytical and policy challenge.


Diane Coyle is Professor of Economics and Public Policy at the University of Cambridge.


Other posts from the blogged conference:

Urban Agglomeration, City Size and Productivity: Are Bigger, More Dense Cities Necessarily More Productive? by Ron Martin

The Institutionalization of Regional Science in the Shadow of Economics by Anthony Rebours

Cities and Space: Towards a History of ‘Urban Economics’, by Beatrice Cherrier & Anthony Rebours

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche


Economists in the City #5

Urban Agglomeration, City Size and Productivity: Are Bigger, More Dense Cities Necessarily More Productive?

by Ron Martin

Economists and Cities

Over the past three to four decades, economists’ interest in cities has undergone an unprecedented expansion. The renaissance of urban economics (hence the moniker ‘new urban economics’) and the rise of the so-called ‘new economic geography’ (inspired especially by Paul Krugman) have together directed considerable theoretical and empirical attention to cities, how they function as economies and their importance as sources of economic growth and prosperity. This heightened focus on cities no doubt reflects the fact that, globally, the majority of people now live in cities, and that it is in cities that the bulk of economic activity is located, jobs are concentrated, and wealth is produced. And as the urbanist Jane Jacobs emphasised nearly forty years ago, cities are the main nodes in national and global trade networks.[1] Such is now the economic significance and success of cities that the leading urban economist Ed Glaeser has been moved to declare the ‘triumph of the city’, as the ‘invention that makes us richer, smarter, greener, healthier and happier’.[2]

Whilst geographers have long studied cities, not just as economic entities but also as arenas of social and cultural life, it has been the work of these urban and spatial economists that has attracted the attention of policymakers, in large part, one suspects, because of the seeming formal rigour of the models that many of these economists have used to guide and underpin their analyses of cities, and their deployment of those models to derive policy implications. The equilibrist nature of many of these models – whereby the concentration of economic activity into cities is an equilibrium outcome of market-driven economic processes – also probably appeals to policy-makers, since policies can then be justified if they work to assist the operation of those market processes and help to overcome any ‘market failures’.

The Mantra of Agglomeration

If there is one aspect of this body of economists’ work on cities that has become a dominant theme, it is that of agglomeration. Indeed, the notion has assumed almost hegemonic status in the new urban economics, the new economic geography and other related branches of spatial economics, in that the agglomeration of firms and skilled workers in cities is assumed to be the driver of various positive externalities and increasing returns effects (such as knowledge spillovers, local supply chains, specialised intermediaries, and pools of skilled labour) that in turn are claimed to be instrumental in fostering innovation, productivity, creativity, and enterprise.  Further, according to this chain of reasoning, other things being equal, the larger a city is, or the greater is the density of activity and population in a city, the more powerful and pervasive are the associated agglomeration externalities: bigger is better, and denser is better.

Policymakers have eagerly seized on this sort of argument. Almost every policy statement and strategy for boosting local and regional economic performance – whether emanating from central UK Government policymakers, from the local and city policy community or from the policy reports prepared by ‘think tanks’ and consultancies – sooner or later singles out boosting ‘agglomeration’ as a key imperative. It has almost reached the point where the agglomeration argument has become unassailable, a conventional wisdom that is all but taken for granted, and unquestioned.  Voices of dissent are either ignored or dismissed (typically for not being based on the sort of formal models used by the exponents of the agglomeration thesis).  In many respects, agglomeration theory has become protected by ‘confirmation bias’, where empirical support (often selective) is exaggerated and evidence that runs counter to or which fails to confirm the theory is discounted.[3]

City Size and Productivity

One of the issues that illustrates this state of affairs is the claim that city size (and hence greater agglomeration) promotes higher productivity.[4] This is a particularly pertinent claim with respect to economic debates in the UK because of the current policy concern over the stagnation of national productivity, in general, and the low productivity of many of the country’s northern cities, more specifically.

A number of studies of the advanced economies – the UK included – have been conducted to estimate how productivity increases with city size, typically involving cross-section models that regress the former on the latter (with or without control variables). The size of the effect of city size on city productivity from these studies is, however, modest to say the least. Typically, a doubling of city size is estimated to be associated with an increase in the level of productivity of between 2 and 5 percent (see, for example, Ahrend et al, 2014; OECD, 2020[5]). To put these sorts of estimates into the UK context, compare, for example, London and Manchester. In 2016, London’s nominal labour productivity (as measured by GVA per employed worker) of £57,000 was 50 percent higher than Manchester’s £38,000. So, if we assume an elasticity of 5 percent, a doubling of Manchester’s population (or employment) – even if that was achievable and desirable – would only raise Manchester’s productivity to around £40,000. This hardly represents a major ‘levelling up’ towards the level of London, the aim of current Government policy.
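To make the back-of-envelope step explicit (a sketch using the figures quoted above, where ε is the estimated proportional productivity gain from a doubling of city size):

\[
\text{productivity}(2N) \approx (1 + \varepsilon)\,\text{productivity}(N), \qquad \varepsilon = 0.05: \quad \pounds 38{,}000 \times 1.05 \approx \pounds 39{,}900 \approx \pounds 40{,}000.
\]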

Figure 1: Productivity and City Size, 2015. Key to Cities: 1-London, 2-Birmingham, 3-Manchester, 4-Sheffield, 5-Newcastle, 6-Bristol, 7-Glasgow, 8-Edinburgh, 9-Liverpool, 10-Leeds, 11-Cardiff.

In fact, the evidence for the UK suggests that the city size argument should be viewed with caution. Using the estimates of labour productivity for some 85 British cities defined in terms of travel-to-work areas, the (logged regression) relationship between labour productivity (GVA per employed worker) and city size is small: a doubling of city size is associated with a mere 4 percent higher productivity (Figure 1).[6] London, as the largest and most dense city, has the highest labour productivity. However, the next largest cities – Manchester, Birmingham, Glasgow, Sheffield, Newcastle, Edinburgh, Bristol and Nottingham – all have much lower productivity, in fact below the national average. Some of the highest productivity cities after London – such as Reading, Milton Keynes, Swindon and Oxford – are all much smaller in size or less dense as urban centres. If we measure agglomeration in terms of the density of employment, as many economists would argue we should, the association is even smaller: a doubling of density yields only a 2.5 percent increase in city productivity (Figure 2). Clearly, factors other than size or agglomeration alone are at work in influencing a city’s labour productivity.[7]

Figure 2: Productivity and City Density, 2015. Key to Cities:  As in Figure 1.
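For readers used to elasticity language, the ‘logged regression’ referred to above has the standard log-log form sketched here (my rendering for illustration, not the exact specification behind Figures 1 and 2); a 4 percent gain per doubling of city size corresponds to a very small elasticity β:

\[
\ln(\text{GVA per worker}_i) = \alpha + \beta \ln(\text{city size}_i) + u_i, \qquad 2^{\beta} = 1.04 \;\Rightarrow\; \beta = \frac{\ln 1.04}{\ln 2} \approx 0.057.
\]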

What Makes London So Different?

Nevertheless, the agglomeration argument has proved tenacious. Both academic economists and many policymakers have argued that the productivity gap between London and major northern UK cities is not that London is exceptional but that major northern cities are too small.  Advocates of this view have invoked Zipf’s law of city sizes, sometimes known as the rank-size rule. This rule states that the population of a city is inversely proportional to its rank. If the rule held exactly, then the second largest city in a country would have half the population of the biggest city; the third largest city would have one third the population, and so on. Put another way, according to Zipf’s law, if we plot the ranks of a country’s cities against their sizes on a graph, using logarithmic scales, then the line relating rank to population is downward sloping, with a slope of -1. Appealing to this law, Overman and Rice (2008), for example, argue that while medium-sized cities in England are, roughly speaking, about the size that Zipf’s law would predict given the size of London, the largest city, the major second-tier cities in the north of the country all lie below the Zipf line and hence are smaller than would be predicted.[8] This is assumed to mean that they lack the agglomeration effects that London enjoys.
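In formal terms, the rank-size rule says that the population P_r of the city of rank r satisfies

\[
P_r = \frac{P_1}{r}, \qquad \text{equivalently} \qquad \ln P_r = \ln P_1 - \ln r,
\]

which is where the slope of -1 on the logarithmic rank-size plot comes from: the second city has half the population of the largest, the third a third, and so on.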

The empirical evidence for Zipf’s law is, however, highly varied internationally (see Brakman, Garretsen and van Marrewijk, 2019).[9] Further, there is no generally accepted theoretical economic explanation of Zipf’s law, nor does the ‘law’ tell us how far a city can fall below the rank-size rule line before it is deemed ‘too small’, or how this will affect its economic performance. Perhaps more seriously, Zipf’s law does not in fact hold for a national urban political-economic system of the sort that characterises the UK.

As Paul Krugman (1996) argues, while the Zipf relationship holds fairly closely for the cities of the United States, and has done so over a long period of time, indicating a pattern of equal proportionate growth across the urban system, this is not necessarily the case elsewhere:

Zipf’s law is not quite as neat in other countries as it is in the United States, but it still seems to hold in most places, if you make one modification: many countries, for example, France and the United Kingdom, have a single ‘primate city’ that is much larger than a line drawn through the distribution of other cities would lead you to expect. These primate cities are typically political capitals: it is easy to imagine that they are essentially different creatures from the rest of the urban system. (Krugman, p. 41, emphases added).[10]

London is indeed a ‘different creature’ from the rest of the UK’s urban system. Not only is it the national capital, but it is also a major global centre, and its development is likely to reflect the benefits of that role and to be less linked to (even significantly decoupled from) the rest of its national urban system.[11] These observations suggest that in such cases it makes little real economic sense to argue that second-tier cities below the primate capital city are ‘too small’ relative to what the rank-size rule would predict, since the size of the capital itself reflects national political and administrative roles and factors in addition to the purely economic.

As the nation’s capital, London has long benefited from being the political centre of one of the most centralised states among the OECD countries. It contains the nation’s main financial institutions and markets (which historically were much more regionally distributed), the main organs of Government policy-making, a large number of headquarters of major corporations, and the largest concentration of top universities, and it enjoys a high degree of policy autonomy relative to other UK cities. This has meant that it attracts much of the talented and skilled part of the country’s workforce.[12] In recent years, it has also benefited from a disproportionate share of major infrastructural investment. Under these circumstances, it is hardly surprising that it has high productivity. Yet the high productivity of several much smaller cities suggests that size and agglomeration are not everything.

Beyond the Agglomeration Credo

This is not to say that agglomeration is irrelevant – clearly all cities, of whatever size, benefit to a greater or lesser extent from the local concentration and proximity of workers, firms and infrastructures. But aiming to improve the productivity of Britain’s northern cities by substantially expanding their size or density may be neither necessary nor sufficient as a strategy. The relevant question to pose is why smaller, less dense cities are more productive, and what policy lessons might be learned from their experience. What a city does is obviously important, not just in sectoral terms, but also in terms of functions and tasks (and hence roles in domestic and international supply networks and chains). So, relatedly, is its export base. Further, and crucially, its innovative capacity; its ability to produce and retain highly educated and skilled workers; its levels of entrepreneurship; the quality and efficiency of its infrastructures; and the extent of its decentralised powers of economic governance are all of key importance. Cities outside London have for decades scored poorly on such factors. But these drivers of productivity cannot simply or solely be reduced to ‘insufficient agglomeration’. The ‘Northern Powerhouse’ cities once formed polycentric regional systems of innovative and competitive, export-orientated manufacturing. They have lost that role through sustained deindustrialisation, in part because of globalisation and technological lock-in, and in part because of spatial biases in national economic policy and management that favoured London and ignored manufacturing. Finding a new role for Britain’s northern cities will be key to their economic renaissance. Cities do not have to be big or dense to succeed, but they do have to be adaptive, dynamic and endowed with appropriate powers of self-determination.[13] Theorising and understanding economic adaptability might yield greater policy dividends than yet more theorising and promotion of agglomeration.


[1] Jane Jacobs (1985) Cities and the Wealth of Nations: Principles of Economic Life, New York: Vintage Books.

[2] Ed Glaeser (2011) The Triumph of the City: How Our Greatest Invention Makes Us Richer, Smarter, Greener, Healthier and Happier, London: Macmillan.

[3] The negative effects of increasing size and density – such as the diseconomies of increased congestion, pollution, travel time, and land and housing costs – are infrequently given the empirical attention they deserve. It is also significant that in surveys of quality-of-life satisfaction, large cities often score less well than smaller cities and towns. London reports some of the lowest average life satisfaction in the UK.

[4] See, for example, Glaeser, E. (2010) Agglomeration Economics, Chicago: University of Chicago Press; Glaeser, E. (2011) The Wealth of Cities: Agglomeration Economies and Spatial Equilibrium in the United States, NBER Working Paper 14806; Combes, P., Duranton, G., Gobillon, L., Puga, D. and Roux, S. (2012) The Productivity Advantages of Large Cities: Distinguishing Agglomeration From Firm Selection, Econometrica, 80, pp. 2543-2594; Ahrend, R. et al (2017) The Role of Urban Agglomerations for Economic and Productivity Growth, International Productivity Monitor, 32, pp. 161-179.

[5] Ahrend, R., et al. (2014) What Makes Cities More Productive? Evidence on the Role of Urban Governance from Five OECD Countries, OECD Regional Development Working Papers, No. 2014/05, OECD Publishing, Paris. OECD (2020) The Spatial Dimension of Productivity: Connecting the Dots across Industries, Firms and Places,  OECD Regional Development Working Papers 2020/1, Paris: OECD.

[6] For a comprehensive study of the productivity performance of British cities over the past half century, see Martin, R., Gardiner, B., Evenhuis, E., Sunley, P. and Tyler, P. (2018) The City Dimension of the Productivity Puzzle, Journal of Economic Geography, 18, pp. 539-570.

[7] Nor does the spatial agglomeration or clustering of individual firms in the same or related industries necessarily increase their productivity, another conventional wisdom in the business and economics literatures (see Harris, R., Sunley, P., Evenhuis, E., Martin, R. and Pike, A. (2019) Does Spatial Proximity Raise Firm Productivity? Evidence from British Manufacturing, Cambridge Journal of Regions, Economy and Society, 12, pp. 467-487).

[8] Overman, H. and Rice, P. (2008) Resurgent Cities and Regional Economic Performance, SERC Policy Paper 1, London School of Economics.

[9] Brakman, S., Garretsen, H. and van Marrewijk, C. (2019) An Introduction to Geographical and Urban Economics: A Spiky World, Cambridge: Cambridge University Press.

[10] P. Krugman (1996) The Self-Organizing Economy, Cambridge, Mass.: MIT Press.

[11] For an interesting analysis of how decoupled London has become from the rest of the UK economy, see Deutsche Bank (2013) London and the UK economy: In for a penny, in for a pound? Special Report, Deutsche Bank Markets Research, London.

[12] As Vince Cable, when Secretary of State for Business in the Coalition Government of 2010, put it: “One of the big problems that we have at the moment… is that London is becoming a kind of giant suction machine, draining the life out of the rest of the country.” (Cable, V. 2013, London draining life out of rest of country). Cable was in fact merely echoing a similar view expressed 75 years earlier by the famous Barlow Commission report on rebalancing Britain’s economy: “The contribution in one area of such a large proportion of the national population as is contained in Greater London, and the attraction to the Metropolis of the best industrial, financial, commercial and general ability, represents a serious drain on the rest of the country” (Barlow Commission, 1940, Royal Commission on the Distribution of the Industrial Population, London: H.M. Stationery Office).

[13] Martin, R. L. and Gardiner, B. (2017) Reviving the ‘Northern Powerhouse’ and Spatially Rebalancing the British Economy: The Scale of the Challenge, in Berry, C. and Giovannini, A. (Eds) Developing England’s North: The Political Economy of the Northern Powerhouse, London: Palgrave Macmillan, pp. 23-58.


Ron Martin is Professor of Economic Geography at the University of Cambridge.


Other posts from the blogged conference:

The Institutionalization of Regional Science: In the Shadow of Economics, by Anthony Rebours

Cities and Space: Towards a History of ‘Urban Economics’, by Beatrice Cherrier & Anthony Rebours

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche


Citizen Science in a Pandemic: A Fleeting Moment or New Normal?

Text by Katie Cohen.


The current pandemic has in many ways brought the world to a sudden halt. Across the globe many are unable to work, children can’t go to school and the ways in which we used to socialise are no longer safe. Instead, we are trying to engage with the outside world while staying socially distanced from it. One interesting and unintended consequence of this drastic change to our daily lives has been an increase in people’s engagement with citizen science.

Since the end of March, when the UK and most countries across the globe went into lockdown, citizen science platforms such as Zooniverse and SciStarter have seen a surge in projects, apps, and participant activity. Zooniverse, for instance, reported that 200,000 participants contributed over 5 million classifications in one week alone, the equivalent of approximately 48 years of research. It seems that teachers, students and even researchers have jumped at the opportunity to receive help with homeschooling and contribute to research programmes. Old and new platforms for citizen scientists have also received increased media coverage, with one plug from The Conversation to ‘Ditch the news cycle—engage, gain skills and make a difference’ and a call for ‘anyone itching for a bit of escapism’ to try citizen science in the Guardian.

This heightened engagement with citizen science has also extended to new projects related to Covid-19, as people are clearly eager to help tackle the global crisis. During this period of piqued interest in citizen science, I want not only to take a closer look at the types of activities that are emerging and expanding but also to reflect on the relationship between citizen scientists, experts and policymakers in our pre- and post-pandemic world. What makes this present moment unique and what lessons might it bring to bear on future collaborations between these three groups? Are efforts to engage citizens in tackling the virus harnessing people’s interest to further science as it is practiced, framed and understood by experts? Are the citizen science experiments emerging during this time democratising and pluralising science? I am interested in how the current flux in citizen engagement with science may persist beyond lockdown, but I will also consider how top-down science and decision-making processes still seem to foreground participatory efforts to tackle Covid-19.

How are citizen scientists contributing to Covid-19 research?

Covid-19 presents an especially interesting policy problem because it relies so heavily on population data and mutual trust between citizens, experts and decision-makers. This problem, while not altogether unique, seems to have contributed to the pronounced effort to utilise citizen science approaches for tackling Covid-19. During this period of uncertainty and isolation, logging symptoms and mental health impacts and tracking movement have not only helped experts and policymakers better understand the course of the pandemic but have also given participants a sense of agency. Helpful lists such as the Citizen Science Association’s Covid-19 resources have made it simpler to discover ways to engage, and the participation rates reflect an eagerness to do so.

The BBC Pandemic App foreshadowed the types of citizen science efforts we have seen emerge since the spread of Covid-19 began. A project which ran from September 2017 to December 2018, BBC Pandemic was the largest citizen science experiment of its kind and aimed to help researchers better understand how infectious diseases like the flu can spread in order to prepare for the next pandemic outbreak. Participants furthered this mission by contributing data about their travel patterns and interactions. With this data the researchers involved were able to simulate the spread of a highly infectious flu across the UK, and the database is listed as one of the models supporting the government’s response to Covid-19 on the Scientific Advisory Group for Emergencies (SAGE) website.

Over two years after the study’s conclusion, institutions around the world scrambled to initiate similar studies to better understand the novel coronavirus. The Covid Symptom Tracker has proven the most widespread citizen science effort to track Covid-19 in the UK. Professor of Genetic Epidemiology Tim Spector from King’s College London originally teamed up with technologists Jonathan Wolf and George Hadjigeorgiou to launch a startup called ZOE, which conducted studies on twins and nutrition. When coronavirus hit the UK, the ZOE team acted ‘with a sense of extreme urgency’ to adapt the app to track coronavirus symptoms. The app went live on Tuesday 24 March and by the next day had over one million downloads in Britain. A collaboration between NHS England and researchers at King’s College London, the app was also endorsed by the Welsh Government, NHS Wales, the Scottish Government and NHS Scotland. At its core, however, it is a large-scale effort to gather data to be analysed by researchers and then delivered to the NHS and policymakers so that they can make informed decisions.
 


Geographical spread of participants reporting their status as of 26 March 2020. Data source: https://covid.joinzoe.com


Funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering, University of California, San Francisco’s (UCSF) COVID-19 Citizen Science (CCS) has also empowered users to share in the fight against the virus. Popping up in Facebook and Instagram advertisements, the mobile health study has garnered support from people around the world (see map below). The app also offers the option for participants to provide nearly continuous GPS data and potentially additional health data, such as body temperature, exercise, weight and sleep. In late April, Northwestern University and the American Lung Association announced they would be partnering with UCSF in an effort to increase the number of participants and improve chances of generating useful results. The investigators have also more recently invited citizen scientists to submit their own research questions. Receiving more than two thousand ideas, they will soon add these participants’ questions one at a time to the study’s survey.

Points representing CCS participants worldwide. Data source: Covid-19 Eureka platform.
 

Other citizen science experiments have engaged users more actively in Covid-19 research. For instance, researchers at the University of Washington have used the free computer game Foldit as a platform for citizen scientists to contribute to Covid-19 drug discovery efforts. Developed at the university in 2008, Foldit has previously been used to help scientists in cancer and Alzheimer’s research, but has seen a pronounced increase in activity since the Covid-19 outbreak. Although it is US-based, the programme has gained traction more widely with the help of promotion by EU-Citizen.Science, and participants across the globe are competing to solve protein puzzles online. Tasked with designing proteins digitally that could attach to Covid-19 and block its entry into cells, participants are aiding the development of antiviral drugs that could ameliorate patients’ symptoms. Researchers involved have found crowdsourcing a helpful tool because of the creativity each person brings to the task.


The 99 most promising of the 20,000 potential Covid-19 antiviral proteins generated by citizen scientists through Foldit that University of Washington researchers plan to test in the lab. https://www.hhmi.org/news/citizen-scientists-are-helping-researchers-design-new-drugs-to-combat-covid-19


A new EU initiative has also invited citizens ‘to take an active role in research, innovation and the development of evidence-based policy on a range of coronavirus-related projects.’ Supporting a range of citizen science and crowdsourcing programmes, the platform is significant in its broad endorsement of citizen scientists’ contributions not only to research efforts but also to the policy process. Advocating for the use of another symptom-reporting tool, Flusurvey, developed at the London School of Hygiene and Tropical Medicine and monitored by Public Health England (PHE), the platform is helping to boost responses to existing citizen science efforts as well as to publicise the benefits of citizen science approaches more generally.

Closer to home, Cambridge Judge Business School students organised a University-wide 72-hour virtual hackathon, #CAMvsCOVID, on the weekend of 1 – 4 May 2020: ‘One global challenge. One weekend. Your solutions.’ Teams were tasked with drafting ‘a novel response to a pressing problem in the battle against COVID-19,’ with the explicit brief that even coded solutions must consider the societal context. This solutions-focused approach to crowdsourcing harnessed the creativity of the Cambridge ecosystem in a way that other experiments have not; participants were empowered not only to gather data and contribute their research skills, but also to generate potential policy proposals for review. We do not yet know what will come of the ideas generated through this exercise, but it will be interesting to see what emerges.

In what ways does citizen science in a pandemic look different?

Social isolation has prompted many who would otherwise not have done so to engage with citizen science. The urgency of the problem and the scale of disruption have set Covid-19 apart from other policy problems with which citizen scientists might generally engage. However, it seems to present a timely opportunity to think about who holds relevant knowledge for public policy and how different forms of knowledge are shared in tackling policy problems.

Despite the complexity of every policy decision made over the past three months, most of us can agree that saving lives and returning to a sense of normalcy were top priorities at the start of lockdown. This unification of goals seems to have set the citizen science experiments detailed above apart from others of their kind. Environmental citizen science programmes covering questions of climate change, air pollution and biodiversity loss have seen the largest growth in citizen science over the past decade, but they have also presented challenges. Lack of urgency, competing priorities, differing lived experiences and sources of information often create conflicting desires between, say, bird-monitoring volunteers and members of the European Council on conservation efforts.

With the outbreak of Covid-19, most agreed that we needed to track the virus, understand the science better, develop approaches to containing its spread, support the health system and discover treatments and vaccines to combat it in the future. There was a strong sense of urgency, priorities were more aligned, we recognised these were unprecedented times, and there may also have been a greater desire to learn from each other. Epidemiologists’ knowledge and experience differ from those of participants inputting symptoms into an app, and both differ from those of policymakers charged with making decisions on behalf of their constituents; yet these differences seem to have been largely outweighed by the alignment of goals and priorities. Although this has continued to evolve throughout lockdown, these initial conditions enabled greater cooperation and manifested in a proliferation of citizen science experiments.

Differing experiences, information and agendas also breed mistrust, which too often impedes successful collaboration between citizen scientists, experts and policymakers. Can citizen scientists be confident their contributions will be used in their best interest? Will their voices be included in the decision-making process? Can experts and policymakers ensure that citizens provide unbiased, accurate data? The global priority to fight the virus from the outset seemed to unify those opting to engage as citizen scientists. The magnitude, scale and consequences of Covid-19 potentially bred a mutual dependence and, in some cases, deference between citizens, research and policy. Amassing data and securing help are crucial if governments and scientists are to meet expectations, and citizen scientists will better help themselves by providing accurate and constructive contributions. Trust in the value of citizen science may have been born out of obligation rather than desire during the pandemic, but it is seemingly there.

However, desire and obligation were bound to shift as we moved forward. As Elizabeth Anderson commented in her expert bite with the Expertise Under Pressure Team, the issue of trust does not disappear in the context of Covid-19. Trust in shared motives and goals has wavered increasingly as lockdown extends and restlessness grows, and we have yet to find out the consequences of this shift for citizen science.

What does the future hold for citizen science post-pandemic?

As our daily lives gradually come to look more and more like they did before Covid-19, will interest in citizen science dwindle too? There’s no way to know for sure, but I think many will agree that our lives are unlikely to pick up where they left off and the impacts of the pandemic will linger long after the number of cases falls to zero. Although we may in fact be living through a fleeting flux in citizen engagement with science, here are some thoughts on why it may persist:

Support: This new wave of citizen science has clearly seen increased involvement and support from governments, medical institutions and charities involved in the fight against Covid-19. The launch of EU-Citizen.Science this year has also been by no means a negligible development, as the platform has served to support Covid-19-related citizen science efforts as well as to share insights about the potential of citizen science. Although the initiative was set in motion prior to the outbreak, it has been ignited by interest in tackling Covid-19 and could sustain those audiences long after the pandemic ends.

Breaking the ice: Motivation to help, extra time and even boredom may be contributing to the increase in citizen scientists’ participation. Desperation and pressure might have caused policymakers to become more open to citizen engagement. However, maybe the unusual circumstances under which the shift occurred are less important than the shift itself. 

Funding: The announcement of UK Research and Innovation’s (UKRI) new £1.5 million Citizen science collaboration grant is another lockdown development that could help shape future directions in citizen science. Funding has long been a barrier to the field. Perhaps the successes of Covid-19 initiatives will prompt more serious consideration of citizen science’s merits and continue to provide a case for support.

Whether or not increased citizen engagement with science continues beyond lockdown, the nature of more open knowledge sharing between citizen scientists, experts and policymakers during the pandemic is also important to consider. Although we have seen increased trust, cooperation and collaboration, the parameters of scientific inquiry and policy agendas have still largely been set by academic institutions and governments. Rather than enabling citizens to provoke science as usual or express political agency, as some forms of citizen science do, the Covid-19 experiments outlined in this blog have predominantly provided platforms for participants to contribute their knowledge to expert-led programmes. The proliferation of participatory initiatives may help to pave the way for more dynamic and experimental citizen science in the future, but perhaps this more fundamental shift in how citizen scientists, experts and policymakers share knowledge is still in the making.


Katie Cohen is a Research Assistant for the Expertise Under Pressure project at CRASSH and at the Centre for Science and Policy (CSaP).

Thumbnail image source:  https://covid.joinzoe.com


Economists in the City #4

The Institutionalization of Regional Science

In the Shadow of Economics

by Anthony Rebours

The history of regional science offers an interesting case study, as well as one of the few examples, of the institutionalization of an entirely new scientific field in the years after 1945. Its foundation by Walter Isard and a group of social scientists in the 1950s represents the most institutionalized attempt to stimulate the relationship between economics and geography. The original project of Isard, who was trained as an economist at Harvard, was to promote the study of location and regional problems.

And at the outset, regional science was, in various ways, a success. It attracted many scholars from different disciplines – mostly economics, geography and urban/regional planning – and it quickly became institutionalized formally through the foundation of the Regional Science Association (RSA) in 1954 and the establishment of a Regional Science Department at the University of Pennsylvania in 1958. At the same time, the creation of the Papers and Proceedings of The Regional Science Association in 1955 and of the Journal of Regional Science in 1958 offered new publication venues for scholars interested in location analysis, in particular quantitative geographers who found it difficult to publish in traditional geography journals. Within economics, regional science influenced analytical work in urban economics: William Alonso’s thesis, widely recognized as one of the foundational works of urban economics, was written at Penn under the supervision of Isard in 1960.

However, the prevailing processes of knowledge production and evaluation which shaped the emergence of this new field were deeply influenced by economics. Geographers became dissatisfied with Isard’s vision of a hierarchical division between geographers and economists, and with the primacy given to economic theorizing and modelling as the core of the new regional science. Thus, the social organization of the field of regional science and its interactions with other disciplines mirrored the particularity of economics: a hierarchical discipline organized around a strong theoretical core and insulated from the rest of the social sciences. In this short article, I discuss the findings of an analysis I have conducted of the contents of the main journal of the field – the Journal of Regional Science – and of associated archival materials, in order to shed light on the ways in which this field was institutionalized.

The emergence of regional science as a field of study

Regional science emerged in a particularly favourable context. In the US, the impetus for studies of regional development, which began during the 1930s supported by the success of the Tennessee Valley Authority program, persisted after the war. The Second World War and the ensuing Cold War provided new opportunities for the development of scientific research, with an unprecedented increase in funding, student enrolment and collaboration between academics and external bodies, such as military institutions. This period confirmed economists’ aspirations to be treated as scientists, and resulted in the increasing prevalence of statistical methods and mathematical modelling, along with the accompanying theory of rational, maximizing agents.

In 1942, Walter Isard obtained his doctoral degree from the Economics Department at Harvard under the supervision of Alvin Hansen, who was attached to the National Resources Planning Board, and Abbott Usher, who taught him about the German tradition of location analysis. There, and during a graduate fellowship at Chicago, he encountered other leading economists, such as Edward Chamberlin, Joseph Schumpeter, Jacob Viner, Frank Knight and Oscar Lange. At the end of the war, Isard produced a series of research articles in which he offered conventional economic analyses of production costs bearing on the regional implications of the development of the airline industry and the atomic energy industry, and on the future location of the iron and steel industry. At the time, these industries were considered particularly important for national security and economic development.

In the late 1940s, Isard became increasingly concerned about the lack of interest among economists in the location of economic activities. His perception of the subject was not really different from that of his colleagues, but he wanted to improve the theory they used, which, following the British tradition of the late 19th century, lacked a spatial dimension. He did not seek to challenge the general equilibrium theory that was becoming dominant, but sought instead to integrate a spatial dimension within it.

He started to be more active in the promotion of location analysis but failed to convince the American Economic Association (AEA) to organize sessions on regional topics at its annual conventions. In 1949, Isard was recruited to Harvard by Wassily Leontief to develop an input-output approach to regional development. During the war, input-output analysis had received much attention because it enabled the American Air Force to identify the best targets for bombing. As a consequence, Leontief had received large research funds to develop his input-output framework. Drawing on Leontief’s financial resources, Isard was able to organize a series of multi-disciplinary sessions on regional research at meetings of various social science associations between 1950 and 1954. An informal newsletter was also created to disseminate the discussions and papers presented at the meetings.

In 1949, at Harvard, Leontief also persuaded the faculty to create a new course on location theory in the Economics Department, which Isard would teach. In the course, Isard promoted the same kind of research he was doing on the Leontief project and that he would continue to conduct and support after having formally established regional science. In a context where a large influx of war veterans was returning to Harvard to complete their graduate studies, Isard managed to gather around him a core of young scholars to contribute to this work.

The institutionalization of regional science

In 1954, after four years of informal meetings and discussions, the Regional Science Association was officially created during a meeting held jointly with the American Economic Association and the American Social Sciences Association. Sixty participants from different disciplines – economics, geography and planning being the largest contingents – as well as organisations like the RAND Corporation and Resources for the Future, were present. The papers presented were published in the first issue of The Papers and Proceedings of the Regional Science Association, which was established at the same time. In 1956, Isard opened the first PhD program in regional science at Penn’s Wharton School and, in 1958, the first Department of Regional Science. The same year, the Journal of Regional Science, the future leading journal of the field, was founded.

These events were key to the institutionalization of the field, and reflected the thinking of Isard and his colleagues about the main focus and boundaries of regional science: an amalgam of diverse approaches to the study of regional and spatial issues, drawing on different disciplines, in particular economics and geography, with a strong emphasis on the kind of analytical and statistical methods Isard had learned at Harvard and from his work with Leontief.

In what follows, I look more closely at the constituent parts of the new discipline.


Figure 1. Journal co-citation network 1958-1967 (2 or more co-citation links); the size of the nodes and links is proportional to the number of links. Source: Web of Science.

The centrality of economics for regional science is clearly visible in Figure 1, which maps the network of co-citations for articles published in the Journal of Regional Science (JRS) between 1958 and 1967. The co-citation technique allows us to measure the conceptual proximity of the different journals cited in the papers of the JRS. The technique is complemented by the use of a community detection algorithm in order to identify coherent sub-groups that have stronger links with each other than with the rest of the journals. The most striking feature of this mapping is the clear separation between economics journals and geography journals. While some leading geography journals, such as the Geographical Review and the Annals of the Association of American Geographers, were among the most cited in the network (respectively the fourth and fifth most cited), geography journals occupied a peripheral position, along with journals of other disciplines like sociology (green). More surprising is the relative prominence of psychology journals (blue), which were more strongly associated with economics journals (purple) and represented the second most cited discipline of the period. In the next decade, however, the situation completely changed, and psychology journals received less than 1% of the total citations in 1968-1977 (Table 1). Over the whole period, 1958 to 1977, geography is the second most cited discipline, but with only 13.4% of the total citations, far behind economics with 55.2%.
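For readers unfamiliar with the method, here is a minimal sketch of how such a journal co-citation network can be built and partitioned. The three reference lists are invented placeholders for the Web of Science records, and greedy modularity maximisation stands in for whichever community detection algorithm was actually used:

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each entry lists the journals cited together by one JRS article (invented).
articles = [
    ["Econometrica", "American Economic Review", "Geographical Review"],
    ["Econometrica", "American Economic Review"],
    ["Geographical Review", "Annals of the Association of American Geographers"],
]

G = nx.Graph()
for refs in articles:
    for a, b in itertools.combinations(sorted(set(refs)), 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)   # co-citation strength

# Sub-groups more strongly linked internally than to the rest of the network.
for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```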


Table 1. Disciplines cited in the Journal of Regional Science (two percent of citations or more). Source: Web of Science, using the National Science Foundation disciplinary categorization of journals.

This first result is consistent with the idea, expressed by Isard in Location and Space-Economy (1956), of a hierarchical division between economists, who provided the analytical foundations of regional science, and geographers, who provided the empirical facts and testing. Another way to confirm this asymmetrical relationship between economics and geography is to compare the disciplines most cited in the JRS (Table 1) with the disciplines that cited the JRS most (Table 2). While geography journals were by far the ones that cited the JRS most, providing 44.1% of the total citations to it, they received only 10.7% of the citations within it. Economics journals, by contrast, provided only 12.6% of the total citations to the JRS but were the most cited within it, at 41.5%. Regional science, thus, was more important for geographers than for economists, while the reverse was not true: economics was more important for regional scientists than geography. This result is also consistent with the idea that the quantitative turn in geography and the emergence of regional science were closely associated.

Table 2. Disciplines citing the Journal of Regional Science (two percent of citations or more). Source: Web of Science, using the National Science Foundation disciplinary categorization of journals.

These trends persisted in the following period (1968-1977). The size of the network in Figure 2 reflects the increase in the JRS’s publications over the period, with more issues per year after 1969. The separation between the cluster of economics journals and that of geography journals persisted. Moreover, despite an increase in the proportion of citations to geography journals, economics became even more important in the network and the difference in size with geography widened (Table 1). On the other hand, regional science became even more important for geography, as the discipline accounted for 50% of the citations to the JRS in the period, while the JRS also received more citations from economics journals in this period than in the preceding one (Table 2). More generally, the data show that economics and geography were the two most important disciplines for the authors who published in the JRS between 1958 and 1977, a trend that continued thereafter.

Figure 2. Journal co-citation network 1968-1977 (4 or more co-citation links); the size of the nodes and links is proportional to the number of links. Source: Web of Science.

The fact that the JRS was much less cited by economics journals doesn’t mean that it was completely ignored by economists: in fact, the JRS was among the most cited economics journals in 1970. However, it shows that regional science was more discussed by scholars publishing in geography journals than in economics journals. As already indicated, this situation is certainly related to the dynamics of both disciplines at the time. While the identity of economics was legitimated and reinforced by its success during the war, in geography there was an increasing dissatisfaction with the regional geography approach that had dominated the field in the 1950s. The Cold War context facilitated the promotion of a new generation of quantitative geographers looking for more scientific methods. Most of them were early members of the Regional Science Association and, like Brian Berry, were interested in the potential of regional science to transform geography. On the other hand, the stronger identity of economists meant that when they associated with other scholars, they were inclined to retain their own frameworks and methods, as Walter Isard did for regional science. By the mid-1970s, however, regional science experienced a progressive decline, as geographers started to distance themselves from the analytical methods promoted by Isard. But even after the Regional Science Department at Penn closed its doors in 1993, regional science journals remained a going concern and continued to promote studies of spatial issues, notably from urban economics and, after 1991, New Economic Geography.


Anthony Rebours is currently a graduate student at University Paris 8 and a young fellow of the Center for the History of Political Economy (CHOPE) at Duke University. His dissertation deals with the relationships between economics and neighbouring disciplines such as geography and regional science. It combines archival work and sociological methods for quantitative history.


Other posts from the blogged conference:

Cities and Space: Towards a History of ‘Urban Economics’, by Beatrice Cherrier & Anthony Rebours

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche


Searching for the facts in a global pandemic

In times of great uncertainty, such as global pandemics, the appetite for reliable and trustworthy information increases. We look to official authorities and our peers for guidance. However, as has quickly become evident, the veracity and authenticity of the information being circulated are not always sound. As people are increasingly connected to their devices, checking social media and news sites for information and guidance to inform their own actions, the potential impact of misleading and harmful content is more keenly observed.

Misinformation exacerbates uncertainty and provokes emotive responses such as fear, anger and distrust. Furthermore, it can inform potentially harmful behaviours and attitudes, which, in the context of a global health pandemic, carry high stakes. With COVID-19, the global reach and severity of the situation, together with the increased connectivity afforded by digital platforms, make for a dangerous combination.

The World Health Organisation (WHO) explains that the ‘infodemic’ – the ‘over-abundance of information – some accurate and some not’ – is a significant concern, and ‘makes it hard for people to find trustworthy sources and reliable guidance when they need it’.

World Health Organisation

How are platforms dealing with misleading content?

Image: LoboStudioHamburg via Pixabay

Social media platforms such as Twitter have reported a surge in usage since the onset of the pandemic, making for a particularly captive audience for information, whether accurate or not. The director-general of the World Health Organization (WHO), Tedros Adhanom Ghebreyesus, warned in his address on 15 February 2020 that ‘[f]ake news spreads faster and more easily than this virus, and is just as dangerous’.

In a statement released on 11 May 2020, Twitter outlined their updated guidelines on how they are handling misleading content about COVID-19 on their platform. They have begun using a tiered approach to determine how problematic content should be handled. Content is to be classified based on potential harm (moderate/severe) and whether it is misleading, disputed or unverified.

Misleading content – i.e. a tweet that includes statements that have been confirmed false – will be removed where the propensity for harm is severe. For moderate harm, the tweet will remain but be accompanied by a label reading ‘Get the facts about COVID-19’, linking to trusted information sources or additional information about the claim. For tweets containing information about disputed claims that carry the potential for severe harm, a warning will be issued, and the user will have to choose to reveal the tweet.
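The tiered logic is compact enough to express as a small lookup table. The sketch below is our reconstruction of the policy as described in this post, not Twitter’s actual implementation:

```python
# Category labels follow the blog's summary of Twitter's guidelines; the
# mapping itself is a hypothetical reconstruction for illustration only.
ACTIONS = {
    ("misleading", "severe"):   "remove the tweet",
    ("misleading", "moderate"): "keep, with a 'Get the facts about COVID-19' label",
    ("disputed",   "severe"):   "hide behind a warning the user must click through",
}

def moderate(content_type: str, harm: str) -> str:
    # Combinations not described in the post fall through to a default.
    return ACTIONS.get((content_type, harm), "no action specified in this sketch")

print(moderate("misleading", "severe"))   # -> remove the tweet
```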

What isn’t clear is exactly how Twitter are carrying out these fact-checks. In an earlier blog post, Twitter stated that they were stepping up their automatic fact-checking in response to COVID-19. Fact-checking is an increasingly vital approach for tackling the rapid spread of false claims online. While there is an urgent need for automated systems that detect, extract and classify incorrect information in real time, it is an extremely challenging task.

Why automated fact-checking is so difficult

In January we held a fact-checking hackathon (you can read more about that here) where participants developed automated approaches to assess claims made about Wikipedia data. Even with a static set of claims and predefined evidence sources, the process of verifying claims as true or false is highly complex. To start with, you need a veracious baseline to assess against: a database of facts and falsehoods. This is challenging in the current context, when new information is constantly being generated. The International Fact Checking Network (IFCN) has created a database of fact-checks, sourced from fact-checking organisations across the world. However, while this is an extremely useful resource, it is not a complete database. Then, given a claim to be fact-checked and a corresponding fact-checked entry in your veracious database, you need to classify the evidence sentences either as supporting or refuting the claim, or else as providing too little information to do either (a minimal sketch of the retrieval step appears after the examples below). Consider the following example:

“5G is responsible for COVID-19.”

This certainly reads as a pretty unequivocal endorsement of the (debunked) theory that 5G is linked to how COVID-19 spreads.

However, what if we consider the following:

“Oh okay… 5G is responsible for COVID-19. Come on people!”

For a human reader, the preceding ‘oh okay’ and trailing ‘come on people!’ suggest that this could be an incredulous response to the proposed link between the virus and mobile internet connectivity. Language is flexible, and interpreting implied meaning often requires us to consider context: both what is said around a particular sentence and the interactional sequence in which it occurs, for example:

“What is the most ridiculous conspiracy theory about COVID-19 you have heard?”

Reply: “5G is responsible for COVID-19.”

While this example serves to emphasise how relatively straightforward sentences can be taken out of context, content designed to mislead often strategically incorporates truths or half-truths. As Kate Starbird (2019) observed, ‘disinformation often layers true information with false — an accurate fact set in misleading context, a real photograph purposely mislabelled’. The recently published Reuters Institute factsheet summarising COVID-19 misinformation corroborates this: 87% of the misinformation in their sample circulated on social media involved ‘various forms of reconfiguration, where existing and often true information is spun, twisted, recontextualised, or reworked’, while purely fabricated content accounted for only 12%.
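To make the pipeline concrete, here is a minimal sketch of the retrieval step mentioned above: matching an incoming claim against a database of already fact-checked claims. The two entries below are invented, standing in for a resource like the IFCN database; a real system would follow this with the support/refute/not-enough-information classifier discussed earlier:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Two invented entries standing in for a fact-check database like the IFCN's.
fact_checks = [
    "5G mobile networks do not cause or spread COVID-19",
    "Drinking hot water does not cure COVID-19",
]
claim = "5G is responsible for COVID-19"

vectorizer = TfidfVectorizer().fit(fact_checks + [claim])
similarities = cosine_similarity(
    vectorizer.transform([claim]), vectorizer.transform(fact_checks)
)[0]

best = int(similarities.argmax())
print(fact_checks[best], round(float(similarities[best]), 2))
# A bag-of-words match like this is blind to the sarcasm and context problems
# shown in the examples above, which is one reason the task remains so hard.
```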

It is easy to see how content can be misclassified, and this is a key challenge for automated fact-checking. If systems are oversensitive or insufficiently trained, content can be incorrectly flagged and removed. In March, Facebook’s efforts to control the spread of misinformation led to genuine news articles being removed from its site. Twitter acknowledge this limitation and explain that they will be cautious in their approach and are unlikely to immediately suspend accounts:

“We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes. As a result, we will not permanently suspend any accounts based solely on our automated enforcement systems.” 

Twitter blog post

By the time a post is flagged and removed it will likely already have been seen by many people. Ideas are very resilient and hard to unpick, particularly when they are shared at speed and integrated into individuals’ sense-making practices. A recent article in the New York Times showed that in little over 24 hours a misleading Medium article was read by more than two million people and shared in over 15 thousand tweets, before the blog post was removed.

Consequently, fact-checking approaches can only be so effective. Even if misleading content is flagged or reported, will it be removed? And if it is, by that time it will have spread many times over, been reformulated and shared in other forms that may fly under the radar of detection algorithms. 

Fact-checking alone is not enough

Even if a perfect automated fact-checking system did exist, the problem of content spreading and mutating faster than it can be checked would remain, which is why empirical evaluation of strategies such as those being rolled out by social media platforms needs to be carried out.

While disinformation campaigns are typically thought to be powered by nefarious actors, misinformation can be passed on by well-meaning individuals who think they are sharing useful and newsworthy information. A key challenge is that once a piece of fake news has made an impression it can be challenging to correct. Furthermore, it has been shown that some people are more resistant to fact-checking than others, and even once a claim has been disproved, biased attitudes remain (De keersmaecker & Roets, 2017).

A paper published by Vosoughi et al (2018) found that “false news spreads more pervasively than the truth online”. This is particularly problematic, as one way we judge something to be true or false is by how often we come across it (exposure bias). Furthermore, fake news is typically created to be memorable and newsworthy, with emotive headlines that appeal to the concerns and interests of its readership, playing on the reader’s emotional biases and tapping into their pre-existing attitudes (myside bias). Considering the cognitive processes that shape how individuals engage with misinformation can inform new thinking on how we can mitigate its effects. Recent work by Roozenbeek & Van der Linden highlights that psychological inoculation can develop attitudinal resistance to misinformation.

Slow but steady progress

Clearly, working out how best to approach these challenges is not straightforward, but it is promising that there is a concerted effort to develop resources, tools and datasets to assist in the pursuit. Initiatives such as The Coronavirus Tech Handbook – a crowdsourced library of tools and resources related to COVID-19, initiated by Newspeak House and including a section on misinformation – highlight how distributed expertise can be harnessed and collated to support collective action.

Similarly, it is promising to see organisations experimenting with different formats for information provision that may prove more accessible for people trying to navigate the masses of information available. For example, the Poynter Institute’s International Fact-Checking Network (IFCN) has created a WhatsApp chatbot and the WHO has developed a chatbot for Facebook. While these use very constrained interaction formats – i.e. choosing from preselected categories to access information – they may well be more accessible for many than reading a long web page.

The WHO chatbot on Facebook Messenger presents key messaging in an interactive format.

In the context of COVID-19, individuals are required to interpret what has at times been ambiguous guidance and make decisions about how best to protect themselves and their families. This backdrop of uncertainty will inevitably drive up information seeking behaviours. While it may seem that each day unveils new examples of misleading content, having pages dedicated to the issue on many leading websites can be no bad thing. Hopefully, it will help to build public awareness that assessing information online requires scrutiny. The issue comes down to trust and how we as content consumers assess which information we deem reliable.

In the foreword to Spinning the Semantic Web, published over 17 years ago now, Tim Berners-Lee stated that a mechanism for encoding trust was essential to the flourishing of the internet. In this nascent, projective vision of how the web should function, Berners-Lee explained: “statements of trust can be added in such a way as to reflect actual trust exactly. People learn to trust through experience and though [sic] recommendation. We change our minds about who to trust and for what purposes. The Web of trust must allow us to express this.” (Spinning the Semantic Web, p. xviii). He outlined the mission of the W3C as being to “help the community have a common language for expressing trust”, not to take “a central or controlling role in the content of the web”. Clearly the internet has developed in ways unforeseen by the authors in 2003, and in 2020 we find ourselves grappling with these very challenges of balancing top-down and bottom-up approaches to developing an ecosystem of trust.

Useful resources and further information

Full Fact, a UK-based fact-checking organisation, has published a useful post on how individuals can guard themselves against misleading information: https://fullfact.org/health/how-to-fact-check-coronavirus/

If you would like to find out more about COVID-19 misinformation, we recommend the following podcasts:

Full Fact Podcast: https://play.acast.com/s/fullfactpodcast/a597558f-24a7-406e-aad8-330a09220520

Episode 5 of the Science and Policy podcast from the CSAP : http://www.csap.cam.ac.uk/Research-Policy-Engagement/science-and-policy-podcasts/

Nature CoronaPod – Troubling News: https://www.nature.com/articles/d41586-020-01137-7


Economists in the City #2

Cities and Space: Towards a History of ‘Urban Economics’

by Beatrice Cherrier & Anthony Rebours

A map for a hostile territory?

The field of ‘Urban Economics’ is an elusive object. That economic phenomena related to the city might need a distinctive form of analysis was something economists hardly thought about until the early 1960s. In the United States, it took a few simultaneous scholarly articles, a series of urban riots, and the attention of the largest American philanthropies to make this one of the hottest topics in economics. The hype about it was, however, short-lived enough that, by the 1980s, urban economics was considered a small, ‘peripheral’ field. It was only through its absorption into a new framework for analyzing the location of economic activities – the ‘New Economic Geography’ – in the 1990s that it regained prominence.

Understanding the development of urban economics as a field, or at least the variant which originated in the US and later became international, presents a tricky task. This is because the institutional markers of an academic field are difficult to grasp. A joint society with real estate economists was established in 1964, and a standalone one in 2006; a journal was founded in 1974, with an inaugural editorial which stated that: “Urban economics is a diffuse subject, with more ambiguous boundaries than most specialties. The goal of this Journal is to increase rather than decrease that ambiguity”; a series of handbooks was shared with the neighboring field of regional economics; textbooks and courses about urban and geographical, urban and spatial, or urban and real estate economics were published; and programs that mixed urban economics with neighboring disciplines such as urban geography and urban planning emerged. Situated within a master-discipline (economics) that is often described as exhibiting an articulated identity, clear boundaries with other sciences and strict hierarchies, urban economics is an outlier.

There is, however, one stable and distinctive object that has been associated with the term ‘urban economics’ throughout the 1970s, the 1980s, the 2000s and the 2010s: the Alonso-Muth-Mills model (AMM). It represents a monocentric city where households make trade-offs between land, goods and services, and the commuting costs needed to access the workplace. The price of land decreases with distance from the city center. The model was articulated almost simultaneously in William Alonso’s dissertation, published in 1964, a 1967 article by Edwin S. Mills, and a book by Richard Muth published in 1969. This trilogy is often considered a “founding act” of urban economics.
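The core trade-off can be written down in a couple of lines. The following is a deliberately stripped-down version (our simplification, holding lot size fixed, whereas the AMM papers let households choose it as well):

```latex
% Monocentric-city trade-off, simplified by holding lot size q fixed.
% A household at distance d from the centre, with income y, unit commuting
% cost t and land rent R(d), consumes the composite good c(d).
% Identical households must be indifferent across locations, so c(d) is
% constant, which forces rent to fall linearly with distance:
\[
  c(d) = y - t\,d - R(d)\,q = \bar{c}
  \quad\Longrightarrow\quad
  R(d) = R(0) - \frac{t}{q}\,d .
\]
```

The falling rent gradient is exactly the ‘price of land decreases with distance from the city center’ property described above.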

Alonso (1964) and Muth (1969) are the two most cited references in all the articles published in the Journal of Urban Economics, with Mills (1967) ranked ninth. If there is a coherent field of ‘urban economics’ to be studied, it makes sense to focus on these three publications in particular. To do so, we collected citations to each of these ‘AMM’ texts in all the journals indexed in the Web of Science database between 1965 and 2009. We then reconstructed a partial map of the field by representing, across 5-year periods, the network of scholars who authored texts co-cited with one or several of these three ‘foundational’ texts. We thus interpret a citation to one of these three contributions as signaling a specific interest in the kind of work being done in the field of urban economics. By mapping the authors most co-cited alongside Alonso, Muth or Mills in successive time windows, we aim to reconstruct some sort of core urban economics community (without making claims about the entire scope or outer boundaries of the field). We have supplemented this rough map of the changing fate of AMM with individual and institutional archives, so as to delineate and flesh out the territory populated by urban economists. Below is a summary of the main trends that we identify.
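A minimal sketch of that extraction step, with invented records standing in for the Web of Science export:

```python
from collections import Counter

AMM = {"Alonso 1964", "Muth 1969", "Mills 1967"}

# Invented records standing in for the export:
# (publication year, set of cited references).
papers = [
    (1971, {"Alonso 1964", "Wingo 1961", "Beckmann 1968"}),
    (1972, {"Muth 1969", "Mills 1967", "Kain 1968"}),
    (1983, {"Alonso 1964", "Fujita 1982"}),
]

co_cited_by_window = {}
for year, refs in papers:
    if refs & AMM:                       # the paper cites at least one AMM text
        window_start = 1965 + 5 * ((year - 1965) // 5)
        counter = co_cited_by_window.setdefault(window_start, Counter())
        counter.update(refs - AMM)       # references co-cited alongside AMM

for window_start, counts in sorted(co_cited_by_window.items()):
    print(window_start, counts.most_common(3))
```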

Agglomeration

In 1956, William Alonso moved from Harvard, where he had completed architecture and urban planning degrees, to the University of Pennsylvania. He became Walter Isard’s first graduate student in the newly founded department of “regional science.” He applied a model of agricultural land use developed 150 years earlier by the German economist Johann von Thünen to a city where all employment is located in a Central Business District. His goal was to understand how the residential land market worked and how it could be improved. His resulting PhD, Location and Land Use, was completed in 1960. Around that time, the young Chicago housing economist Richard Muth spent a snowstorm lockdown thinking about how markets determine land values. The model he developed was subsequently expanded to study population density, and a book based on it, Cities and Housing, was published a decade later. Drafts of Alonso and Muth’s work reached inventory specialist Edwin Mills in 1966, while he was working at the RAND Corporation and trying to turn models describing growth paths over time into a model explaining distance from an urban center. His “Aggregative Model of Resource Allocation in a Metropolitan Area” was published the next year.

As is clear from the network map below, this new set of models immediately drew attention from a wide array of transportation economists, engineers and geographers concerned with explaining the size and transformation of cities, why residents chose to live in centers or suburbs, and how to develop efficient transportation systems. The economists included Raymond Vernon and Edgar Hoover, whose study of New York became Anatomy of a Metropolis; RAND analyst Ira Lowry, who developed a famous spatial interaction model; the spatial and transportation econometrician Martin Beckmann, based at Brown; and Harvard’s John Kain, who was then working on his spatial mismatch hypothesis and on a simulation approach to model polycentric workplaces. Through the early works of Brian Berry and David Harvey, quantitative urban geographers also engaged with these new urban land use models.

Authors co-citation network, 1970-1974. The colors result from a community detection algorithm applied to the whole network, but for readability, only those authors with 11 or more links to Alonso (1964), Mills (1967) and/or Muth (1969) are represented. The size of the nodes and links is proportional to the total number of co-citations. The 1970-1974 network represents the state of urban economics as expressed through the citations of economists publishing at the time; there might thus be a short lag between the publication of new works and their incorporation by the rest of the profession.
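
The post does not name the community detection algorithm behind the colors; as a purely illustrative example, a modularity-based method such as the one bundled with networkx could be applied to the graph built by the sketch above:

```python
from networkx.algorithms.community import greedy_modularity_communities

# Build the 1970-1974 network with the earlier sketch, then partition it;
# each community would receive one color in a map like the one described.
G = cocitation_network(records, 1970, 1974)  # `records` as assumed earlier
communities = greedy_modularity_communities(G, weight="weight")
```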

But the development of a new generation of models relying on optimization behavior to explain urban location was by no means sufficient to engender a separate field of economics. Neither Alonso, who saw himself as contributing to an interdisciplinary regional science, nor Muth, who was involved in Chicago housing policy debates, cared much about institutionalization. But both were influenced and funded by men who did. Muth acknowledged the influence of Lowdon Wingo, who had authored a land use model of his own. Wingo, together with Harvey Perloff, a professor of social sciences at the University of Chicago, convinced the Washington-based think tank Resources for the Future to establish a “Committee on Urban Economics” with the help of a grant from the Ford Foundation. The decision was fueled by urbanization and dissatisfaction with the urban renewal programs implemented in the 1950s. The committee’s goal was to “develop a common analytical framework” by establishing graduate programs in urban economics, supporting dissertations, organizing workshops, and fostering the development of urban economics textbooks.

Their agenda was soon boosted by the publication of Jane Jacobs’ The Death and Life of Great American Cities, by growing policy interest in the problems of congestion, pollution, housing segregation and ghettoization, labor discrimination, slums, crime and local government bankruptcy, and by the stream of housing and transportation acts passed in response to these. The Watts riots, followed by the McCone and Kerner commissions, acted as an important catalyst. The Ford Foundation poured more than $20 million into urban chairs, programs and institutes through urban grants awarded to Columbia, Chicago, Harvard and MIT in 1967 and 1970. The first round of funds emphasized “the development of an analytical framework”; the second sought “a direction for effective action.”

As a consequence of this massive investment, virtually every well-known US economist turned to urban topics, as shown by the many theorists and public and labor economists who populate the 1975-1979 network below. At MIT, for instance, Ford’s money was used to set up a two-year “urban policy seminar” attended by more than half of the department. The organizer was welfare theorist Jerome Rothenberg, who had just published a book on the evaluation of urban renewal policies. He was developing a large-scale econometric model of the Boston area with Robert Engle and John Harris, and putting together a reader with his radical colleague Matt Edel. Department chair Cary Brown and Peter Diamond were working on municipal finance. Robert Hall was studying public assistance while Paul Joskow examined urban fire and property insurance. Robert Solow developed a theoretical model of urban congestion, published in a 1972 special issue of the Swedish Journal of Economics alongside a model by taxation theorist James Mirrlees investigating the effect of commuter and housing state taxes on land use. Solow’s former student Avinash Dixit published an article modeling a trade-off between city-center economies of scale and commuting congestion costs in another special issue on urban economics, in the Bell Journal, the next year. A survey of the field was also published in the Journal of Economic Literature, just before the foundation of the Journal of Urban Economics in 1974.

Authors co-citation network 1975-1979 (11 or more links), the size of the nodes and links being proportional to the total number of co-citations.

Segregation

But the publication of a dedicated journal and the growing awareness of a “New Urban Economics” were not the beginning of a breakthrough. They turned out to be the peak of the wave. On the demand side, the growing policy interest and financial support that had fueled this new body of work receded after the election of Richard Nixon and the reorientation of federal policies. On the supply side, the mix of questions, methods and conversations with neighboring scholars that had hitherto characterized urban economics was becoming an impediment. More generally, the 1970s were a period of consolidation for the economics profession. To be considered bona fide parts of the discipline, applied fields needed to reshape themselves around a theoretical core, usually a few general equilibrium, micro-founded workhorse models. Some older fields (macro and public economics, for instance) as well as newer ones (health, education, household) developed such theoretical models. Others resisted, but could rely on separate funding streams and policy networks (development and agricultural economics). Urban economics was stuck.

Policy and business interest was directed toward topics like housing, public choice and transportation. Combined with the growing availability of new microdata, advances in microeconometrics, and the subsequent spread of the personal computer, this resulted in an outpouring of applied research. Computable transportation models and real estate forecasting models were especially fashionable.

A theoretical unification, on the other hand, was not in sight. Workhorse models of the price of amenities, the demand for housing, or suburban transportation were proposed by Sherwin Rosen, William Wheaton and Michelle White, among others. But explanations of the size, number, structure and growth of cities were now becoming contested. J. Vernon Henderson developed a general equilibrium theory of urban systems based on the trade-off between external economies and diseconomies of city size, but in his framework agglomeration effects did not derive from individual behavior. Isard’s former student Masahisa Fujita proposed a unified theory of urban land use and city size that combined externalities with the monopolistic competition framework pioneered by Dixit and Joseph Stiglitz, but without making his framework dynamic or relaxing the monocentric hypothesis. At a time of growing interest in the phenomenon of suburban business districts – or “edge cities,” as journalist Joel Garreau called them – many economists considered this a shortcoming. Other contributors rejected general equilibrium modelling altogether, among them figures like Harry Richardson and a set of radical economists working with neo-Marxist ideas who were moving closer to urban geographers such as David Harvey, Doreen Massey and Allen Scott.

Renewal

In the 1990s, various strands of work aimed at explaining the number, size and evolution of cities matured and were confronted with one another. In work he framed as contributing to the new field of “economic geography,” Krugman employed his core-periphery model to sustain a unified explanation for the agglomeration of economic activity in space. At Chicago, the economists who had spent most of the 1980s modeling how different types of externalities and increasing returns could help explain growth – among them Robert Lucas, José Scheinkman and his student Ed Glaeser – increasingly reflected on Jane Jacobs’ claim that cities exist because of the spillover of ideas across industries which they facilitate. Some of them found more empirical support for her claim than for the kind of within-industry knowledge spillovers Henderson was advocating.

Krugman soon worked with Fujita to build a model of cities with labour mobility and trade-offs between economies of scale at the plant level and transportation costs. He was adamant about comparing their new framework to Henderson’s general equilibrium model of systems of cities, claiming that it enabled the derivation of agglomeration from individual behavior and could explain not only city size and structure, but also location. In his review of the 1999 book Fujita and Krugman wrote with Venables, Glaeser praised the unification of urban, regional and international economics around the microfoundations of agglomeration theory. He also contrasted Krugman’s emphasis on transportation costs – which were then declining – with other frameworks focusing on the movement of people, and began to sketch out the research program on idea exchanges that he would develop over the next decades. He insisted, too, on the importance of working out empirically testable hypotheses.

The “New Economic Geography” was carried by a newly minted John Bates Clark medalist who had, from the outset, promised to lift regional, spatial and urban economics out of their “peripheral” status through parsimonious, micro-founded, tractable and flexible models. It attracted a new generation of international scholars, for some of whom working on cities was a special case of contributing to spatial economics. In the process, however, older ties with geographers were severed, and questions closely associated with changing cities – the emergence of the digital age, congestion, inequalities in housing, segregation, the rise of crime and urban riots – became less central to the identity of the field, which lost some of its autonomy. Within our own maps, this can be seen in the contrast between the many disparate links which leading urban economists had to Alonso-Muth-Mills and the distinct, interconnected (green) network in which figures like Fujita, Krugman, Henderson, Lucas and Glaeser are embedded.

Authors co-citation network 2005-2009 (15 or more links), the size of the nodes and links being proportional to the total number of co-citations.

Most recently, Glaeser’s insistence that urban models be judged by their empirical fit may again be transforming the identity of urban economics. The shift is already visible in the latest volume of the Handbook of Regional and Urban Economics series. Its editors (Gilles Duranton, Henderson and William Strange) explain that, while the previous volume (2004) was heavily focused on agglomeration theory, this one marks “a return to more traditional urban topics.” The field is now characterised not in terms of a unified theoretical framework, but with reference to a shared empirical epistemology about how to draw causal inferences from spatial data. There is also growing evidence that students going on the US economics job market increasingly add “spatial economics” and/or “urban economics” to their list of fields.

Overall, the successive shifts in urban economists’ identity and autonomy described here were sometimes prompted by external pressures (urban crises and policy responses) and sometimes by internal epistemological shifts about what counts as “good economic science.” A key development in the 1970s was the unification of the discipline around general equilibrium, micro-founded models. It is widely held that the profession is currently experiencing an “applied turn” or a “credibility revolution” centered on the establishment of causal inference (gold) standards. How this will affect urban economics remains unclear.


Beatrice Cherrier is an associate professor at the Centre National de la Recherche Scientifique (CNRS). She documents the “applied turn” in the history of recent economics by chasing institutional artifacts like the JEL codes, researching the history of selected applied fields (urban, public, macro) and unpacking its gendered aspects.

Anthony Rebours is currently a graduate student at University Paris 8 and a young fellow of the Center for the History of Political Economy (CHOPE) at Duke University. His interests include the recent history of economics, its relations with geography, and the use of sociological methods for quantitative history.


Other posts from the blogged conference:

Economists in the City: Reconsidering the History of Urban Policy Expertise: An Introduction, by Mike Kenny & Cléo Chassonnery-Zaïgouche