Posts by Federico Brandmayr


Expert Bites with Arsenii Khitrov

21 November 2020

A sociologist and philosopher of wide-ranging interests, Arsenii Khitrov is currently writing up his doctoral dissertation at the University of Cambridge on how politically themed Hollywood television series are made and what role political and social experts play in their production. He has recently published a brilliant article in which he reconstructs the relationships of conflict, competition, and collaboration between the film industry, state agencies, research organisations, social movements, and independent experts. We had the chance to meet Arsenii in person before the coronavirus pandemic started and asked him a few questions about what is distinctive about being an expert for the film industry.

Considering your research, what makes a good expert? 

In the domain I study, which is the field of television production in America today, I focus on a very particular type of expertise. Within this project, I call ‘experts’ the people who come to the entertainment industry from the outside and bring their knowledge to the writers, producers, and actors. These ‘experts’ most commonly come from the government, social movements, universities, think tanks, and the medical, media, military, intelligence, and law enforcement communities. Some of them represent these organisations and communities and lobby on their behalf; others are private expertise entrepreneurs exchanging their experiences and knowledge for success in Hollywood.

In both cases, Hollywood professionals have the power to define the value of these experts, and the industry defines their value depending on how much they can offer in terms of what Hollywood specifically needs. Experts’ formal qualifications, past and present affiliations, access, and experiences are important, but what matters more is how well they can recognise a particular set of expectations and dispositions that Hollywood professionals share, which I call the ‘Hollywood habitus’, and how well they can perform it. In short, good experts are the experts that Hollywood professionals see as good: whoever plays the Hollywood game well.

What are the pressures experts face in your field?

The main pressures experts experience in Hollywood are the explicit and tacit requirements, expectations, and hierarchies that define both the creative and the management sides of the production process. Television series in the USA are commonly written by a group of writers working on a very tight schedule. Shooting often starts while the writing is still taking place. Experts can be invited to writers’ rooms and on set, and what the writers and producers expect from them is to be as quick, open-minded, and inventive as possible. Hollywood professionals do not expect the experts to criticise what they write or shoot: they don’t want to hear that what they are doing is impossible or unrealistic. Rather, they want experts to ‘pitch solutions’, as many of my research participants told me. However realistic television makers want their products to be, if realism prevents them from creating a good drama, realism must go. If an expert is too insistent on just one version of realism, s/he must make way for a more creative expert.

Taking a step away from the specific case I am studying, I would consider the relationality of experts. By this, I mean the question of whether experts are not actually entities in themselves, but rather intermediaries between two spheres: the sphere that accumulates knowledge and the sphere that receives it. If this is indeed the case, then it is worth thinking about how much the intermediary position the expert occupies requires the expert to adapt to the requirements and expectations of expertise recipients. In other words, how much the type of knowledge experts can provide is defined not only by the knowledge they possess, but also by the receivers’ expectations and the experts’ intermediary position.

Have you observed any significant changes occurring in recent times in the way experts operate?

I can answer this question in relation to the field I study. The biggest change that I know of in the way experts operate in Hollywood is that their work has become more institutionalised and routinised, especially when it comes to experts representing social movements. In the late 1960s, social movements gained momentum in relation to Hollywood. They bombarded Hollywood with criticism, boycotted films and television programmes, wrote letters and sent petitions to networks. They became a real force the industry had to reckon with, and the industry sought ways to lessen their pressure. Various intermediary institutions and mediating procedures started emerging in the 1970s and continue to emerge today, and these incorporate social movements into the industry. One of the ways the pressure of social movements was channelled into less acute forms of power struggle was through the work of technical advisors and consultants specialising in pressing social and political issues. The way these experts work with and in Hollywood has become institutionalised and routinised, which slightly decreased the pressure of social movements on Hollywood and simultaneously made Hollywood more accessible to social movements.

Do you envision any changes in the role of experts in the future?

If I step away from my research project and speculate about the role of experts in Western societies at large, I would say that it is important to address what many have called the ‘crisis of expertise’, i.e. mistrust of experts and expertise among some groups of the population, some media outlets, and some public officials. If we look at the social world as an arena where various groups fight for resources and power, it is not surprising that someone is under attack, that someone is blamed for alleged troubles. This is simply how power struggles unfold: any method is acceptable, and if blaming experts works (i.e. if it mobilises political power), then why not take this route. Yet why does accusing experts help mobilise power?

Perhaps this is due to a unique breakthrough in the way information is produced, practised, stored, and accessed, which brings four distinct processes together. First, the amount of information available in the world, scientific and otherwise, is increasing at a terribly fast pace. Second, experts have to specialise ever more narrowly to be able to know at least something with certainty. Third, online search engines, databases, and online media make this plethora of information easily accessible to almost anyone for free. Fourth, unequally developed and unequally accessible educational systems help some adapt to these changes faster, while making others lag behind. The way these four processes continue to unfold will define the role of experts.


Trusting the experts takes more than belief

Matt Bennett

As part of a series on expertise and COVID-19, the Expertise Under Pressure team asked Matt Bennett, currently a lecturer in Philosophy at the University of Cambridge, to write a piece based on his new article “Should I do as I’m told? Trust, Experts, and COVID-19”, forthcoming in the Kennedy Institute of Ethics Journal.

Trusting the science

Radical public health responses to the pandemic around the world have asked us to make unprecedented changes to our daily lives. Social distancing measures require compliance with recommendations, instructions, and legal orders that come with undeniable sacrifices for almost all of us (though these sacrifices are far from equally distributed). These extreme public measures depend for their success on public trust.

Trust in these measures is both a necessary and desirable feature of almost all of the public health strategies currently in place. Necessary, because it seems fair to assume that such extreme measures cannot be effectively introduced, much less maintained, solely through policing or other forms of direct state coercion. These measures require a significant degree of voluntary compliance if they are to work. And desirable, because even if totalitarian policing of pandemic lockdown were viable, it also seems fair to assume that most of us would prefer not to depend on a heavily policed public health strategy.

The same kind of trust is necessary for many kinds of policy, particularly where that policy requires citizens to comply with rules that come at significant cost, and coercion alone would be ineffective. But what is distinctive about our pandemic policies is that they depend not just on public trust in policy, but public trust in the science that we are told informs that policy.

When governments follow the science, their response to the pandemic requires public trust in experts, raising questions about how we might develop measures not just to control the spread of the virus, but to maintain public confidence in the scientific recommendations that support these measures. I address some of these questions in this post (I have also addressed these same questions at greater length elsewhere).

My main point in what follows is that when public policy claims to follow the science, citizens are asked not just to believe what they are told by experts, but to follow expert recommendations. And when this is the case, it can be perfectly reasonable for a well-informed citizen to defer to experts on the relevant science, but nonetheless disagree with policy recommendations based on that science. Until we appreciate this, we will struggle to generate the public support for science-led policy that some of our most urgent political challenges demand.

Following the science?

Before I get to questions about the kind of public trust required by science-led policy, we need to first address the extent to which pandemic responses have indeed been led by experts. In the UK, the government’s publicly visible response to the pandemic began with repeated claims that ministers were “following the science”. 10 Downing Street began daily press conferences in March, addresses to journalists and the public in which the Prime Minister was often accompanied by the government’s Chief Medical Officer, Chris Whitty, or Chief Scientific Adviser, Patrick Vallance (sometimes both). In the months that followed, Vallance and Whitty played a prominent role in communicating the government’s public health strategy, standing alongside government ministers in many more press conferences and appearing on television, on radio, and in print.

But have ministers in fact followed the science? There are reasons to be sceptical. One thing to consider is whether a reductive referral to “the science” hides a partial or selective perspective informing government decisions. There is of course no one “science” of the pandemic. Different disciplines contribute different kinds of relevant information, and within disciplines experts disagree.

And some have observed that the range of disciplines informing UK government in the early spring was inexplicably narrow. The Scientific Advisory Group for Emergencies (SAGE) evidence cited by government in March included advice from epidemiologists, virologists, and behavioural scientists. But some disciplines were conspicuous in their absence among the government’s experts: economists, sociologists, and psychologists, for example, can provide important insights into the economic and social effects of lockdown that ought to be considered by any genuinely evidence-based policy.

Another reason to be sceptical about the UK government’s claim to follow the science is that several SAGE meetings included Boris Johnson’s infamous Chief Advisor Dominic Cummings, with some members of SAGE stating they were worried about undue influence from Cummings. The problem is not just that “the science” government claimed to follow was incomplete, but also that the government could well have been directing the advice it was claiming to follow.

And perhaps the claim to “follow the science” has been exaggerated. While ministers defer to scientists, those same scientists have been eager to point out that their role is exclusively advisory. Government experts have consistently cleaved to a division of labour that is a cornerstone of so-called “evidence-based policy”: experts provide facts, politicians make decisions.  

Nonetheless, it has been clear throughout the UK’s response to the pandemic that the government has seen fit to communicate its policy as if it were unequivocally following expert recommendations. Daily 10 Downing St press conferences, running from March to June, began with Boris Johnson at the podium flanked on either side by the government’s Chief Medical Officer and Chief Scientific Adviser. The optics of this are not hard to read.

And even in recent weeks, in which the government has stopped its daily press conferences and dialled down the rhetoric of science-led policy, ministers still claim that they defer to expert advice, despite those same experts repeatedly distancing themselves from government decision making. As recently as the end of July, in announcing the reintroduction of stricter lockdown measures in parts of Greater Manchester, Lancashire, and Yorkshire, Matt Hancock repeatedly deferred to evidence that a rise in infections in these areas has been caused not by a return to work, or the opening of pubs, but by people visiting each other in their homes.

We are still being asked by the government to trust in recommendations provided by experts, even if the government is not being led by evidence in the way it would have us believe. The communications strategy may not be honest, but it has been consistent, and because the government is inviting the public to think of its policy as science-led, its public health strategy still depends on public trust in science. We are asked to accept that government is following the recommendations of experts, and that we must follow suit.

Photo by Belinda Fewings on Unsplash

Believing what we are told

I have said above that public trust in science is both a necessary and desirable feature of an effective public health response to the pandemic. But it is desirable only insofar as it is well placed trust. I presume we don’t want the public to put their faith in just any self-identified expert, regardless of their merits and the level of their expertise. We want the public to trust experts, but only where they have good reason to do so. One important question this raises is what makes trust in experts reasonable, when it is. A second important question is what we can do to ensure that the conditions for reasonable trust in experts are indeed in place.

Philosophers of science and social epistemologists have had a lot to say about when and why it is reasonable to trust experts. The anxiety that many philosophers of epistemic trust respond to is a perceived threat to knowledge about a range of basic facts that most of us don’t have the resources to check for ourselves. Do I know whether the Earth is flat without travelling? Should I believe that penicillin can be used to treat an infection without first studying biochemistry? Though it’s important that we know such things, knowledge of this kind doesn’t meet the same evidence requirements that apply to beliefs about, say, where I left my house keys.

Thankfully, there is an influential way of rescuing knowledge about scientific matters that most of us aren’t able to verify for ourselves. In the 1980s philosopher of science John Hardwig proposed a principle that, if true, rescues the rationality of the beliefs that we hold due to our deference to experts.

Hardwig maintained that if an expert tells me that something is the case, this is enough reason for me to believe it too, provided that I have good reason to think that the expert in question has good reason to believe what they tell me. Say that I have a doctor whom I see regularly, and I have plenty of evidence to believe that they are competent, well-informed, and sincere. On this basis I have good reason to think that my doctor understands, for example, how to interpret my blood test results, and will not distort the truth when they discuss the results with me. I thus have good reason to think that the doctor has good reason to believe what they tell me about my test results. This is enough, Hardwig maintains, for me to form my own beliefs based on what they tell me, and my epistemic trust has good grounds.
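Stated schematically (the notation here is mine, not Hardwig’s), the principle reads:

\[
\mathrm{GR}_A\big(\mathrm{GR}_B(p)\big) \;\Rightarrow\; \mathrm{GR}_A(p)
\]

where \(\mathrm{GR}_X(q)\) abbreviates “X has good reason to believe q”, A is the layperson, B the expert, and p the proposition the expert testifies to. The work in Hardwig’s argument is done by the outer operator: my evidence concerns the expert’s reliability, not p itself.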

Doing what we are told

But can the same be said when the expert isn’t just asking me to believe something, but is recommending that I do something? Is it still reasonable to trust science when it doesn’t just provide policy-relevant facts, but leads the policy itself?

Consider an elaboration of the doctor example. Say I consult my trusted doctor to discuss the option of a Do Not Attempt CPR (DNACPR) instruction. My doctor is as helpful as always, and provides me with a range of information relevant to the decision. In light of my confidence in the doctor’s professionalism, and if we accept Hardwig’s principle, we can say that I have good reason to believe the information my doctor gives me.

Now consider how I should respond if my doctor were to tell me that, in light of facts about CPR’s success and about my health, I should sign a DNACPR (and set aside the very worrying medical ethics violation involved in a doctor directing a patient in this way regarding life-sustaining treatment). I have good reason to believe the facts they have given me relevant to a DNACPR. Do I also have good reason to follow their advice on whether I should sign? Not necessarily.

For one thing, my doctor’s knowledge regarding the relevant facts might not reliably indicate their ability to reason well about what to do in light of the facts. My doctor might know everything there is to know about the risks, but also be a dangerously impulsive person, or conversely an excessively cautious, risk-averse person.

And even if I think my doctor probably has good reason to think I should sign – I believe they are as wise as they are knowledgeable – their good reason to think I should sign is not thereby a good reason for me. The doctor may have, for instance, some sort of perverse administrative incentive that encourages them to increase the number of signed DNACPRs. Or, more innocently, they may have seen too many patients and families suffer the indignity of a failed CPR attempt at the end of life, or survive CPR only to live for just two or three more days with broken ribs and severe pain. And maybe I have a deeply held conviction in the value of life, or in the purpose of medicine to preserve life at all costs, and I might think this trumps any purported value of a dignified end of life. I may not even agree with the value of dignity at the end of life in the first place.

Well-placed trust in the recommendation of an expert is more demanding than well-placed trust in their factual testimony. A good reason for an expert to believe something factual is thereby a good reason for me to believe it too. But a good reason for an expert to think I should do something is not necessarily a good reason for me to do it. And this is because what I value and what the expert values can diverge without either of us being in any way mistaken about the facts of our situation. I can come to believe everything my doctor tells me about the facts concerning CPR, but still have very good reason to think that I should not do what they are telling me to do.

Something additional is needed for me to have well-placed trust in expert recommendations. When an expert tells me not just what to believe, but what I should do, I need assurance that the expert understands what is in my interest, and that they make recommendations on this basis. An expert might make a recommendation that accords with the values that I happen to have (“want to save the NHS? Wear a face covering in public”) or a recommendation that is in my interest despite my occurrent desires (“smoking is bad for you; stop it”).

If I have good reason to think that my doctor, or my plumber, or, indeed, the state epidemiologist, has a good grasp of what is in my interest, and that their recommendations are based on this, then I am in a position to have well-placed trust in their advice. But without this assurance, I may quite reasonably distrust or disagree with expert recommendations, and not simply out of ignorance or some vague “post-truth” distrust of science in general.

Photo by Nick Fewings on Unsplash

Cultivating trust

This demandingness of well-placed trust in expert recommendations, as opposed to expert information, has ramifications for what we can do to cultivate public trust in (at least purportedly) expert-led policy.

Consider a measure sometimes suggested to increase levels of public trust in science: increased transparency. Transparency can help build confidence in the sincerity of scientists, a crucial requirement for public trust. It can also, of course, help us to see when politics is at greater risk of distorting the science (e.g. Dominic Cummings attending SAGE meetings), and allows us to be more discerning with where we place our trust.

Transparency can also mitigate tendencies to think that anything less than a completely value-free science is invalidated by bias and prejudice. When adjudicating on matters of fact, we can distinguish good from bad use of values in science, depending on whether they play a direct or indirect role in arriving at factual conclusions. Thus we might for instance allow values to determine what we consider an acceptable level of risk of false positives for a coronavirus test, but we would not want values to determine how we interpret the results of an individual instance of the test (“I don’t want to have coronavirus – let’s run the test again”). Transparency can help us be more nuanced in our evaluation of whether a given scientific conclusion has depended on value-judgements in a legitimate way.

But it seems to me that transparency is not so effective when we are being asked to trust in the recommendations of experts. One reason for this is that it is far less easy for us to distinguish good and bad use of values in expert advice. This is because values must always play a direct role in recommendations about what to do. The obstacle to public trust in science-led policy is not, as with public trust in scientific fact, the potential for values to overreach. The challenge is instead to give the public good reason to think that the values that inform expert recommendations align with those to whom they issue advice.

There are more direct means of achieving this than transparency. I will end with two such means, both of which can be understood as ways to democratise expert-led policy.

One helpful measure to show the public that a policy does align with their interest is what is sometimes called expressive overdetermination: investing policy with multiple meanings such that it can be accepted from diverse political perspectives. Reform to French abortion law is sometimes cited as an example of this. After decades of disagreement, France adopted a law that made abortion permissible provided the individual has been granted an unreviewable certification of personal emergency. This new policy was sufficiently polyvalent to be acceptable to the most important parties to the debate: religious conservatives understood the certification to be protecting life, while pro-choice advocates saw the unreviewable nature of the certification as protection for the autonomy of women. The point was to find a way of showing that the same policy can align with the interests of multiple conflicting political groups, rather than to ask groups to either set aside, alter, or compromise on their values.

A second helpful measure, which complements expressive overdetermination, is to recruit spokespersons who are identifiable to diverse groups as similar to them in political outlook. This is sometimes called identity vouching. The strategy is to convince citizens that the relevant scientific advice, and the policy that follows that advice, is likely not to be a threat to their interests, because that same consensus is accepted by those with similar values. Barack Obama attempted such a measure when he established links with Evangelical Christians such as Rick Warren, one of the 86 evangelical leaders who had signed the Evangelical Climate Initiative two years before the beginning of Obama’s presidency. The move may have had multiple intentions, but one of them is likely to have been an attempt to win over conservative Christians to Obama’s climate-change policy.

Expressive overdetermination and identity vouching are ways of showing the public that a policy is in their interests. Whether they really are successful at building public trust in policy, and more specifically in science-led policy, is a question that needs an empirical answer. What I have tried to show here is that we have good theoretical reasons to think that such additional measures are needed when we are asking the public not just to believe what scientists tell us is the case, but to comply with policy that is led by the best science.

Public trust in science comes in at least two very different forms: believing expert testimony, and following expert recommendations. Efforts to build trust in experts would do well to be sensitive to this difference.

§§§

About Matt Bennett

Matt Bennett is a lecturer with the Faculty of Philosophy at the University of Cambridge.  His research and teaching cover topics in ethics (theoretical and applied), political philosophy, and moral psychology, as well as historical study of philosophical work in these areas in the post-Kantian tradition. Much of his research focuses on ethical and political phenomena that are not well understood in narrowly moral terms, and he has written about non-moral forms of trust, agency, and responsibility. From October 2020 Matt will be a postdoctoral researcher with the Leverhulme Competition and Competitiveness project at the University of Essex, where he will study different forms of competition and competitiveness and the role they play in a wide range of social practices and institutions, including markets, the arts, sciences, and sports.


Are the experts responsible for bad disaster response?

A few lessons for the coronavirus outbreak from L’Aquila

~ ~ ~

A few weeks ago, a Facebook group called 3e32 and based in the Italian city of L’Aquila posted a message stating: “whether it is a virus or lack of prevention, science should always protect its independence from the power of those who guarantee the interests of the few at the expense of the many”. The statement was followed by a picture of a rally, showing people marching and carrying a banner which read: “POWER DICTATES, ‘SCIENCE’ OBEYS, JUSTICE ABSOLVES”.

What was that all about? “3e32” refers to the moment at which a deadly earthquake struck L’Aquila on April 6th 2009 (at 3:32 in the morning). It is now the name of a collective founded shortly after the disaster. The picture was taken on November 13th 2014: a few days earlier, a court of appeals had acquitted six earth scientists of charges of negligence and manslaughter, for which they had previously been sentenced to six years in prison.

Even today, many people believe that scientists were prosecuted and convicted in L’Aquila “for failing to predict an earthquake”, as a commentator put it in 2012. If this were the case, it would be shocking indeed: earthquake prediction is seen by most seismologists as a hopeless endeavour (to the point that there is a stigma associated with it in the community), and the probabilistic concept of forecast is preferred instead. But, in fact, things are more complicated, as I and others have shown. What prosecutors and plaintiffs claimed was that in a city that had been rattled for months by tremors, where cracks had started to appear on many buildings, where people were frightened and some had started to sleep in their cars, a group of scientists had come to L’Aquila to say that there was no danger and that a strong earthquake was highly unlikely. Prosecutors attributed to the group of experts, some of whom were part of an official body called the National Commission for the Forecast and Prevention of Major Risks (CMR), a negative prediction; in other terms, they claimed that the experts had inferred “evidence of absence” from “absence of evidence”. This gross mistake was considered a result of the experts submitting to the injunctions of the chief of the civil protection service, Guido Bertolaso, who wanted Aquilani to keep calm and carry on, instead of following the best scientific evidence available. Less than a week after the highly publicised expert meeting, a 6.3 magnitude quake struck the city, killing more than 300 people.

The Facebook post, published at the end of March, suggests a link between the management of the disaster in L’Aquila and the response to the covid-19 outbreak. The parallel was made all the starker by the fact that, just a couple of weeks before the post, Bertolaso had come once again to the forefront of Italian public life, this time not as chief of the civil protection service but as special advisor to the president of the Lombardy region in the fight against covid-19. But the analogies are deeper than the simple reappearance of the same characters. As during and after all disasters, attributions of blame are today ubiquitous. Scientists and experts are under the spotlight as they were in L’Aquila. Policymakers and the public expect highly accurate predictions and want them quickly. Depending on how a country is doing in containing the virus, experts will be praised or blamed, sometimes as much as elected representatives.

In Italy, for example, many now ask why the province of Bergamo was not declared a “red zone”, meaning that non-essential companies were not closed down, in late February, despite clear evidence of uncontrolled outbreaks in several towns in the area (various other towns in Italy had been declared “red zones” since February 23rd). Only on March 8th did the national government decide to lock down the whole region of Lombardy, and the rest of the country two days later. The UK government has been similarly accused of complacency in delaying school closures and bans on mass gatherings. Public accusations voiced by journalists, researchers, and members of the public provoked blame games between state agencies, levels of government, elected representatives, and expert advisors. In Italy, following extensive media coverage of public officials’ omissions and commissions in the crucial weeks between February 21st and March 8th, regional authorities and the national government now blame each other for the delay. In a similar way, the UK government and the Mayor of London have pointed fingers at each other after photos taken during the lockdown showed overcrowded Tube trains in London.

It would be easy to argue, with the benefit of hindsight, that more should have been done, and more promptly, to stop the virus, and not only in terms of long-term prevention or preparedness, but also in terms of immediate response. Immediate response to disaster includes such decisions as country-wide lockdowns to block the spread of a virus (as we are witnessing now), the evacuation of populations from unsafe areas (such as the 1976 Guadeloupe evacuation), the shutdown of an industrial facility or transport system (such as the airspace closure in Northern Europe after the Eyjafjallajökull eruption in 2010), or the confinement of hazardous materials (such as the removal of radioactive debris during the Chernobyl disaster). Focusing on this kind of immediate response, I offer three insights from L’Aquila that seem relevant to understanding the pressures expert advisors dealing with covid-19 are facing today in Britain.

Experts go back to being scientists when things get messy

When decisions informed by scientific experts turn out to be mistaken, experts tend to defend themselves by drawing a thick boundary between science and policy, the same boundary that they eagerly cross in times of plenty to seize the opportunities of being in the situation room. Falling back into the role of scientists, they emphasise the uncertainties and controversies that inevitably affect scientific research.

Although after the earthquake most of the CMR experts in L’Aquila denied that they had made reassuring statements or a “negative prediction”, they still had to explain why they were not responsible for what had happened. This was done in several ways. First, the draft minutes of the meeting were revised after the earthquake so as to make the statements less categorical and more probabilistic. Secondly, they emphasised the highly uncertain and tentative nature of seismological knowledge, arguing for example that “at the present stage of our knowledge” nothing allows us to consider seismic swarms (like the one that was ongoing in L’Aquila before April 6th 2009) as precursors of strong earthquakes, a claim which is disputed within seismology. Finally, the defendants argued that the meeting was not addressed to the population and local authorities of L’Aquila (as several announcements of the civil protection service suggested), but rather to the civil protection service only, which then had to take the appropriate measures autonomously. They claimed that scientists only provide advice, and that it is public officials and elected representatives who bear responsibility for any decision taken. This was part of a broader strategy to frame the meeting as a meeting of scientists, while the prosecution tried to frame it as a meeting of civil servants.

In Britain, the main expert body that has provided advice to the government is SAGE (Scientific Advisory Group for Emergencies), formed by various subcommittees, such as NERVTAG (New and Emerging Respiratory Virus Threats Advisory Group). These groups, along with the chief scientific adviser, Sir Patrick Vallance, have been under intense scrutiny over the past weeks. Questioned by Reuters about why the covid-19 threat level was not increased from “moderate” to “high” at the end of February, when the virus was spreading rapidly and lethally in Italy, a SAGE spokesperson responded that “SAGE and advisers provide advice, while Ministers and the Government make decisions”. When challenged about their advice, British experts also emphasised the uncertainty they faced. They depicted their meetings not as ceremonies in which the scientific solution to the covid-19 problem was revealed to the government, but rather as heated deliberations in which fresh and conflicting information about the virus was constantly being discussed: what Bruno Latour calls “science in the making”, as opposed to “ready-made science”. For example, on March 17th Vallance stated before the Health and Social Care Select Committee that “If you think SAGE is a cosy consensus of agreeing, you’re very wrong indeed”.

Italian sociologist Luigi Pellizzoni has similarly pointed out an oscillation between the role of the expert demanding full trust from the public and the role of the scientist who, when things go wrong, blames citizens for their demand for certainty. The result is confusion and suspicion among the public, and a reinforcement of conspiratorial beliefs according to which scientists are hired guns of powerful interests and science is merely a continuation of politics by other means. In this way, the gulf between those who decry a populist aversion to science and those who denounce its technocratic perversion cannot but widen, as I suggested in a recent paper.

Epidemiological (like geophysical) expert advice contains sociological and normative assumptions

Expert advice about how to respond to a natural phenomenon, like intense seismic activity or a rapidly spreading virus, will inevitably contain sociological assumptions, i.e. assumptions about how people will behave in relation to the natural phenomenon itself and in relation to what public authorities (and their law enforcers) will do. It also contains normative (or moral) assumptions about what is the legitimate course of action in response to a disaster. In most cases, these assumptions remain implicit, which can create various problems: certain options that might be valuable are not even considered, and the whole process is less transparent, potentially fostering distrust.

In the L’Aquila case, the idea of evacuating the town or of advising the inhabitants to temporarily leave their homes if these had not been retrofitted was simply out of the question. The mayor closed the schools for two days in late March, but most of the experts and decisionmakers involved, especially those who worked at the national level and were not residing in L’Aquila, believed that doing anything more radical would have been utterly excessive at the time. A newspaper condensed the opinion of US seismologist Richard Allen the day after the quake by writing that “it is not possible to evacuate whole cities without precise data” about where and when an earthquake is going to hit. The interview suggested that this impossibility stems from our lack of seismological predictive power, but in fact it is either a normative judgment based on the idea that too much time, money, and wellbeing would be dissipated without clear benefits, or a sociological judgment based on the idea that people would resist evacuation.

The important issue here is not whether a certain form of disaster response is a good or a bad idea, but that judgments of the sort “it is impossible to respond in this way” very often neglect to acknowledge the standards and information on which they are based. And there are good reasons to believe that this rhetorical loophole is especially common in judgments that, by decrying certain measures as impossible, simply ratify the status quo and “business as usual”. Our societies rest on a deeply ingrained assumption that “the show must go on”, so that reassuring people is much less problematic than alarming them that something terrible is going to happen. Antonello Ciccozzi, an anthropologist who testified as an expert witness in the L’Aquila trial, expressed this idea by arguing that while the concepts of alarmism and false alarm are well established in ordinary language (and also have a distinctive legal existence, as in article 658 of the Italian criminal code, which expressly proscribes and punishes false alarm [procurato allarme]), their opposites have no real semantic existence, occupying instead a “symbolic void”. This is why he coined a new term, “reassurism” (rassicurazionismo), to mean a disastrous and negligent reassurance, which he used to interpret the rhetoric of earth scientists and public authorities in 2009 and which he has applied to the current management of the covid-19 crisis.

Pushing the earthquake-virus analogy further, several clues suggest that the scientists who provided advice on covid-19 in Britain greatly limited the range of possible options because they were making sociological and normative assumptions. According to Reuters, “the scientific committees that advised Johnson didn’t study, until mid-March, the option of the kind of stringent lockdown adopted early on in China”, on the grounds that Britons would not accept such restrictions. This of course contained all sorts of sociological and moral assumptions about Britain and China, about democracies and autocracies, about political legitimacy and institutional trust. It is hard to establish whether the government explicitly delimited the range of possible policies on which expert advice was required, whether experts shared these assumptions anyway, or whether experts actually influenced the government by excluding certain options from the start. But by and large, these assumptions remained implicit. They were properly questioned only after several European countries started to adopt stringent counter-measures to stop the virus and new studies predicted up to half a million deaths in Britain, forcing the government to reconsider what had previously been deemed a sociological or normative impossibility.

It is true that, in stark contrast to the CMR in L’Aquila, where social science was not represented at all, SAGE has activated its behavioural science subsection, SPI-B (Scientific Pandemic Influenza Group on Behaviours). Several commentators have argued that this section, by advancing ideas that resonated with broader libertarian paternalist sensibilities among elite advisors and policymakers, had a significant influence in the early stage of the UK response to covid-19. There is certainly some truth to that, but my bet is that the implicit assumptions of policymakers and epidemiologists were much more decisive. Briefs of SPI-B meetings in February and March reveal concerns about unintended consequences of, and social resistance to, measures such as school closures and the isolation of the elderly, but they are far from containing a full-fledged defence of a “laissez faire” approach. The statements reported in the minutes are striking in their prudence, emphasising the uncertainties and even disagreements among members of the section. This leads us to consider a third point, i.e. the degree to which experts, along with their implicit or explicit assumptions, managed to exert an influence over policymakers and were able to confront them when they had reasons to do so.

Speaking truth to power or speaking power to truth?

Scientists gain much from being appointed to expert committees: prestige, the prospect of influencing policy, better working conditions, and, less frequently, financial incentives. Politicians also gain something: better, more rational decisions that boost their legitimacy; the possibility of justifying predetermined policies on apolitical, objective grounds; a scapegoat they can use in case things go wrong; an easy way to make allies and expand one’s network by distributing benefits. But although both sides gain, they are far from being on an equal footing: expert commissions and groups are established by ministers, not the other way around. This platitude testifies to the deep asymmetry between experts and policymakers. We have good reasons to think that, under certain circumstances, such an asymmetric relation prevents scientific experts from fully voicing their opinions on the one hand, and emboldens policymakers into thinking that they should not be given lessons by their subordinates on the other. Thanks to the high popularity of the 2019 television series Chernobyl, many now find the best exemplification of such arrogance and lack of criticism in how the 1986 nuclear disaster was managed by both engineers and public officials.

There is little doubt that something of the sort occurred in L’Aquila. Several pieces of evidence show that Bertolaso did not summon the CMR meeting to get a better picture of the earthquake swarm that was occurring in the region. In his own words, the meeting was meant as a “media ploy” to reassure the Aquilani. But how could he be so sure that the situation in L’Aquila did not require his attention? It seems that one of the main reasons is that he had his own seismological theory to make sense of what was going on. Bertolaso believed that seismic swarms do not increase the odds of a strong earthquake, but on the contrary that they decrease such odds because small shocks discharge the total amount of energy contained in the earth. Most seismologists would disagree with this claim: low-intensity tremors technically release energy, but this does not amount to a favourable discharge of energy that decreases the odds of a big quake because magnitudes are based on a logarithmic scale, and a magnitude 4 earthquake releases a negligible quantity of energy compared to that released by a magnitude 6 earthquake (and, more generally, to the energy stored in an active fault zone). But scientists appear to have been much too cautious in confronting him and criticising his flawed theory. Bertolaso testified in court that in the course of a decade he had mentioned the theory of the favourable discharge of energy “dozens of times” to various earth scientists (including some of the defendants) and that “nobody ever raised any objection about that”. Moreover, both Bertolaso’s deputy and a volcanologist who was the most senior member of the CMR alluded to the theory during the meeting and in interviews given to local media in L’Aquila. A seismologist testified that he did not feel like contradicting another member of the commission (and a more senior one at that) in front of an unqualified public and so decided to change the topic instead. Such missed objections created the conditions under which the “discharge of energy” as a “positive phenomenon” became a comforting refrain that circulated first among civil protection officials and policymakers, and then among the Aquilani as well.
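To make the arithmetic concrete, here is a back-of-the-envelope calculation using the standard Gutenberg–Richter energy–magnitude relation (a textbook approximation, not something discussed at the CMR meeting), which gives the radiated seismic energy E in joules as:

\[
\log_{10} E \approx 1.5\,M + 4.8
\qquad\Longrightarrow\qquad
\frac{E(M{=}6)}{E(M{=}4)} = 10^{1.5 \times (6-4)} = 10^{3} = 1000.
\]

On this estimate, a magnitude 4 shock releases roughly a thousandth of the energy of a magnitude 6 earthquake, so even a long swarm of small tremors cannot meaningfully “discharge” the energy stored in an active fault.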

Has something similar occurred in the management of the covid-19 crisis in Britain? As no judicial inquiry has taken place there, the limited available evidence permits nothing more than speculative conjecture. However, there are two main candidate theories that, although lacking proper scientific support, might have guided the actions of the government thanks to their allure of scientificity: “behavioural fatigue” and “herd immunity”. As mentioned above, many think that behavioural fatigue, according to which people would not comply with lockdown restrictions after a certain period of time, so that strict measures could be useless or even detrimental, was the sociological justification of a laissez faire (if not social Darwinist) attitude to the virus. But this account seems to attribute too much influence to behavioural scientists who, for the most part, were cautious and divided on the social consequences of a lockdown. This also finds support in the fact that no public official to my knowledge referred to “behavioural fatigue”, but rather simply to “fatigue”, without explicit reference to an expert report or an authoritative study (as a matter of fact, none of the SPI-B documents ever mentions “fatigue”). I’d like to propose a different interpretation: instead of being a scientific theory approved by behavioural experts, it was rather a storytelling device whose common-sense allure allowed it to take on a life of its own in policy circles, ending up in official speeches and interviews. The vague notion of “fatigue”, which reassuringly suggested that the country and the economy could go on as usual, might have ended up being accepted with little suspicion by many experts as well, especially those of the non-behavioural kind. The concept could have served both as a reassuring belief for public officials and as an argument that could be used to justify delaying (or avoiding) a lockdown. The circulation of “herd immunity” might have followed a similar pattern. Although it is a scientifically legitimate concept, there is evidence that, along with similar formulations such as “building some immunity”, it was never a core strategy of the government, but rather part of a communicative repertoire that could be invoked to justify a delay of the lockdown as well as measures directed only at certain sections of the population, such as the elderly. Only on 23 March did the government change course and abandon these concepts altogether, taking measures similar to those of other European countries.
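For a sense of what “herd immunity” would have implied in numbers (a standard textbook estimate, not a figure drawn from SAGE documents): in the simplest epidemic models, the threshold depends only on the basic reproduction number R0,

\[
p_{\text{crit}} = 1 - \frac{1}{R_0},
\]

so with early estimates of \(R_0 \approx 2.5\) for the new coronavirus, roughly \(1 - 1/2.5 = 60\%\) of the population would need to become immune before the epidemic subsided on its own.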

~ ~ ~

The analogy between how Italian civil protection authorities managed an earthquake swarm in L’Aquila and how the British government responded to covid-19 cannot be pushed too far. Earthquakes and epidemics have different temporalities (a disruptive event limited in space and time on the one hand, a long-lasting process with no strict geographical limits on the other), are subject to different predictive techniques, and demand highly different responses. While a large proportion of Aquilani blamed civil protection authorities immediately after the earthquake, Boris Johnson’s approval rating improved from March to April 2020. However, what happened in L’Aquila remains, to paraphrase Charles Perrow, a textbook case of a “normal accident” of expertise, i.e. a situation in which expert advice ended up being catastrophically bad for systemic reasons, and notably because of how the science-policy interface had developed in the Italian civil protection service. As such, there is much that expert advisors and policymakers can learn from it, whether they are giving advice on and responding to earthquakes, nuclear accidents, terrorism, or a global pandemic.

Federico Brandmayr


Reading Elizabeth Anderson in the time of COVID-19

The pandemic is a good time to reflect on expertise (if you have the luxury).

During this particular emergency, governments appear to pay heed to experts. Or at least they do now that the extent of the crisis is clear. The public and the media show them respect and even reverence. This is especially true of physicians and public health scientists, especially epidemiologists and virologists. To a lesser extent, social scientists specializing in behavior and networks weigh in on how to organize life under partial or complete lockdown and how to make this lockdown effective. Economists are voicing ominous warnings on the magnitude of changes to come.

Japan’s Prime Minister Shinzo Abe at the First Novel Coronavirus Expert Meeting, 16 February 2020

One tempting conclusion is that after decades of being dismissed or scrutinized for their various weaknesses, experts are back with a vengeance. Indeed, it is striking how much more trust politicians and the public are willing to place in epidemiology than, say, in climate science. It is possible that after COVID-19 is overcome, this halo effect will last and many experts will enjoy greater trust, not just the ones whose advice was particularly relevant during the pandemic.

But this prediction may be wishful thinking. The UK government’s abrupt U-turn from mitigation to suppression, while officially justified by scientific advice, is more likely a result of internal rebellion and external criticism. The critics of mitigation appealed to a medley of scientific, ethical, and political considerations against the pursuit of ‘herd immunity’. Some expressed astonishment at the fact that the UK experts arrived at advice so different from that of most other countries, which overwhelmingly backed suppression. The radical uncertainty about how the epidemic will develop, the disagreements about how to ‘flatten the curve’ and contain further damage, as well as the now familiar bouts of fake news, misinformation, and politicization of expertise, all undermine the optimistic ‘return of the experts’ narrative.

Even if the position of experts turns out to be stronger after the pandemic, this is not a reason to forget how complex and hard-won epistemic authority is. Public health scientists who are now considering strategies for containing the pandemic rely on models with inevitably speculative assumptions. Furthermore, in order to make inferences from these models, they have to make judgments about the appropriate levels of harm to the public, the acceptable numbers of dead, the tolerable restrictions on freedom, the likely behavior of masses under lockdown, and so on. These judgments are uncertain and controversial, and disagreements between different experts are often intractable. So even if experts are back, their return should not herald their rule.

Elizabeth Anderson

Professor Elizabeth Anderson of the University of Michigan is a moral philosopher known for her work on expertise and the politics of knowledge. Her writings are a must for anyone who cares about how to define expertise, whether expertise can and should be challenged by laypeople, and what is the proper place of experts in a democracy. Her ideas are as relevant as ever and we recommend two papers in particular. These are classic Anderson papers many of us know and love: they start with a theoretical claim and then illustrate it with historical and contemporary examples of expertise in action.

Anderson, E. (2011). Democracy, Public Policy, and Lay Assessments of Scientific Testimony. Episteme, 8(2), 144-164. doi:10.3366/epi.2011.0013

In this paper, Anderson observes that responsible public policy in a technological society must rely on complex scientific reasoning, which ordinary citizens cannot directly assess. But this inability should not call into question the democratic legitimacy of technologically driven policy. Citizens need not be able to judge whether experts are making justified claims; rather, they need to be able to make what she calls reliable second-order assessments of the consensus of trustworthy scientific experts. Her case study is anthropogenic global warming, and she argues that judging the trustworthiness of climate experts is straightforward ‘for anyone of ordinary education with access to the Web’.

Anderson, E. (2006). The Epistemology of Democracy. Episteme, 3(1-2), 8-22. doi:10.3366/epi.2006.3.1-2.8

This is a paper on how institutions make knowledge, both theoretically and in practice. Theoretically, Anderson reconstructs democracy as an epistemic engine that works through deliberation and votes, arguing that democracy’s success in this task is due to the experimental nature of its institutions, just as John Dewey taught. Her case study is based on Bina Agarwal’s account of community forestry in India and Nepal, where the initial exclusion of women resulted in a failure to solve the problem of firewood and fodder shortages.

We recommend reading these papers alongside Anderson’s answers to our questions below. We sent her these questions before the pandemic struck, so her reflections are on expertise and democracy in general, not on how the relationship has played out in recent weeks:

1. Considering your research and/or work in practice, what makes a good expert?

  • EA: Expertise in any field must join technical knowledge in the field with certain virtues: (i) honesty in communicating findings in the field, including uncertainties about these findings and the most likely alternative possibilities; (ii) conscientiousness in communicating the “whole” truth, in the sense of not omitting findings that are normatively relevant to policymaking, although they may be inconvenient to one or more political views; (iii) avoidance of dogmatism – i.e., a willingness to revise conclusions in light of new evidence; (iv) taking the public seriously: listening to their concerns, which may include distrust of experts, and taking action to earn their trust, rather than dismissing them out of hand or treating them as stupid, even when their concerns are based on misinformation.

2. What are the pressures experts face in your field?

  • EA: As a moral and political philosopher, I am reluctant to claim that there are specifically moral experts, in the sense of people who convey technical conclusions to the public by way of testimony – that is, where we are asking the public to take our word for it in virtue of our being experts, because the considerations for these conclusions are too technical for the public to assess.  Philosophers don’t convey findings to the public by way of testimony.  We offer ideas, arguments, and perspectives to the public, which they can evaluate for themselves.

3. Have you observed any significant changes occurring in recent times in the way experts operate?

  • EA: We now live in a climate of distrust in expertise, of disinformation spread by social media, irresponsible politicized news, and authoritarian regimes, and of propaganda and toxic discourse that has displaced evidence-based, constructive, democratic policymaking with ideas designed to spread distrust and division among citizens.  Some of the distrust in scientific expertise arises from experts themselves, who have failed to take responsibility for bad predictions.  Some experts have also been corrupted by moneyed interests.  Experts need to repair their broken relations with the public.  But it’s not all on them.  Conflict entrepreneurs – including populist politicians and media – deliberately spread lies and unfounded doubts about experts, to create a climate in which they can operate with impunity while in power, without taking responsibility for the consequences.  Spreading doubt about climate change allows fossil fuel interests to wreck the conditions for a sustainable planet. Spreading doubt about economics may allow plutocrats to drive the UK over a no-deal Brexit cliff. Spreading doubt about the safety of vaccines spreads preventable disease while enriching quack doctors.

4. Do you envision any changes in the role of experts in the future?

  • EA: Experts can no longer rely on their technical knowledge alone, in order to be able to play a constructive role in policymaking.  They need to find constructive ways to relate to the public, to engage the public with their findings in ways that both earn their trust and empower the public to distinguish between real experts and those who disseminate lies, propaganda, and toxic discourse.  This will require a reinvigoration of democratic practices in conjunction with science. In the U.S., an exemplary case of what I have in mind is the citizen science undertaken in Flint, Michigan, which exposed the presence of lead in the water and consequent lead poisoning of children.  In this case, experts – doctors and environmental scientists – empowered citizens to collect data from their own water lines, and reason together about the meanings of their findings and what to do about them.  This is democracy in action, empowered by experts in ways that reinforce trust in expertise and democracy alike.

Reading these now, it is hard not to draw connections to the story of expertise during the pandemic. Anderson’s conception of a responsible expert – transparent about value judgments, respectful of the public’s concerns, and properly undogmatic – is a compelling standard against which to evaluate the experts driving the response to the epidemic. But this standard is also tricky to articulate and to apply in the present context. What would it mean for institutions of public health to produce knowledge that is properly representative and practical? Is there a place for citizen science of infectious diseases, or do the urgency and danger of a virus like COVID-19 call for a less distributed, more centralised, and frankly more authoritarian model than Dewey’s? A proper defence of participatory science needs to show that it is not a luxury that can be put aside during a crisis, but rather a necessity. This is far from obvious. What could a citizen science of COVID-19 be? And how can such science command trust in an age of misinformation?

In an email on 26 March, Anderson added that citizen science on COVID-19 is already happening.

We also wonder how Anderson’s view that the trustworthiness of experts is a second-order question (recall that she argues the public need not know the science in order to trust the experts) might help us understand the trustworthiness of epidemiologists in the time of COVID-19. How do they marshal as much trust as they do, at least when they do, and what accounts for the contrast with, say, climate scientists? Is epidemiologists’ knowledge or character somehow superior to that of so many other distrusted experts? Or is there something about the clear and present urgency of a pandemic, and the vividly obvious threat to one’s own life, that makes an expert on it trustworthy? (We say trustworthy, rather than trusted, because, as the precautionary principle recommends, when the risk of tragedy is high it is appropriate to act on less evidence than otherwise.) If so, the proper response to climate scepticism is not better science or better experts as such, but a better representation of urgency and crisis.

There is much more to say, and in the coming weeks the Expertise Under Pressure team will publish its own and invited commentary on the role of experts during this pandemic. But the writings of scholars such as Elizabeth Anderson are an obligatory passage point.

The Expertise Under Pressure team


When Does Explaining Become Explaining Away?

The Last Day of a Condemned Man (1869) by Mihály Munkácsy. In the public domain (Wikimedia Commons).

When Does Explaining Become Explaining Away?

Compassion, Justification and Exculpation in Social Research

27 September 2019, 09:15–17:30

Room SG1, The Alison Richard Building, 7 West Road, Cambridge, CB3 9DT

Convenor

Federico Brandmayr (University of Cambridge)

Overview

A common charge levelled against researchers who study human culture and social behaviour is that their explanations can provide justifications or excuses for ill-intentioned people. Sociologists often encounter this objection when they explain crime and unemployment, historians when they study dictators and genocide, anthropologists when they interpret religious and traditional practices, and psychologists when they assess mental illness and addiction. Although many of these accusations are far-fetched and betray a profound ignorance of social research, we should not underestimate the practical and performative effects social scientists can have in society, nor the fact that social research is often laden with normative assumptions. Where, then, should we draw the boundary between explaining and explaining away, between understanding and agreeing, between finding causes and making excuses? Drawing together perspectives from history, sociology, law and philosophy, the workshop will provide an opportunity to reflect critically on the exculpatory potential of social research.

Speakers and discussants

Gabriel Abend (Universität Luzern)

Anna Alexandrova (University of Cambridge)

Jana Bacevic (University of Cambridge)

Federico Brandmayr (University of Cambridge)

Cléo Chassonnery-Zaïgouche (University of Cambridge)

Livia Holden (University of Oxford)

Stephen John (University of Cambridge)

Hadrien Malier (École des hautes études en sciences sociales)

Nigel Pleasants (University of Exeter)

Marco Santoro (Università di Bologna)

Paulina Sliwa (University of Cambridge)

Stephen Turner (University of South Florida)

Further information

This workshop forms part of the Expertise Under Pressure (EUP) project, funded by the Humanities and Social Change International Foundation. The EUP project’s overarching goal is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach and what realistic demands communities should place on experts.

Queries: Contact Una Yeung


When Does Explaining Become Explaining Away? Compassion, Justification and Exculpation in Social Research

FIRST WORKSHOP – 27 September 2019

Organised by Federico Brandmayr and Anna Alexandrova

“Does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive this as a price, as something that we should be worried about, or as something that we should welcome?”


The Last Day of a Condemned Man (1869) by Mihály Munkácsy (Wikimedia Commons).

The Expertise Under Pressure project hosted its first workshop on 27 September 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation. The overarching goal of Expertise Under Pressure is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach, and what realistic demands communities should place on experts.

The talks and discussions of this first workshop focused on a charge frequently levelled against experts who study human culture and social behaviour: that their explanations can provide justifications or excuses for ill-intentioned people, and that decision-makers acting on their advice might fail to punish or respond effectively to harmful behaviours.

A good way to capture the theme of the workshop is a saying attributed to Germaine de Staël: “tout comprendre, c’est tout pardonner” – “to understand all is to forgive all”. Social scientists perhaps do not intend to understand all that there is, but they generally like the idea of increasing our understanding of the social world. By and large, historians, sociologists, political scientists and anthropologists tend to show that the people they study do certain things not just because they want to, but also because they are driven by various kinds of factors. And the more knowledge we have of these factors, the more choice, responsibility, and agency seem to fade away. This raises the question: does understanding come at the price of undermining our capacity to judge, blame, and punish? And should we conceive of this as a price, as something that we should be worried about, or as something that we should welcome? How should scientific disciplines, professional associations, and individual researchers deal with this issue in their daily practice, and especially in their interventions in public debates and in policymaking contexts? These issues ultimately concern how social knowledge is produced, how it circulates outside academia, and how it is appropriated and misappropriated by different groups in the endless disputes that divide society, disputes in which attributions of credit and blame are widespread.

The one-day event brought together researchers from various academic disciplines, looking at the exculpatory potential of social research. Here is what they came up with.

Livia Holden

Professor Livia Holden (University of Oxford) was the first speaker of the day. With a background in anthropology and socio-legal studies, Holden leads a European Research Council project titled Cultural Expertise in Europe: What is it useful for? The project looks at the role of anthropologists and other cultural experts in advising judges in court cases and policymakers in fields such as immigration law. In her talk, ‘Cultural Expertise and the Fear of Absolution’, she analysed the concept of cultural expertise and described the specific challenges cultural experts face, especially where anthropology enjoys little credit. Drawing on several examples, including her own experience as an expert witness in family law cases, she argued that experts oscillate between the fear of absolution, i.e. the worry that they might excuse harmful acts (such as genital mutilation) on the grounds that they are rooted in cultural traditions, and the fear of condemnation, i.e. the worry of being complicit with colonial rule and repressive criminal justice policies.

Jana Bacevic, Livia Holden, and Hadrien Malier

The following speaker was Hadrien Malier (École des hautes études en sciences sociales), a sociologist who studies policy measures aimed at nudging working-class people into adopting more ‘eco-friendly’ habits. His talk, ‘No (Sociological) Excuses for Not Going Green: Urban Poor Households and Climate Activism in France’, presented the results of an ethnography conducted in two low-income housing projects. The volunteers and activists that Malier followed in these neighbourhoods framed the protection of the environment as an individual and universally distributed moral obligation, independent of privilege, class and education. Climate activists, who are mostly middle-class and educated, recognise the social difference between themselves and the mostly poor people they try to nudge toward eco-friendly habits. But this difference is simply interpreted as proof that people with low income do not know or care enough about the environment. More relevant sociological insights on class differences, including well-supported claims that people with low income have a relatively light ecological footprint, are often dismissed as bad excuses for acts that are detrimental to the environment.

Nigel Pleasants

Dr Nigel Pleasants (University of Exeter) gave the next talk. Pleasants is a philosopher of social science who has written extensively on how sociological and historical knowledge influences our moral judgements. In his recent publications, he focused on various controversies related to historical explanations of the Holocaust. His talk, ‘Social Scientific Explanation and the Fact-Value Distinction’, explored and clarified the relation between excuse and justification. Excuses concern the responsibility of an actor in performing a certain action, while justifications refer to the moral status of an action (i.e. whether it is right or wrong) regardless of the responsibility of the actor that performs it. Drawing on scholarship on the Holocaust, he argued that while explanatory accounts from the social sciences are highly relevant to determine whether a certain act can be excused, the same cannot be said for whether a certain act is justified or not.

Marco Santoro and Nigel Pleasants

The morning session ended with a talk by Professor Marco Santoro (Università di Bologna): ‘Whose Sides (of the Field) Could We Be On? Situatedness, Perspectivism, and Credibility in Social Research’. Santoro is a sociologist who has written on such diverse topics as the notarial profession, popular music, the international circulation of social scientific ideas and the Sicilian mafia. His starting point was a personal experience in which his interpretation of the mafia was harshly criticised by a colleague. In his writings on the topic, he had argued that the mafia can be interpreted as a form of political organisation: a non-state political institution enjoying a certain legitimacy and providing protection and services to its constituency, in a region where poverty runs high and which many see as having been left behind by the Italian state. Scholars who instead saw the mafia as functioning like a company, simply providing services (e.g. protection from violence) in exchange for money, considered his arguments tantamount to a justification of organised crime. This episode inspired Santoro’s forceful defence of a multi-perspectival approach, according to which we should broaden the range of interpretations of a single phenomenon while remaining aware that these perspectives are not morally and politically neutral. Some might put us in dangerous territory, but it is only by seriously advancing them that we can clarify our very moral ideals.

Federico Brandmayr

Opening the afternoon session, Dr Federico Brandmayr (University of Cambridge) reconstructed the debate on ‘sociological excuses’ that took place in France after the country was struck by several deadly terrorist attacks in 2015 and 2016. In his talk, ‘The Political Epistemology of Explanation in Contemporary French Social Thought’, he showed that the very expression ‘sociological excuse’ has clear intellectual and political origins, rooted in US right-wing libertarianism, and argued that it is mainly used in France in relation to accounts of the urban lower class that emphasise poverty, unemployment and stigmatisation. Sociology as a discipline was at the centre of much controversy after the 2015 terrorist attacks, and sociologists reacted in three main ways: some denied the allegations, some reappropriated the derogatory label of excuse by giving it a positive meaning, while others accepted the criticism and called for a reform of sociology. Accordingly, Dr Brandmayr argued that French sociology should not be considered a monolithic bloc under attack from political quarters, but rather a heterogeneous complex of different epistemic communities.

Stephen Turner, Federico Brandmayr, and Stephen John

In a similar historical vein, Professor Stephen Turner (University of South Florida) gave a talk titled ‘Explaining Away Crime: The Race Narrative in American Sociology’. A renowned historian and philosopher of social science, he reconstructed the history of how social scientists have dealt with the fact that crime rates for Blacks in the US have always been higher than for other ethnic groups. Generally speaking, social scientists wanted to avoid racist accounts of this gap (such as those based on a supposed genetic predisposition of Black people to commit crimes), but they were also dissatisfied with accounts that explained the gap by simply pointing to social factors such as poverty and discrimination. This dissatisfaction stemmed partly from theoretical inconsistencies (such as the fact that Black crime mainly targets Black people, whereas one would expect discrimination to cause Blacks to act violently against Whites), but also from the sense that such accounts functioned as excuses, pointing to a deficiency in the agent and implying a form of inferiority. Spanning more than a century, Turner’s historical reconstruction identified three basic strategies US social scientists adopted to overcome this dilemma and delineated their ethical implications.

Finally, Gabriel Abend (Universität Luzern) took a more philosophical approach in a talk titled ‘Decisions, “Decisions”, and Moral Evaluation’. His talk built on a theoretical framework that he has recently developed in several publications, and which provides the foundation for the study of decisionism, i.e. the fact that people use decision (or choice) concepts and define certain things as decisions. Decisionism has clear moral and practical implications, as people are generally held accountable and subject to moral judgment when their acts are interpreted as decisions. Abend provided a striking list of examples from scientific journals in which the concept of decision was used to describe such unrelated things as bees’ foraging activities, saccadic eye movements and plant flowering. While these instances of decisionism offer plenty of material for the empirical sociologist, he raised concerns about the risk of conceptual stretching and advocated a responsible conceptual practice.

The workshop was a truly interdisciplinary inquiry, in the spirit of CRASSH. All the interventions, whether philosophical, sociological, historical or legal in approach, converged toward increasing our knowledge of the relationship between explaining and understanding on the one hand, and excusing and justifying on the other. Thanks to the lively and thorough responses given by an impressive battery of discussants (Dr Anna Alexandrova, Dr Jana Bacevic, Dr Cléo Chassonnery-Zaïgouche and Dr Stephen John), the talks were followed by fruitful exchanges. A special issue based on the workshop papers is in preparation and will soon be submitted to a prominent interdisciplinary journal.

Text by Federico Brandmayr

Pictures taken by Judith Weik