Mindful of AI: Language, Technology and Mental Health
By Stefanie Ullmann
Bill Byrne (University of Cambridge), Shauna Concannon (University of Cambridge), Ann Copestake (University of Cambridge), Ian Roberts (University of Cambridge), Marcus Tomalin (University of Cambridge), Stefanie Ullmann (University of Cambridge)
Language-based Artificial Intelligence (AI) is having an ever greater impact on how we communicate and interact. Whether overtly or covertly, such systems are essential components in smartphones, social media sites, streaming platforms, virtual personal assistants, and smart speakers. Long before the worldwide Covid-19 lockdowns, these devices and services were already affecting not only our daily routines and behaviours, but also our ways of thinking, our emotional well-being and our mental health. Social media sites create new opportunities for peer-group pressure, which can heighten feelings of anxiety, depression and loneliness (especially in young people); malicious twitterbots can influence our emotional responses to important events; and online hate speech and cyberbullying can cause victims to have suicidal thoughts.
Consequently, there are frequent calls for stricter regulation of these technologies, and there are growing concerns about the ethical appropriateness of allowing companies to inculcate addictive behaviours to increase profitability. Infinite scrolls and ‘Someone is typing a comment’ indicators in messaging apps keep us watching and waiting, and we repeatedly return to check the number of ‘likes’ our posts have received. The underlying software has often been purposefully crafted to trigger biochemical responses in our brains (e.g. the release of serotonin and/or dopamine), and these neurotransmitters strongly influence our reward-related cognition. The powerful psychological impact of such technologies is not always a positive one. Indeed, it sometimes seems appropriate that those who interact with these technologies, and those who inject drugs, are all called ‘users’.
However, while AI-based communications technologies undoubtedly have the potential to harm our mental health, they can also offer forms of psychological support. Machine Learning systems can measure the physical and mental well-being of users by evaluating their language use in social media posts, and a variety of empathetic therapy, care, and mental health chatbots, apps, and conversational agents are already widely available. These applications demonstrate some of the ways in which well-designed language-based AI technologies can offer significant psychological and practical support to especially vulnerable social groups. Indeed, medical professionals have started to consider the possibility that the future of mental healthcare will inevitably be digital, at least in part. Yet, despite their potential benefits, developments such as these raise a number of non-trivial regulatory and ethical concerns.
This two-day virtual interdisciplinary workshop brings together a diverse group of researchers from academia, industry and government, with specialisms in many different disciplines, to discuss the different effects, both positive and negative, that AI-based communications technologies are currently having, and will have, on mental health and well-being.
Speakers & Structure of Event:
Thursday 1 October
Session 1: Social Media and Mental Health
Speakers: Michelle O’Reilly (University of Leicester), Amy Orben (University of Cambridge)
The workshop comprises four sessions, and you can register for more than one; register for all four if you wish to attend the entire workshop. Follow the links below to register for the individual sessions on Eventbrite:
Economists in the City
By Cléo Chassonnery-Zaïgouche
When and why did the expertise and knowledge of economists become so highly valued in the world of public policy? Our blogged conference explores this question by bringing together historians of economics, economists, urban policy experts and social scientists. Blogposts from each participant will be published on a rolling basis. After we have published each contribution, we will invite other contributors to comment in response, and will offer our own reflections on some of the key debates and issues.
Tackling the Problem of Online Hate Speech
By Stefanie Ullmann
In recent years, the automatic detection of online hate speech has become an active research topic in machine learning. This has been prompted by increasing anxieties about the prevalence of hate speech on social media, and the psychological and societal harms that offensive messages can cause. These anxieties have only increased in recent weeks as many countries have been in lockdown due to the Covid-19 pandemic (L1GHT Toxicity during Coronavirus report). António Guterres, the Secretary-General of the United Nations, has explicitly acknowledged that the ongoing crisis has caused a marked increase in hate speech.
Online hate speech presents particular problems, especially in modern liberal democracies, and dealing with it forces us to reflect carefully upon the tension between free speech (i.e., allowing people to say what they want) and protective censorship (i.e., safeguarding vulnerable groups from abusive or threatening language). Most social media sites have adopted self-imposed definitions, guidelines, and policies for handling toxic messages, and human beings employed as content moderators determine whether or not certain posts are offensive and should be removed. However, this framework is unsustainable. For a start, offensive posts are only removed retrospectively, after the harm has already been caused. Further, there are far too many instances of hate speech for human moderators to assess them all. In addition, it is problematic that unelected corporations such as Facebook and Twitter should be the gate-keepers of free speech. Who are they to regulate our democracies by deciding what we can and can’t say?
Towards the end of 2019, two Cambridge-based researchers, Dr Marcus Tomalin and Dr Stefanie Ullmann, proposed a different approach. Their framework demonstrated how an automated hate speech detection system could be used to identify a message as offensive before it was posted. The message would then be temporarily quarantined, and the intended recipient would receive a warning message indicating the degree to which the quarantined message may be offensive. That person could then choose either to read the message or to prevent it from appearing. This approach achieves an appropriate balance between libertarian and authoritarian tendencies: it allows people to write whatever they want, but recipients are also free to read only those messages they wish to read. Crucially, this framework obviates the need for corporations or national governments to make decisions about which messages are acceptable and which are not. As Dr Ullmann puts it, “quarantining redirects control back into the hands of the user. No one should be at the mercy of someone else’s senseless hate and abuse, and quarantining protects users whilst managing the balancing act of ensuring free speech and avoiding censorship.”
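As a rough sketch only (not the researchers' actual implementation), the quarantining flow can be expressed in a few lines of Python. The `toxicity_score` function below is a deliberately crude placeholder standing in for the trained neural classifier, and the threshold value is an illustrative assumption:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative cut-off; the real framework reports a graded degree of offensiveness.
QUARANTINE_THRESHOLD = 0.5

def toxicity_score(message: str) -> float:
    """Placeholder heuristic standing in for the trained detection model.
    Returns a score in [0, 1]; higher means more likely to be offensive."""
    offensive_terms = {"idiot", "scum", "vermin"}
    words = message.lower().split()
    hits = sum(1 for word in words if word.strip(".,!?") in offensive_terms)
    return min(1.0, 5 * hits / max(1, len(words)))

@dataclass
class Delivery:
    quarantined: bool       # True if the message was held back from the feed
    warning: Optional[str]  # shown to the recipient instead of the message
    body: str               # revealed only if the recipient opts in

def deliver(message: str) -> Delivery:
    """Screen a message before it reaches the recipient's feed."""
    score = toxicity_score(message)
    if score >= QUARANTINE_THRESHOLD:
        warning = f"This message may be offensive (score {score:.2f}). Read anyway?"
        return Delivery(quarantined=True, warning=warning, body=message)
    return Delivery(quarantined=False, warning=None, body=message)
```

Note that a recipient who declines simply never sees `body`; the sender's message is never deleted, which is the sense in which the framework avoids outright censorship.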
Nicholas Foong, a fourth-year student in the Engineering Department supervised by Dr Tomalin, has now developed both a state-of-the-art automatic hate speech detection system and an app that demonstrates how the system can be used to quarantine offensive messages by blurring them until the recipient actively chooses to read them. An Android version of the app is available, along with a short demo video of the app in action.
The state-of-the-art system is able to correctly identify up to 91% of offensive posts. In the app, it is used to automatically detect and quarantine hateful posts in a simulated social media feed in real time. The app demonstrates that the trained system can run locally on mobile phones, taking up just 9MB of space and requiring no internet connection to function.
Nicholas Foong, app developer, Department of Engineering, Cambridge
Despite these promising developments, there is still a lot of work that needs to be done if the problem of online hate speech is going to be solved convincingly. The detection systems themselves need to be able to cope with different linguistic registers and styles (e.g., irony, satire), and the training data must be annotated accurately, to avoid introducing unwanted biases. In addition, since hate speech increasingly contains both words and images, the next generation of automated detection systems will need to handle multimodal input. Nonetheless, the quarantining framework offers an effective practical way of incorporating such technologies into our regular online interactions. And, as we adjust to life in lockdown, we can perhaps appreciate more than ever how quarantining can help to keep us all safe.
Are the experts responsible for bad disaster response?
By Federico Brandmayr
A few lessons for the coronavirus outbreak from L’Aquila
~ ~ ~
A few weeks ago, a Facebook group called 3e32 and based in the Italian city of L’Aquila posted a message stating: “whether it is a virus or lack of prevention, science should always protect its independence from the power of those who guarantee the interests of the few at the expense of the many”. The statement was followed by a picture of a rally, showing people marching and carrying a banner which read: “POWER DICTATES, ‘SCIENCE’ OBEYS, JUSTICE ABSOLVES”.
What was that all about? “3e32” refers to the moment at which a deadly earthquake struck L’Aquila on April 6th 2009 (at 3:32 in the morning). It is now the name of a collective founded shortly after the disaster. The picture was taken on November 13th 2014: a few days earlier, a court of appeals had acquitted six earth scientists of charges of negligence and manslaughter, for which they had previously been sentenced to six years in prison.
Even today, many people believe that scientists were prosecuted and convicted in L’Aquila for “failing to predict an earthquake”, as a commentator put it in 2012. If this were the case, it would be shocking indeed: earthquake prediction is seen by most seismologists as a hopeless endeavour (to the point that there is a stigma associated with it in the community), and the probabilistic concept of forecast is preferred instead. But, in fact, things are more complicated, as I and others have shown. What prosecutors and plaintiffs claimed was that in a city that had been rattled for months by tremors, where cracks had started to appear on many buildings, where people were frightened and some had started to sleep in their cars, a group of scientists had come to L’Aquila to say that there was no danger and that a strong earthquake was highly unlikely. Prosecutors attributed to the group of experts, some of whom were part of an official body called the National Commission for the Forecast and Prevention of Major Risks (CMR), a negative prediction; in other terms, they claimed that the experts had inferred “evidence of absence” from “absence of evidence”. This gross mistake was considered a result of the experts submitting to the injunctions of the chief of the civil protection service, Guido Bertolaso, who wanted Aquilani to keep calm and carry on, instead of following the best scientific evidence available. Less than a week after the highly publicised expert meeting, a 6.3 magnitude quake struck the city, killing more than 300 people.
The Facebook post, published at the end of March, suggests a link between the management of the disaster in L’Aquila and the response to the covid-19 outbreak. The echo was made all the starker by the fact that, just a couple of weeks before the post, Bertolaso had come once again to the forefront of Italian public life, this time not as chief of the civil protection service but as special advisor to the president of the Lombardy region in the fight against covid-19. But the analogies run deeper than the simple reappearance of the same characters. As during and after all disasters, attributions of blame are today ubiquitous. Scientists and experts are under the spotlight as they were in L’Aquila. Policymakers and the public expect highly accurate predictions and want them quickly. Depending on how a country is doing in containing the virus, experts will be praised or blamed, sometimes as much as elected representatives.
In Italy, for example, many now ask why the province of Bergamo was not declared a “red zone” in late February, meaning that non-essential companies were not closed down, despite clear evidence of uncontrolled outbreaks in several towns in the area (various other towns in Italy had been declared “red zones” since February 23rd). Only on March 8th did the national government decide to lock down the whole region of Lombardy, and the rest of the country two days later. The UK government has been similarly accused of complacency in delaying school closures and bans on mass gatherings. Public accusations voiced by journalists, researchers, and members of the public provoked blame games between state agencies, levels of government, elected representatives, and expert advisors. In Italy, following extensive media coverage of public officials’ omissions and commissions in the crucial weeks between February 21st and March 8th, regional authorities and the national government now blame each other for the delay. In a similar way, the UK government and the Mayor of London have pointed fingers at each other after photos taken during the lockdown showed overcrowded Tube trains in London.
It would be easy to argue, with the benefit of hindsight, that more should have been done, and more promptly, to stop the virus, and not only in terms of long-term prevention or preparedness, but also in terms of immediate response. Immediate response to disaster includes such decisions as country-wide lockdowns to block the spread of a virus (like we are witnessing now), the evacuation of populations from unsafe areas (such as the 1976 Guadeloupe evacuation), the shutdown of an industrial facility or transport system (such as the closure of Northern European airspace after the Eyjafjallajökull eruption in 2010), or the confinement of hazardous materials (such as the removal of radioactive debris during the Chernobyl disaster). Focusing on these kinds of immediate response, I offer three insights from L’Aquila that seem relevant to understanding the pressures expert advisors dealing with covid-19 are facing today in Britain.
Experts go back to being scientists when things get messy
When decisions informed by scientific experts turn out to be mistaken, experts tend to defend themselves by drawing a thick boundary between science and policy, the same boundary that they eagerly cross in times of plenty to seize the opportunities of being in the situation room. Falling back into the role of scientists, they emphasise the uncertainties and controversies that inevitably affect scientific research.
Although most of the CMR experts in L’Aquila denied after the earthquake that they had made reassuring statements or a “negative prediction”, they still had to explain why they were not responsible for what had happened. This was done in several ways. First, the draft minutes of the meeting were revised after the earthquake so as to make the statements less categorical and more probabilistic. Secondly, they emphasised the highly uncertain and tentative nature of seismological knowledge, arguing for example that “at the present stage of our knowledge” nothing allows us to consider seismic swarms (like the one that was ongoing in L’Aquila before April 6th 2009) as precursors of strong earthquakes, a claim which is disputed within seismology. Finally, the defendants argued that the meeting was not addressed to the population and local authorities of L’Aquila (as several announcements of the civil protection service suggested), but rather to the civil protection service only, which then had to take the appropriate measures autonomously. They claimed that scientists only provide advice, and that it is public officials and elected representatives who bear responsibility for any decision taken. This was part of a broader strategy to frame the meeting as a meeting of scientists, while the prosecution tried to frame it as a meeting of civil servants.
In Britain, the main expert body that has provided advice to the government is SAGE (Scientific Advisory Group for Emergencies), formed by various subcommittees, such as NERVTAG (New and Emerging Respiratory Virus Threats Advisory Group). These groups, along with the chief scientific adviser, Sir Patrick Vallance, have been under intense scrutiny over the past weeks. Questioned by Reuters about why the covid-19 threat level was not raised from “moderate” to “high” at the end of February, when the virus was spreading rapidly and lethally in Italy, a SAGE spokesperson responded that “SAGE and advisers provide advice, while Ministers and the Government make decisions”. When challenged about their advice, British experts also emphasised the uncertainty they faced. They depicted their meetings not as ceremonies in which the scientific solution to the covid-19 problem was revealed to the government, but rather as heated deliberations in which fresh and conflicting information about the virus was constantly being discussed: what Bruno Latour calls “science in the making”, and not what he calls “ready-made science”. For example, on March 17th Vallance stated before the Health and Social Care Select Committee that “If you think SAGE is a cosy consensus of agreeing, you’re very wrong indeed”.
Italian sociologist Luigi Pellizzoni has similarly pointed to an oscillation between the role of the expert demanding full trust from the public and the role of the scientist who, when things go wrong, blames citizens for expecting certainty. The result is confusion and suspicion among the public, and a reinforcement of conspiratorial beliefs according to which scientists are hired guns of powerful interests and science is merely a continuation of politics by other means. In this way, the gulf between those who decry a populist aversion to science and those who denounce its technocratic perversion cannot but widen, as I suggested in a recent paper.
Epidemiological (like geophysical) expert advice contains sociological and normative assumptions
Expert advice about how to respond to a natural phenomenon, like intense seismic activity or a rapidly spreading virus, will inevitably contain sociological assumptions, i.e. assumptions about how people will behave in relation to the natural phenomenon itself and in relation to what public authorities (and their law enforcers) will do. They also contain normative (or moral) assumptions, about what is the legitimate course of action in response to a disaster. In most cases, these assumptions remain implicit, which can create various problems: certain options that might be valuable are not even considered and the whole process is less transparent, potentially fostering distrust.
In the L’Aquila case, the idea of evacuating the town, or of advising the inhabitants to temporarily leave their homes if these had not been retrofitted, was simply out of the question. The mayor closed the schools for two days in late March, but most of the experts and decision-makers involved, especially those who worked at the national level and did not reside in L’Aquila, believed that doing anything more radical would have been utterly excessive at the time. A newspaper condensed the opinion of US seismologist Richard Allen the day after the quake by writing that “it is not possible to evacuate whole cities without precise data” about where and when an earthquake is going to hit. The interview suggested that this impossibility stems from our lack of seismological predictive power, but in fact it is either a normative judgment based on the idea that too much time, money, and wellbeing would be dissipated without clear benefits, or a sociological judgment based on the idea that people would resist evacuation.
The important issue here is not whether a certain form of disaster response is a good or a bad idea, but that judgments of the sort “it is impossible to respond in this way” very often neglect to acknowledge the standards and information on which they are based. And there are good reasons to believe that this rhetorical loophole is especially characteristic of judgments that, by decrying certain measures as impossible, simply ratify the status quo and “business as usual”. Our societies rest on a deeply ingrained assumption that “the show must go on”, so that reassuring people is much less problematic than alarming them that something terrible is going to happen. Antonello Ciccozzi, an anthropologist who testified as an expert witness in the L’Aquila trial, expressed this idea by arguing that while the concepts of alarmism and false alarm are well established in ordinary language (and also have a distinctive legal existence, as in article 658 of the Italian criminal code, which expressly proscribes and punishes false alarm [procurato allarme]), their opposites have no real semantic existence, occupying instead a “symbolic void”. This is why he coined a new term, “reassurism” (rassicurazionismo), to mean a disastrous and negligent reassurance, which he used to interpret the rhetoric of earth scientists and public authorities in 2009 and which he has applied to the current management of the covid-19 crisis.
Pushing the earthquake-virus analogy further, several clues suggest that the scientists who provided advice on covid-19 in Britain greatly limited the range of possible options because they were making sociological and normative assumptions. According to Reuters, “the scientific committees that advised Johnson didn’t study, until mid-March, the option of the kind of stringent lockdown adopted early on in China”, on the grounds that Britons would not accept such restrictions. This of course contained all sorts of sociological and moral assumptions about Britain and China, about democracies and autocracies, and about political legitimacy and institutional trust. It is hard to establish whether the government explicitly delimited the range of possible policies on which expert advice was required, whether experts shared these assumptions anyway, or whether experts actually influenced the government by excluding certain options from the start. But by and large, these assumptions remained implicit. They were properly questioned only after several European countries started to adopt stringent counter-measures to stop the virus and new studies predicted up to half a million deaths in Britain, forcing the government to reconsider what had previously been deemed a sociological or normative impossibility.
It is true that, in stark contrast to the CMR in L’Aquila, where social science was not represented at all, SAGE has activated its behavioural science subsection, SPI-B (Scientific Pandemic Influenza Advisory Committee – Behaviour). Several commentators have argued that this section, by advancing ideas that resonated with broader libertarian paternalistic sensibilities among elite advisors and policymakers, had a significant influence on the early stage of the UK response to covid-19. There is certainly some truth to that, but my bet is that the implicit assumptions of policymakers and epidemiologists were much more decisive. Briefs of SPI-B meetings in February and March reveal concerns about the unintended consequences of, and social resistance to, measures such as school closures and the isolation of the elderly, but they are far from containing a full-fledged defence of a “laissez faire” approach. The statements reported in the minutes are striking in their prudence, emphasising the uncertainties and even disagreements among members of the section. This leads us to consider a third point, i.e. the degree to which experts, along with their implicit or explicit assumptions, managed to exert an influence over policymakers and were able to confront them when they had reasons to do so.
Speaking truth to power or speaking power to truth?
Scientists gain much from being appointed to expert committees: prestige; the prospect of influencing policy; better working conditions; less frequently, financial incentives. Politicians also gain something: better, more rational decisions that boost their legitimacy; the possibility of justifying predetermined policies on a-political, objective grounds; a scapegoat that they can use in case things go wrong; an easy way to make allies and expand one’s network by distributing benefits. But although both sides gain, they are far from being on an equal footing: expert commissions and groups are established by ministers, not the other way around. This platitude testifies to the deep asymmetry between experts and policymakers. We have good reasons to think that, under certain circumstances, such an asymmetric relation prevents scientific experts from fully voicing their opinions on the one hand, and emboldens policymakers into thinking that they should not be given lessons by their subordinates on the other. Thanks to the high popularity of the 2019 television series Chernobyl, many now find the best exemplification of such arrogance and lack of criticism in how the Chernobyl nuclear disaster was managed by both engineers and public officials.
There is little doubt that something of the sort occurred in L’Aquila. Several pieces of evidence show that Bertolaso did not summon the CMR meeting to get a better picture of the earthquake swarm that was occurring in the region. In his own words, the meeting was meant as a “media ploy” to reassure the Aquilani. But how could he be so sure that the situation in L’Aquila did not require his attention? It seems that one of the main reasons is that he had his own seismological theory to make sense of what was going on. Bertolaso believed that seismic swarms do not increase the odds of a strong earthquake, but on the contrary that they decrease such odds because small shocks discharge the total amount of energy contained in the earth. Most seismologists would disagree with this claim: low-intensity tremors technically release energy, but this does not amount to a favourable discharge of energy that decreases the odds of a big quake because magnitudes are based on a logarithmic scale, and a magnitude 4 earthquake releases a negligible quantity of energy compared to that released by a magnitude 6 earthquake (and, more generally, to the energy stored in an active fault zone). But scientists appear to have been much too cautious in confronting him and criticising his flawed theory. Bertolaso testified in court that in the course of a decade he had mentioned the theory of the favourable discharge of energy “dozens of times” to various earth scientists (including some of the defendants) and that “nobody ever raised any objection about that”. Moreover, both Bertolaso’s deputy and a volcanologist who was the most senior member of the CMR alluded to the theory during the meeting and in interviews given to local media in L’Aquila. A seismologist testified that he did not feel like contradicting another member of the commission (and a more senior one at that) in front of an unqualified public and so decided to change the topic instead. 
Such missed objections created the conditions under which the “discharge of energy” as a “positive phenomenon” became a comforting refrain that circulated first among civil protection officials and policymakers, and then among the Aquilani as well.
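The seismologists' point about logarithmic magnitudes is easy to put in numbers. As a back-of-the-envelope sketch (using the conventional Gutenberg-Richter scaling, in which the energy released grows by a factor of 10^1.5, roughly 32, per unit of magnitude):

```python
def energy_ratio(m_small: float, m_large: float) -> float:
    """Approximate ratio of seismic energy released by two earthquakes,
    following the Gutenberg-Richter relation log10(E) = 1.5 * M + const."""
    return 10 ** (1.5 * (m_large - m_small))

# A magnitude 6 quake releases about 1000 times the energy of a magnitude 4,
# which is why months of small tremors cannot meaningfully "discharge"
# the energy stored in an active fault zone.
ratio = energy_ratio(4.0, 6.0)
```

On this arithmetic, even hundreds of magnitude 3 or 4 shocks dissipate only a negligible fraction of the energy released in a single magnitude 6 event, which is the flaw in the "favourable discharge" theory.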
Has something similar occurred in the management of the covid-19 crisis in Britain? As no judicial inquiry has taken place there, the limited evidence permits nothing more than speculative conjecture. However, there are two main candidate theories that, although lacking proper scientific support, might have guided the actions of the government thanks to their allure of scientificity: “behavioural fatigue” and “herd immunity”. As mentioned above, many think that behavioural fatigue, according to which people would not comply with lockdown restrictions after a certain period of time, so that strict measures could be useless or even detrimental, has been the sociological justification of a laissez faire (if not social Darwinist) attitude to the virus. But this account seems to give too much leverage to behavioural scientists who, for the most part, were cautious and divided on the social consequences of a lockdown. This also finds support in the fact that no public official, to my knowledge, referred to “behavioural fatigue” but rather simply to “fatigue”, without explicit reference to an expert report or an authoritative study (as a matter of fact, none of the SPI-B documents ever mentions “fatigue”). I’d like to propose a different interpretation: instead of being a scientific theory approved by behavioural experts, it was rather a storytelling device with a common-sense allure that allowed it to take on a life of its own in policy circles, ending up in official speeches and interviews. The vague notion of “fatigue”, which reassuringly suggested that the country and the economy could go on as usual, might have ended up being accepted with little suspicion by many experts as well, especially those of the non-behavioural kind. The concept could have served both as a reassuring belief for public officials and as an argument that could be used to justify delaying (or avoiding) a lockdown.
The circulation of “herd immunity” might have followed a similar pattern. Although it is a scientifically legitimate concept, there is evidence that, along with similar formulations such as “building some immunity”, it was never a core strategy of the government, but rather part of a communicative repertoire that could be invoked to justify delaying the lockdown as well as measures directed only at certain sections of the population, such as the elderly. Only on 23 March did the government change course and abandon these concepts altogether, taking measures similar to those of other European countries.
~ ~ ~
The analogy between how Italian civil protection authorities managed an earthquake swarm in L’Aquila and how the British government responded to covid-19 cannot be pushed too far. Earthquakes and epidemics have different temporalities (a disruptive event limited in space and time on the one hand, a long-lasting process with no strict geographical limits on the other), are subject to different predictive techniques, and demand highly different responses. While a large proportion of Aquilani blamed civil protection authorities immediately after the earthquake, Boris Johnson’s approval rating has improved from March to April 2020. However, what happened in L’Aquila remains, to paraphrase Charles Perrow, a textbook case of a “normal accident” of expertise, i.e. a situation in which expert advice ended up being catastrophically bad for systemic reasons, and notably for how the science-policy interface had developed in the Italian civil protection service. As such, there is much that expert advisors and policymakers can learn from it, whether they are giving advice and responding to earthquakes, nuclear accidents, terrorism, or a global pandemic.
Reading Elizabeth Anderson in the time of COVID-19
Federico Brandmayr
The pandemic is a good time to reflect on expertise (if you have the luxury). During this particular emergency, governments appear to pay heed to experts. Or at least they do now that the extent of the crisis is clear. The public and the media show them respect and even reverence. This is especially true of physicians and public health scientists, particularly epidemiologists and virologists. To a lesser extent, social scientists specializing in behavior and networks weigh in on how to organize life under partial or complete lockdown and how to make this lockdown effective. Economists are voicing ominous warnings on the magnitude of changes to come.
One tempting conclusion is that after decades of being dismissed or scrutinized for their various weaknesses, experts are back with a vengeance. Indeed, it is striking how much more trust politicians and the public are willing to place in epidemiology than, say, in climate science. It is possible that after COVID-19 is overcome, this halo effect will last and many experts will enjoy greater trust, not just the ones whose advice was particularly relevant during the pandemic.
But this prediction may be wishful thinking. The UK government’s abrupt U-turn from mitigation to suppression, while officially justified by scientific advice, is more likely a result of internal rebellion and external criticisms. The critics of mitigation have appealed to a medley of scientific, ethical, and political considerations against the pursuit of ‘herd immunity’. Some expressed astonishment that UK experts arrived at advice so different from that of most other countries, which overwhelmingly backed suppression. The radical uncertainty about how the epidemic will develop, the disagreements about how to ‘flatten the curve’ and contain further damage, as well as the now familiar bouts of fake news, misinformation and politicization of expertise, all undermine the optimistic ‘return of the experts’ narrative.
Even if the position of the experts after the pandemic will be stronger, this is not a reason to forget how complex and hard-won epistemic authority is. Public health scientists who are now devising strategies to contain the pandemic rely on models with inevitably speculative assumptions. Furthermore, in order to make inferences from these models, they have to make judgments about the appropriate levels of harm to the public, the acceptable numbers of dead, the tolerable restrictions on freedom, the likely behavior of masses under lockdown, and so on. These judgments are uncertain and controversial, and disagreements between different experts are often intractable. So even if experts are back, their return should not herald their rule.
Professor Elizabeth Anderson of the University of Michigan is a moral philosopher known for her work on expertise and the politics of knowledge. Her writings are a must for anyone who cares about how to define expertise, whether expertise can and should be challenged by laypeople, and what is the proper place of experts in a democracy. Her ideas are as relevant as ever and we recommend two papers in particular. These are classic Anderson papers many of us know and love: they start with a theoretical claim and then illustrate it with historical and contemporary examples of expertise in action.
Anderson, E. (2011). Democracy, Public Policy, and Lay Assessments of Scientific Testimony. Episteme, 8(2), 144-164. doi:10.3366/epi.2011.0013
In this paper, Anderson observes that responsible public policy in a technological society must rely on complex scientific reasoning, which ordinary citizens cannot directly assess. But this inability should not call into question the democratic legitimacy of technologically driven policy. Citizens need not be able to judge whether experts are making justified claims; rather, they need to be able to make what she calls reliable second-order assessments of the consensus of trustworthy scientific experts. Her case study is anthropogenic global warming, and she argues that judging the trustworthiness of climate experts is straightforward ‘for anyone of ordinary education with access to the Web’.
Anderson, E. (2006). The Epistemology of Democracy. Episteme, 3(1-2), 8-22. doi:10.3366/epi.2006.3.1-2.8
This is a paper on how institutions make knowledge, both theoretically and in practice. Theoretically, Anderson reconstructs democracy as an epistemic engine that generates knowledge through deliberation and voting, arguing that democracy’s success in this task is due to the experimental nature of its institutions, just as John Dewey taught. Her case study is based on Bina Agarwal’s account of community forestry in India and Nepal, where the initial exclusion of women resulted in a failure to solve the problem of firewood and fodder shortages.
We recommend reading these papers alongside Anderson’s answers to our questions below. We asked her about these topics and she sent us her answers before the pandemic struck. Her reflections are on expertise and democracy in general, not on how the debate has played out in recent weeks:
1. Considering your research and/or work in practice, what makes a good expert?
EA: Expertise in any field must join technical knowledge in the field with certain virtues: (i) honesty in communicating findings in the field, including uncertainties about these findings and the most likely alternative possibilities; (ii) conscientiousness in communicating the “whole” truth, in the sense of not omitting findings that are normatively relevant to policymaking, although they may be inconvenient to one or more political views; (iii) avoidance of dogmatism – i.e., a willingness to revise conclusions in light of new evidence; (iv) taking the public seriously: listening to their concerns, which may include distrust of experts, and taking action to earn their trust, rather than dismissing them out of hand or treating them as stupid, even when their concerns are based on misinformation.
2. What are the pressures experts face in your field?
EA: As a moral and political philosopher, I am reluctant to claim that there are specifically moral experts, in the sense of people who convey technical conclusions to the public by way of testimony – that is, where we are asking the public to take our word for it in virtue of our being experts, because the considerations for these conclusions are too technical for the public to assess. Philosophers don’t convey findings to the public by way of testimony. We offer ideas, arguments, and perspectives to the public, which they can evaluate for themselves.
3. Have you observed any significant changes occurring in recent times in the way experts operate?
EA: We now live in a climate of distrust in expertise, of disinformation spread by social media, irresponsible politicized news, and authoritarian regimes, and of propaganda and toxic discourse that has displaced evidence-based, constructive, democratic policymaking with ideas designed to spread distrust and division among citizens. Some of the distrust in scientific expertise arises from experts themselves, who have failed to take responsibility for bad predictions. Some experts have also been corrupted by moneyed interests. Experts need to repair their broken relations with the public. But it’s not all on them. Conflict entrepreneurs – including populist politicians and media – deliberately spread lies and unfounded doubts about experts, to create a climate in which they can operate with impunity while in power, without taking responsibility for the consequences. Spreading doubt about climate change allows fossil fuel interests to wreck the conditions for a sustainable planet. Spreading doubt about economics may allow plutocrats to drive the UK over a no-deal Brexit cliff. Spreading doubt about the safety of vaccines spreads preventable disease while enriching quack doctors.
4. Do you envision any changes in the role of experts in the future?
EA: Experts can no longer rely on their technical knowledge alone if they are to play a constructive role in policymaking. They need to find constructive ways to relate to the public, to engage the public with their findings in ways that both earn their trust and empower the public to distinguish between real experts and those who disseminate lies, propaganda, and toxic discourse. This will require a reinvigoration of democratic practices in conjunction with science. In the U.S., an exemplary case of what I have in mind is the citizen science undertaken in Flint, Michigan, which exposed the presence of lead in the water and the consequent lead poisoning of children. In this case, experts – doctors and environmental scientists – empowered citizens to collect data from their own water lines and reason together about the meanings of their findings and what to do about them. This is democracy in action, empowered by experts in ways that reinforce trust in expertise and democracy alike.
Reading these now, it is hard not to draw connections to the story of expertise during the pandemic. Anderson’s conception of a responsible expert – as transparent about value judgments, respectful of the concerns of the public, and properly undogmatic – is a compelling standard against which to evaluate the experts driving the response to the epidemic. But this standard is also tricky to articulate and to apply in the present context. What would it mean for institutions of public health to produce knowledge that is properly representative and practical? Is there a place for citizen science of infectious diseases, or does the urgency and danger of a virus like COVID-19 call for a less distributed, more centralised, and frankly more authoritarian model than Dewey’s? A proper defence of participatory science needs to show that it is not a luxury that can be put aside during crisis, but rather a necessity. This is far from obvious. What could a citizen science of COVID-19 be? And how can such science command trust in an age of misinformation?
In an email on March 26th Anderson added that citizen science on COVID-19 is already happening.
We also wonder how Anderson’s view that the trustworthiness of experts is a second-order question (recall she argues that the public need not know the science to trust the experts) helps us to understand the trustworthiness of epidemiologists in the time of COVID-19. How do they marshal as much trust as they do (at least once they do), and what accounts for the contrast with, say, climate scientists? Is epidemiologists’ knowledge or character somehow superior to that of so many other distrusted experts? Or is there something about the clear and present urgency of a pandemic, and the vividly obvious threats to one’s own life, that makes an expert on it trustworthy? (We mean to say trustworthy, rather than trusted, because, as the precautionary principle recommends, when the risk of tragedy is high, it is appropriate to act on less evidence than otherwise.) If so, the proper response to climate scepticism is not better science or better experts as such, but a better representation of urgency and crisis.
There is much more to say and in the coming weeks the Expertise Under Pressure team will be publishing our own and invited commentary on the role of experts during this pandemic. But the writings of classics such as Elizabeth Anderson are an obligatory passage point.
The project manager Marcus Tomalin welcomed attendees to the event before Mevan Babakar, head of automated fact checking at Full Fact, gave an insightful talk about human-based fact-checking. She discussed the various ways in which information can be used and abused, and she explained Full Fact’s fact-checking processes. It was particularly fascinating to hear about their work during the recent general election.
James Thorne, a PhD student at the Department of Computer Science and Technology, talked about fact extraction and verification, and how approaches from Natural Language Processing can help. He also discussed the Fact Extraction and VERification (FEVER) shared-task (http://fever.ai/).
Jonty Page, a current 4th-year engineering student, gave an overview of an open source fact-checking system the participants could develop during the Hackathon, and he highlighted some potential challenges and topics they could explore. Given a claim to be fact-checked, the baseline system (i) retrieves Wikipedia pages relevant to the claim, (ii) selects particular sentences from those pages which relate to the claim, and (iii) classifies those sentences either as supporting or refuting the original claim, or else as providing too little information to either support or refute it.
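To give a flavour of how the three stages fit together, here is a minimal sketch of such a pipeline in Python. It is not the Hackathon’s baseline system: the corpus is a toy in-memory stand-in for Wikipedia, retrieval and sentence selection use simple word overlap, and the classifier is a naive placeholder for a trained model; all names and thresholds are illustrative.

```python
def tokenize(text):
    """Lowercase a string and return its set of words, ignoring full stops."""
    return set(text.lower().replace(".", "").split())

def retrieve_pages(claim, corpus, k=2):
    """Stage (i): rank pages by word overlap with the claim, keep the top k."""
    scored = sorted(corpus.items(),
                    key=lambda kv: -len(tokenize(claim) & tokenize(kv[1])))
    return [title for title, _ in scored[:k]]

def select_sentences(claim, corpus, pages):
    """Stage (ii): keep sentences sharing at least two words with the claim."""
    selected = []
    for title in pages:
        for sent in corpus[title].split("."):
            if len(tokenize(claim) & tokenize(sent)) >= 2:
                selected.append(sent.strip())
    return selected

def classify(claim, evidence):
    """Stage (iii): naive verdict based on negation mismatch; a real system
    would use a trained natural language inference model here."""
    if not evidence:
        return "NOT ENOUGH INFO"
    claim_negated = "not" in tokenize(claim)
    for sent in evidence:
        if ("not" in tokenize(sent)) != claim_negated:
            return "REFUTES"
    return "SUPPORTS"

# Toy stand-in for Wikipedia.
corpus = {
    "Cambridge": "Cambridge is a city in England. It has a famous university.",
    "Paris": "Paris is the capital of France. The Eiffel Tower is there.",
}

claim = "Cambridge is a city in England"
pages = retrieve_pages(claim, corpus)
evidence = select_sentences(claim, corpus, pages)
verdict = classify(claim, evidence)
```

Even at this toy scale, the pipeline exposes the challenges the participants explored: retrieval can surface irrelevant pages, sentence selection can miss evidence split across documents, and a shallow classifier cannot handle comparisons, quotations, or temporal claims.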
Creating an Interdisciplinary Environment
The task of dealing with false claims automatically is necessarily an interdisciplinary one. The Hackathon created a collaborative environment for researchers from a variety of backgrounds, bringing together people with expertise in areas including linguistics, psychology, sociology, education, criminology, mathematics, philosophy, critical thinking, natural language processing, computer science, and software engineering. On the second day of the Hackathon, Dr Shauna Concannon ran some introductory sessions on Python for participants who wanted to learn more about coding, and especially about using Python to analyse natural language.
“This is my first hackathon and I’ve really enjoyed its interdisciplinary nature, it’s really welcoming, it’s really engaging, it’s open to newcomers.”
Ideas & Projects
The teams worked on different aspects of the fact-checking task, including: developing new methods for retrieving relevant sentences and documents by integrating information contained in hyperlinks; identifying claims that required multiple pieces of evidence in order to be correctly classified; identifying problematic linguistic patterns (such as claims that required comparisons or which included temporal assertions or quotations); and developing new methods for evaluating conflicting evidence using a confidence scoring metric.
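The last of these ideas – weighing conflicting evidence with a confidence score – can be sketched in miniature. The scheme below is a hypothetical illustration, not the metric any team actually built: each evidence item carries a (label, confidence) pair, the confidences are summed per label, and the system abstains when the totals are too close to call.

```python
def aggregate(evidence, margin=0.2):
    """Sum confidence per label; abstain when the totals are too close."""
    totals = {"SUPPORTS": 0.0, "REFUTES": 0.0}
    for label, confidence in evidence:
        totals[label] += confidence
    best, runner_up = sorted(totals.values(), reverse=True)
    if best - runner_up < margin:
        return "NOT ENOUGH INFO"
    return max(totals, key=totals.get)

# Three evidence items, two supporting and one refuting, with confidences.
verdict = aggregate([("SUPPORTS", 0.9), ("REFUTES", 0.4), ("SUPPORTS", 0.3)])
```

The abstention margin makes the design choice explicit: when evidence genuinely conflicts, it is safer for an automated fact-checker to report insufficient information than to force a verdict.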
“I came to the fact checking hackathon because I think it is a very important problem to work on. I learnt that automated fact checking is a very hard task that involves a number of different components.”
The interdisciplinary interest that this event generated confirms the urgent need for inclusive and collaborative events that bridge the divide between technology, the humanities, and the social sciences.
“It was a great opportunity to come together with people from different backgrounds, people who are doing mathematics, engineering, computer science, linguistics, criminology.”
In November 2019, Fodé Beaudet of the Canadian Foreign Service Institute at Global Affairs Canada visited the UK as part of a project to better understand how we can design, facilitate and evaluate our work to support behavioural change at the individual, group or system level. He met with Anna Alexandrova and Hannah Baker at the University of Cambridge to discuss overlaps between his own work and the Expertise Under Pressure project. Consequently, he kindly agreed to answer, as a blog post, the questions that we put forward during our ‘Expert Bite’ discussions, and we hope that there will be further collaboration in the future.
Fodé Beaudet is a Senior Learning Advisor at the Centre for Intercultural Learning with Global Affairs Canada (GAC). He has extensive experience in designing and facilitating multi-stakeholder initiatives around the world – themes include train-the-trainer platforms to facilitate change, Whole of Government Approaches to strategic collaboration, navigating through Complex Adaptive Systems and strengthening the intercultural effectiveness of professionals working overseas. Clients include international NGOs, global networks and institutions, foreign governments, the defence sector, research institutions and partner agencies affiliated with GAC. He currently serves on the Board of Directors of the Institute for Performance and Learning (I4PL).
Adult learning approaches and intercultural effectiveness
For the purpose of this blog, I will focus mostly on our adult learning approach to strengthening intercultural effectiveness competencies. Established in 1969, the Centre for Intercultural Learning (CIL) is Canada’s largest provider of intercultural and international training services for internationally assigned government and private-sector personnel. One of the CIL’s significant research products was the development of a competency-based model for intercultural effectiveness, the profile of the interculturally effective person (Vulpe, Kealey, Protheroe, & Macdonald, 2001). This model, delivered with an experiential learning approach (Kolb, 1984), has proven successful in helping to prepare individuals for short- or long-term missions. According to Kolb, “learning is the process whereby knowledge is created through the transformation of experience” (1984, p. 38). The experiential learning model, adapted from Kolb, takes learners through a series of cycles, as indicated below.
The expert-knowledge content is often integrated at the ‘Generalize’ stage of the ERGA learning cycle. At this point, the expert has a good overview of the knowledge in the room, and she or he can best distill valuable insights to complement what is already known. This may involve validating current knowledge, nuancing some points of view, or challenging what was said. The ‘application’ step invites learners to discuss, in groups or individually, how to apply what they have learned, integrating their peers’ knowledge as well as the expert’s contribution. Thus, the cycles of learning loops look more like a spiral than a circle.
1. Considering your research and/or work in practice, what makes a good expert?
Understanding the andragogic approach to learning: comfort with emergence. We distinguish good intercultural experts by their ability to acknowledge and recognize how the content of their contribution can reinforce the knowledge and experience in the room. An expert’s content does not always have to be elaborated at length, because learners may have reached similar insights. A good expert will challenge, validate, nuance or enrich content. As such, comfort with emergence means a predisposition for active listening and for demonstrating agility based on what is said in the moment. For an expert facilitator, comfort with emergence and an understanding of the andragogic approach to learning also apply. For instance, our train-the-trainer approach involves very little content from the facilitator. A facilitator will rarely speak for more than five minutes. In a train-the-trainer format, learners assess, design, facilitate and evaluate their work collaboratively, in real time. Reflective practice encourages learners to deepen their self-awareness about the experience. I recall when I was first introduced to this work, co-facilitating a train-the-trainer workshop: my first reaction, when learners were struggling with a task and requesting an example, was to provide one. And then I got into murky waters: how would you proceed next? My gifted co-facilitator at the time took me aside and reminded me: let them struggle. Let them figure it out. This is important. In this particular format and context, an expert facilitator has to hold and suspend their ideas or creativity. It’s about setting the container for learners to shine. The less an expert facilitator says, and is seen, the better. Then learners become the experts, taking ownership of and responsibility for finding solutions instead of asking for them.
I also recall a learner at the start of a train-the-trainer workshop asking, regarding expectations: “How can I deal with difficult people?” At the end of the workshop, the learner shared this, which I paraphrase: “I think it is me who can be difficult sometimes.” In other words, we are not isolated from the system we wish to intervene in. We are part of any system. And as long as we project “fixing” to be solely about others, we may miss an important blind spot. This means making discomfort and silence your friend. During another train-the-trainer workshop in the Middle East, a man shared a powerful testimony. He pointed to a woman in the group and said for all to hear:
“Before coming here, I didn’t think women could lead. Not only were you a leader, working with two men, but I have two daughters and I hope someday they will be like you.”
This is the transformative potential of participatory processes, when learners take an active part in co-creating with others. Hierarchy breaks down. Self-organization is encouraged. And mental models are challenged. Because the expertise is across the room.
Inquisitive mind. A good expert asks skilled questions to better understand the learner profile. In contrast, an expert with a set presentation, seeking to repurpose what she or he has already delivered elsewhere, may not be a good fit.
Humility. Here is an anecdote to make the point. At the beginning of an intercultural course about an Asian country, the intercultural expert said: “If someone tells you they know the country, they don’t know the country.” He described his experience at length, but only to convey how he fell short of truly understanding the culture and how much he still had to learn. A learner, who was born and raised in that particular country, said to the expert for everyone to hear: “You understand the country very well.” This humility, in some circles, may be counter-intuitive. However, when engaging with what it means to be effective across cultures, humility promotes curiosity and deepens one’s inquiry into complex webs and layers that evolve and transform over time. In contrast, assertiveness and certainty about culture in general terms can do a disservice to learners, reinforcing assumptions and inadvertently promoting ill-conceived predictive behaviours. Here is one example of how the model can be adapted. Years ago, I was collaborating with a client hosting a Japanese delegation. One of their goals was to come up with a partnership agreement. They sought our services to learn about Japanese culture. We had less than a day. I worked with an expert facilitator and an intercultural expert with knowledge of Japanese culture to design an approach. Given the client’s end goal, we devised a plan where the client was given an opportunity to reflect on how their current approach related to Japanese culture. In the morning, they learned key features of the culture. And then, in the afternoon, we asked the client to describe how they intended to host their counterparts, dividing the task into three categories: activities before the delegation arrives, during the visit, and after. They then presented their findings, strategies and questions back to the intercultural expert. Details surfaced, like the value of greeting the delegation at the airport, reviewing lunchtime allocation, and setting up social activities rather than just work-related ones. But the most insightful learning for the client was the need to reframe their goal: given the decision-making process they had learned about, it was unrealistic to expect a partnership agreement so soon. Therefore, the emphasis on hosting was about nurturing and building relationships. I give a lot of credit to the client, who, instead of forcing their approach, reframed the goal itself. And the intercultural expert had the wisdom to let learners surface their assumptions, complementing their strategies with her own insight.
2. What are the pressures experts face in your field?
No single organization or individual can understand or convey the full extent of what needs to be learned. Hence the importance of acknowledging the boundaries of expert knowledge while being confident about what one can contribute. Problems are not solved with the help of one expert, but through collaboration among diverse kinds of expertise. I’ll weave in some examples of our work and research in addressing complex problems in a multi-stakeholder context. Here are a few pressures experts face:
Attribution: how do you communicate the value of your expertise when the result is part of a larger ecosystem? How important is it for the expert to be visible, versus promoting the visibility of others?
Tensions between asking good questions vs offering answers. Inevitably, someone will want to know: can you tell me exactly what I need to say? What I need to do? How will someone respond if I do X? What if I do Y? “It depends” doesn’t answer the question and can be frustrating. And yet, “it depends” can also be closer to the truth than coming up with a reassurance in the moment that can be deceptive later. Probing further questions about what is being asked may lead to better answers. It’s also tricky: aren’t you supposed to be an expert? Aren’t you supposed to know?
This is where experts and clients may collude: in the search for an easy solution, in the effort to prove that something has been done, the lure of pursuing the wrong problem can provide an illusory solace, leading both experts and clients to ‘tick the box’ indicating that ‘actions’ were taken.
Contractual agreement. Building on the previous points, experts feel pressure to operate within the contractual agreement they are accountable to. Yet, when facing complex problems, the emergence of unexpected scenarios may require reframing what was understood to be the problem. If the contract has little flexibility, a new understanding of the problem cannot be accommodated, and experts are torn between the reality of what they see and the murkier expectations of what they should deliver.
Hence, to make the most of a good expert, there is also a need for a good procurement process. Faulty procurement, with little flexibility, can turn the best experts into the worst – all with good intentions. For example, a common reflex in search of an easy solution is training. Yet, in many instances, training is not the answer. Here’s an example of how a mental model shifted as a result of incorporating generative dialogues in our train-the-trainer workshops. Generative dialogues put emphasis on reaching a shared understanding of a problem or an inquiry, reflecting on the collective wisdom, and proposing actions to move forward. David Bohm (2013), among others, has written extensively about the value of dialogue. A key feature of generative dialogue is that neither the problem/inquiry nor the solutions are predetermined. Everyone’s voice is valued. The physical environment is key: conversations in small groups or in a large circle, as opposed to a lecture-style format. During one of the train-the-trainer workshops in the Horn of Africa, a learner approached me after experimenting with this approach, saying: “You mean, we don’t need training? We can have dialogue?” In some instances, yes. Training has its place too. Lecture-style formats have their place too. It’s a matter of finding the appropriate response to the problem at hand. Generative dialogue requires dealing with discomfort, with not knowing what the answer will be. In other words, generative dialogue is useful when dealing with complex problems. And perhaps paradoxically, while it’s an ancient practice well known among First Nations communities, it may also be part of a practice for approaching the future.
3. Have you observed any significant changes occurring in recent times in the way experts operate?
The rise of social labs is perhaps an indication of an emerging ‘expertise’: the art of convening space for experts to co-create. One of many examples would be the Art of Hosting. A change I see is the blend of expert content and the expertise required to hold the container. I remember working with a client system that insisted on pushing the boundaries of what is possible. They were seeking to create new theories of change for education in Africa. In this context, all participants had a voice, whether former ministers of education, deans, vice-deans, farmers, CEOs or students. In hindsight, this reflected the complementary role of expertise. It’s alluring for an expert to define what should be done. But in some instances, a community, a collective, or a multi-stakeholder representation is needed to name, envision and frame a compelling and shared understanding of a future horizon. Building from a shared vision and understanding, the role of experts is clearer. But if we name or assume a particular expert model too soon, and especially prior to a shared understanding, the risk is falling into the trap of following a fashionable trend that serves a model rather than the actual problem or appreciative inquiry.
Finally, I also think that Donald Schön, in his thought-provoking 1983 book The Reflective Practitioner, offered insights still relevant today: the limits of purely technical knowledge in addressing complex problems and, as a result, the need for a more reflective professional stance, whereby framing the problem is of the utmost importance.
4. Do you envision any changes in the role of experts in the future?
It’s unclear to me whether the future will look like the continuation of a trend or a pattern, or whether there will be significant upheaval or a sudden bifurcation. One factor that may influence the role of experts is our relationship with evidence and facts. The other is the independence with which experts operate: to what extent is the ecosystem in which experts evolve vulnerable to collusion, whether unconscious or implicit? In the latter case, experts may serve to prove a point of view rather than enrich points of view.
In the context of intercultural effectiveness, I foresee a continued expansion beyond the classroom and the virtual environment to include learning journeys and more action-learning-oriented approaches. I could see strengthening the model of experiential learning to include mindfulness, where sensemaking is not limited to what one thinks or feels, but extends to an inquisitiveness about the body’s way of learning. This would call for more immediate attention to presence, in order to witness and discern how we are influenced by past experience, which affects how we project into the future. However, embracing a more direct perception with less filter is an uncomfortable place to be and to hold, but one rich with clarity.
Overall, I see a growing emphasis on two streams that have been more or less separated: technical and process-oriented expertise. The latter is about holding the container: sensemaking, facilitation, convening and hosting skills. The former delves deeper into system dynamics and social network analysis, where culture is understood as systems, transcending borders. Here’s an example of what I’ve learned from experts in these fields through work projects: when we integrate the lens of systems and how nodes interact with each other, we see common features among patterns. Systems are self-reinforcing. Systems have a purpose, which is to survive. The purpose of a system is what it does, as the cybernetician Stafford Beer once said. This has implications for living in a turbulent environment where survival instincts are heightened: if the purpose of a system is what it does, then perhaps change happens by understanding patterns and purpose beyond perceived shared values. I found the experience of collaborating with experts in social network analysis, while inviting stakeholders into a sensemaking analysis, revealing.
I wouldn’t be surprised, in the future, to witness more and more of these conversations between technical expertise and process-oriented approaches. Perhaps a process without good content is as useless as good content without a proper process to make sense of it.
Bohm, D. (2013). On Dialogue. Abingdon, Oxon: Routledge.
Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. New Jersey: Prentice Hall.
Schein, E. H. (1999). Process Consultation Revisited: Building the Helping Relationship. New York: Addison-Wesley.
Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
Vulpe, T., Kealey, D., Protheroe, D., & Macdonald, D. (2001). A Profile of the Interculturally Effective Person. Centre for Intercultural Learning, Canadian Foreign Service Institute.
When Does Explaining Become Explaining Away? Compassion, Justification and Exculpation in Social Research
Federico Brandmayr
Organised by Federico Brandmayr and Anna Alexandrova
“Does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive this as a price, as something that we should be worried about, or as something that we should welcome?”
The Expertise Under Pressure project hosted its first workshop on 27 September 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation. The overarching goal of Expertise Under Pressure is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach, and what realistic demands communities should place on experts.
The talks and discussions of this first workshop focused specifically on a charge frequently levelled against experts who study human culture and social behaviour: that their explanations can provide justifications or excuses for ill-intentioned people, and that decision-makers acting on their advice might neglect to punish and react effectively to harmful behaviours.
A good way to capture the theme of the workshop is a saying attributed to Germaine de Staël: “tout comprendre, c’est tout pardonner” (“to understand all is to forgive all”). Social scientists perhaps do not intend to understand all that there is, but they generally embrace the idea of increasing our understanding of the social world. By and large, historians, sociologists, political scientists and anthropologists tend to show that the people they study do certain things not simply because they want to do them, but because they are driven by various kinds of factors. And the more knowledge we have of these factors, the more choice, responsibility and agency seem to fade away. This raises the question: does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive of this as a price, as something that we should be worried about, or as something that we should welcome? How should scientific disciplines, professional associations and individual researchers deal with this issue in their daily practice, and especially in their interventions in public debates and in policymaking contexts? These issues ultimately concern how social knowledge is produced and how it circulates outside academia, and notably how it is appropriated and misappropriated by different groups in the endless disputes that divide society, in which attributions of credit and blame are widespread.
The one-day event brought together researchers from various academic disciplines to examine the exculpatory potential of social research. Here is what they came up with.
Professor Livia Holden (University of Oxford) was the first speaker of the day. With a background in anthropology and socio-legal studies, Holden leads a European Research Council project titled Cultural Expertise in Europe: What is it useful for? The project looks at the role of anthropologists and other cultural experts in advising judges in court cases and policymakers in fields such as immigration law. In her talk, ‘Cultural Expertise and the Fear of Absolution’, she analysed the concept of cultural expertise and described the specific challenges cultural experts face, especially where anthropology enjoys little credit. Drawing on several examples, including her own experience as an expert witness in family law cases, she argued that experts oscillate between the fear of absolution, i.e. concerns of excusing harmful acts (such as genital mutilation) on the grounds that they are rooted in cultural traditions, and the fear of condemnation, i.e. concerns of being complicit with colonial rule and repressive criminal justice policies.
The following speaker was Hadrien Malier (École des hautes études en sciences sociales), a sociologist who studies policy measures aimed at nudging working-class people into adopting more ‘eco-friendly’ habits. His talk, ‘No (Sociological) Excuses for Not Going Green: Urban Poor Households and Climate Activism in France’, presented the results of an ethnography conducted in two low-income housing projects. The volunteers and activists that Malier followed in these neighbourhoods framed the protection of the environment as an individual and universally distributed moral obligation, independent of privilege, class and education. Climate activists, who are mostly middle-class and educated, recognise the social difference between themselves and the mostly poor people they try to nudge toward eco-friendly habits. But this difference is simply interpreted as proof that people with low income do not know or care enough about the environment. More relevant sociological insights on class differences, including well-supported claims that people with low income have a relatively light ecological footprint, are often seen as a bad excuse for acts that are detrimental to the environment.
Dr Nigel Pleasants (University of Exeter) gave the next talk. Pleasants is a philosopher of social science who has written extensively on how sociological and historical knowledge influences our moral judgements. In his recent publications, he focused on various controversies related to historical explanations of the Holocaust. His talk, ‘Social Scientific Explanation and the Fact-Value Distinction’, explored and clarified the relation between excuse and justification. Excuses concern the responsibility of an actor in performing a certain action, while justifications refer to the moral status of an action (i.e. whether it is right or wrong) regardless of the responsibility of the actor that performs it. Drawing on scholarship on the Holocaust, he argued that while explanatory accounts from the social sciences are highly relevant to determine whether a certain act can be excused, the same cannot be said for whether a certain act is justified or not.
The morning session ended with a talk by Professor Marco Santoro (Università di Bologna): ‘Whose Sides (of the Field) Could We Be On? Situatedness, Perspectivism, and Credibility in Social Research’. Santoro is a sociologist who has written on such diverse topics as the notarial profession, popular music, the international circulation of social scientific ideas and the Sicilian mafia. His starting point was a personal experience in which his interpretation of the mafia was harshly criticised by a colleague. In his writings on the topic, he had argued that the mafia can be interpreted as a form of political organisation, a non-state political institution enjoying a certain legitimacy and providing protection and services to its constituency, in a region where poverty runs high and that many see as having been left behind by the Italian state. Those scholars who instead saw the mafia as functioning like a company, simply providing services (e.g. protection from violence) in exchange for money, considered his arguments tantamount to a justification of organised crime. This episode inspired Santoro’s forceful defence of a multi-perspectival approach, according to which we should broaden the range of interpretations of a single phenomenon while being aware that these perspectives are not morally and politically neutral. Some might put us in dangerous territory, but it is only by seriously advancing them that we can clarify our very moral ideals.
Opening the afternoon session, Dr Federico Brandmayr (University of Cambridge) reconstructed the debate on ‘sociological excuses’ that took place in France after the country was struck by several deadly terrorist attacks in 2015 and 2016. In his talk, ‘The Political Epistemology of Explanation in Contemporary French Social Thought’, he showed that the very expression ‘sociological excuse’ has clear intellectual and political origins, rooted in US right-wing libertarianism, and argued that it is mainly used in France in relation to accounts of the urban lower class that emphasise poverty, unemployment and stigmatisation. Sociology as a discipline was at the centre of much controversy after the 2015 terrorist attacks, and sociologists reacted in three main ways: some denied the allegations, others reappropriated the derogatory label of excuse by giving it a positive meaning, while still others accepted the criticism and called for a reform of sociology. Accordingly, Dr Brandmayr argued that French sociology should not be considered a monolithic block under attack from political sectors, but rather a heterogeneous complex of different epistemic communities.
In a similar historical vein, Professor Stephen Turner (University of South Florida) gave a talk titled ‘Explaining Away Crime: The Race Narrative in American Sociology’. A renowned historian and philosopher of social science, he reconstructed the history of how social scientists have dealt with the fact that crime rates for Blacks in the US have always been higher than for other ethnic groups. Generally speaking, social scientists wanted to avoid racist accounts of this gap (like those based on a form of genetic predisposition of black people to commit crimes), but they also showed dissatisfaction with accounts that explained the gap by simply pointing to social factors such as poverty and discrimination. This is because of certain theoretical inconsistencies (such as the fact that black crime mainly targets black people, while one would assume that discrimination should cause Blacks to act violently against Whites), but also because it was seen as an excuse pointing to a deficiency in the agent and implying a form of inferiority. Spanning more than a century, Turner’s historical reconstruction identified three basic strategies US social scientists adopted to overcome this dilemma and delineated their ethical implications.
Finally, Gabriel Abend (Universität Luzern) took a more philosophical approach in a talk titled ‘Decisions, “Decisions”, and Moral Evaluation’. His talk built on a theoretical framework that he has recently developed in several publications, and which provides the foundation for the study of decisionism, i.e. the fact that people use decision (or choice) concepts and define certain things as decisions. Decisionism has clear moral and practical implications, as people are generally held accountable and subject to moral judgment when their acts are interpreted as decisions. Abend provided a striking list of examples from scientific journals in which the concept of decision was used to describe such unrelated things as bees’ foraging activities, saccadic eye movements and plant flowering. While these instances of decisionism offer plenty of material for the empirical sociologist, he raised concerns about the risk of conceptual stretching and advocated a responsible conceptual practice.
The workshop was a truly interdisciplinary inquiry, in the spirit of CRASSH. All the interventions, whether their approach was philosophical, sociological, historical or legal, converged toward increasing our knowledge of the relationship between explaining and understanding on the one hand, and excusing and justifying on the other. Thanks to the lively and thorough responses given by an impressive battery of discussants (Dr Anna Alexandrova, Dr Jana Bacevic, Dr Cléo Chassonnery-Zaïgouche and Dr Stephen John), the talks were followed by fruitful exchanges. A special issue with the papers given at the workshop is in preparation and will be submitted soon to a prominent interdisciplinary journal.