

Disaster Response | Knowledge Domains and Information Flows

Eyjafjallajökull, Iceland.
Image by JohannHelgason/Shutterstock.com

DISASTER RESPONSE

Knowledge Domains and Information Flows

11 February 2020, 10:30 – 17:00

Cripps Court, Magdalene College, University of Cambridge, CB3 0AG

Convenors

Hannah Baker, Research Associate, CRASSH (University of Cambridge)

Robert Doubleday, Executive Director of the Centre for Science and Policy (CSaP), University of Cambridge

Emily So, Reader in the Department of Architecture (University of Cambridge)

Overview

Disaster management comprises several phases, including preparedness, mitigation, response and recovery. Critics argue that current disaster management practices are technocratic and call for the co-production of knowledge. This workshop therefore explores knowledge domains and flows of information in the context of disaster response. When responding to an earthquake, a volcanic eruption, a pandemic or another emergency, decisions need to be made both at governmental level and on the ground. Information has to be collated, understood and disseminated so that decisions can be made in these time-pressured environments, which are subject to considerable uncertainty.

The workshop addresses a range of questions in the context of disaster response: 

  • What type of knowledge is and should be used? 
  • What constitutes an expert? 
  • How is and should uncertainty be factored into decisions and communicated? 
  • What happens to, and should happen to, knowledge after it is produced and the event has taken place?

Speakers from different disciplinary backgrounds represent both academia and policy, emphasising the need to think holistically about these problems. The workshop includes focus groups to allow for in-depth discussions about the questions posed and to facilitate collaboration between participants. 

Speakers

Amy Donovan (University of Cambridge)

Robert Evans (Cardiff University)

Dorothea Hilhorst (Erasmus University Rotterdam)

Mausmi Juthani (Government Office for Science)

Benjamin Taylor (Evidence Aid)

Target audience

This is an interactive workshop, with the purpose of bringing together people from a range of disciplines and experiences. The target audience includes (but is not limited to) people working in/researching expertise, organisational theory, knowledge production and dissemination, and disaster management. 

All participants are expected to take part in the focus groups. Multiple perspectives and levels of experience are encouraged, and facilitators will be on hand to manage discussions.

Further information

The workshop is followed by the Centre for Science and Policy’s (CSaP’s) annual lecture, which participants may also find of interest. This will be delivered by Professor Dame Sally Davies, Master of Trinity College, Cambridge, and former Chief Medical Officer for England and Chief Medical Advisor to the UK Government. The lecture will take place in St John’s College at 17:30. Anyone interested in attending should register with CSaP.

This workshop forms part of the Expertise Under Pressure (EUP) project, funded by the Humanities and Social Change International Foundation. The EUP project’s overarching goal is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach and what realistic demands communities should place on experts.

Queries: Contact Una Yeung


When Does Explaining Become Explaining Away? Compassion, Justification and Exculpation in Social Research

FIRST WORKSHOP – 27 September 2019

Organised by Federico Brandmayr and Anna Alexandrova

“Does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive this as a price, as something that we should be worried about, or as something that we should welcome?”


The Last Day of a Condemned Man (1869) by Mihály Munkácsy (Wikimedia Commons).

The Expertise Under Pressure project hosted its first workshop on 27 September 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation. The overarching goal of Expertise Under Pressure is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach, and what realistic demands communities should place on experts.

The talks and discussions of this first workshop focused on a charge frequently levelled against experts who study human culture and social behaviour: that their explanations can provide justifications or excuses for ill-intentioned people, and that decision-makers acting on their advice might neglect to punish harmful behaviours or to react effectively to them.

A good way to capture the theme of the workshop is a saying attributed to Germaine de Staël: “tout comprendre, c’est tout pardonner” – “to understand all is to forgive all”. Social scientists perhaps do not intend to understand all that there is, but they generally like the idea of increasing our understanding of the social world. By and large, historians, sociologists, political scientists and anthropologists tend to show that the people they study do certain things not just because they want to do those things, but also because they are driven by various kinds of factors. And the more knowledge we have of these factors, the more choice, responsibility and agency seem to fade away. This raises the question: does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive of this as a price, as something that we should be worried about, or as something that we should welcome? And how should scientific disciplines, professional associations and individual researchers deal with this issue in their daily practice, and especially in their interventions in public debates and in policymaking contexts? These issues ultimately concern how social knowledge is produced and how it circulates outside academia – notably, how it is appropriated and misappropriated by different groups in the endless disputes that divide society, in which attributions of credit and blame are widespread.

The one-day event brought together researchers from various academic disciplines, looking at the exculpatory potential of social research. Here is what they came up with.

Livia Holden

Professor Livia Holden (University of Oxford) was the first speaker of the day. With a background in anthropology and socio-legal studies, Holden leads a European Research Council project titled Cultural Expertise in Europe: What is it useful for? The project looks at the role of anthropologists and other cultural experts in advising judges in court cases and policymakers in fields such as immigration law. In her talk, ‘Cultural Expertise and the Fear of Absolution’, she analysed the concept of cultural expertise and described the specific challenges cultural experts face, especially where anthropology enjoys little credit. Drawing on several examples, including her own experience as an expert witness in family law cases, she argued that experts oscillate between the fear of absolution, i.e. concerns of excusing harmful acts (such as genital mutilation) on the grounds that they are rooted in cultural traditions, and the fear of condemnation, i.e. concerns of being complicit with colonial rule and repressive criminal justice policies.

Jana Bacevic, Livia Holden, and Hadrien Malier

The following speaker was Hadrien Malier (École des hautes études en sciences sociales), a sociologist who studies policy measures aimed at nudging working-class people into adopting more ‘eco-friendly’ habits. His talk, ‘No (Sociological) Excuses for Not Going Green: Urban Poor Households and Climate Activism in France’, presented the results of an ethnography conducted in two low-income housing projects. The volunteers and activists that Malier followed in these neighbourhoods framed the protection of the environment as an individual and universally distributed moral obligation, independent of privilege, class and education. Climate activists, who are mostly middle-class and educated, recognise the social difference between themselves and the mostly poor people they try to nudge toward eco-friendly habits. But this difference is simply interpreted as proof that people with low income do not know or care enough about the environment. More relevant sociological insights on class differences, including well-supported claims that people with low income have a relatively light ecological footprint, are often seen as a bad excuse for acts that are detrimental to the environment.

Nigel Pleasants

Dr Nigel Pleasants (University of Exeter) gave the next talk. Pleasants is a philosopher of social science who has written extensively on how sociological and historical knowledge influences our moral judgements. In his recent publications, he focused on various controversies related to historical explanations of the Holocaust. His talk, ‘Social Scientific Explanation and the Fact-Value Distinction’, explored and clarified the relation between excuse and justification. Excuses concern the responsibility of an actor in performing a certain action, while justifications refer to the moral status of an action (i.e. whether it is right or wrong) regardless of the responsibility of the actor that performs it. Drawing on scholarship on the Holocaust, he argued that while explanatory accounts from the social sciences are highly relevant to determine whether a certain act can be excused, the same cannot be said for whether a certain act is justified or not.

Marco Santoro and Nigel Pleasants

The morning session ended with a talk by Professor Marco Santoro (Università di Bologna): ‘Whose Sides (of the Field) Could We Be On? Situatedness, Perspectivism, and Credibility in Social Research’. Santoro is a sociologist who has written on such diverse topics as the notarial profession, popular music, the international circulation of social scientific ideas and the Sicilian mafia. His starting point was a personal experience in which his interpretation of the mafia was harshly criticised by a colleague. In his writings on the topic, he had argued that the mafia can be interpreted as a form of political organisation, a non-state political institution enjoying a certain legitimacy and providing protection and services to its constituency, in a region where poverty runs high and that many see as having been left behind by the Italian state. Those scholars who instead saw the mafia as functioning like a company, simply providing services (e.g. protection from violence) in exchange for money, considered his arguments tantamount to a justification of organised crime. This episode inspired Santoro’s forceful defence of a multi-perspectival approach, according to which we should broaden the range of interpretations of a single phenomenon while being aware that these perspectives are not morally and politically neutral. Some might put us in dangerous territory, but it is only by seriously advancing them that we can clarify our very moral ideals.

Federico Brandmayr

Opening the afternoon session, Dr Federico Brandmayr (University of Cambridge) reconstructed the debate on ‘sociological excuses’ that took place in France after the country was struck by several deadly terrorist attacks in 2015 and 2016. In his talk, ‘The Political Epistemology of Explanation in Contemporary French Social Thought’, he showed that the very expression ‘sociological excuse’ has clear intellectual and political origins, rooted in US right-wing libertarianism, and argued that it is mainly used in France in relation to accounts of the urban lower class that emphasise poverty, unemployment and stigmatisation. Sociology as a discipline was at the centre of much controversy after the 2015 terrorist attacks, and sociologists reacted in three main ways: some denied the allegations, others reappropriated the derogatory label of excuse by giving it a positive meaning, and still others accepted the criticism and called for a reform of the discipline. Accordingly, Dr Brandmayr argued that French sociology should not be considered a monolithic block under attack from political sectors, but rather a heterogeneous complex of different epistemic communities.

Stephen Turner, Federico Brandmayr, and Stephen John

In a similar historical vein, Professor Stephen Turner (University of South Florida) gave a talk titled ‘Explaining Away Crime: The Race Narrative in American Sociology’. A renowned historian and philosopher of social science, he reconstructed the history of how social scientists have dealt with the fact that crime rates for Blacks in the US have always been higher than for other ethnic groups. Generally speaking, social scientists wanted to avoid racist accounts of this gap (like those based on a form of genetic predisposition of black people to commit crimes), but they also showed dissatisfaction with accounts that explained the gap by simply pointing to social factors such as poverty and discrimination. This is because of certain theoretical inconsistencies (such as the fact that black crime mainly targets black people, while one would assume that discrimination should cause Blacks to act violently against Whites), but also because it was seen as an excuse pointing to a deficiency in the agent and implying a form of inferiority. Spanning more than a century, Turner’s historical reconstruction identified three basic strategies US social scientists adopted to overcome this dilemma and delineated their ethical implications.

Finally, Gabriel Abend (Universität Luzern) took a more philosophical approach in a talk titled ‘Decisions, “Decisions”, and Moral Evaluation’. His talk built on a theoretical framework that he has recently developed in several publications, and which provides the foundation for the study of decisionism, i.e. the fact that people use decision (or choice) concepts and define certain things as decisions. Decisionism has clear moral and practical implications, as people are generally held accountable and subject to moral judgment when their acts are interpreted as decisions. Abend provided a striking list of examples from scientific journals in which the concept of decision was used to describe such unrelated things as bees’ foraging activities, saccadic eye movements and plant flowering. While these instances of decisionism offer plenty of material for the empirical sociologist, he raised concerns about the risk of conceptual stretching and advocated a responsible conceptual practice.

The workshop was a truly interdisciplinary inquiry, in the spirit of CRASSH. All interventions, whether their approach was philosophical, sociological, historical or legal, converged toward increasing our knowledge of the relationship between explaining and understanding on the one hand, and excusing and justifying on the other. Thanks to the lively and thorough responses given by an impressive panel of discussants (Dr Anna Alexandrova, Dr Jana Bacevic, Dr Cléo Chassonnery-Zaïgouche and Dr Stephen John), the talks were followed by fruitful exchanges. A special issue collecting the papers given at the workshop is in preparation and will be submitted soon to a prominent interdisciplinary journal.

Text by Federico Brandmayr

Pictures taken by Judith Weik



Fact-Checking Hackathon


10 January 2020, 10:00 – 12 January 2020, 16:00

Room LR4, Baker Building, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ


Overview

Fake news, misinformation and disinformation are being created and circulated online with unprecedented speed and scale. There are concerns that this poses a serious threat to our modern digital societies by skewing public opinion about important issues and maliciously interfering with national election campaigns.

Fact-checking is an increasingly vital approach for tackling the rapid spread of false claims online. Specifically, there is an urgent need for automated systems that detect, extract and classify incorrect information in real time, and linguistic analyses of argument structure, entailment, stance marking and evidentiality can assist the development of such systems.

We want to bring together people with different kinds of expertise to develop new approaches for tackling the problems posed by fake news, misinformation and disinformation. Taking an existing automated fact-checking system as a baseline, the main hackathon task will be to find ways of improving its performance. The experimental framework will be that used for the FEVER: Fact Extraction and VERification challenge (http://fever.ai). 
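For readers unfamiliar with the FEVER setup, the sketch below illustrates the shape of the task: given a claim and some retrieved evidence sentences, a system assigns one of the three FEVER labels (SUPPORTS, REFUTES or NOT ENOUGH INFO). This is only a toy illustration under stated assumptions: the `verify_claim` function, the overlap threshold and the negation list are invented for this sketch, and the actual baseline system used at the hackathon relies on trained retrieval and natural-language-inference models rather than keyword heuristics.

```python
# Toy FEVER-style claim verifier (illustrative only, NOT the hackathon baseline).
# Labels a claim against candidate evidence sentences using crude word overlap.

NEGATIONS = {"not", "no", "never", "n't"}

def verify_claim(claim: str, evidence_sentences: list[str],
                 threshold: float = 0.5) -> str:
    """Return a FEVER-style label for `claim` given candidate evidence."""
    claim_words = set(claim.lower().split())
    for sentence in evidence_sentences:
        words = set(sentence.lower().split())
        # Fraction of the claim's words that appear in this evidence sentence.
        overlap = len(claim_words & words) / max(len(claim_words), 1)
        if overlap >= threshold:
            # Crude heuristic: overlapping evidence containing a negation word
            # is treated as contradicting the claim.
            return "REFUTES" if words & NEGATIONS else "SUPPORTS"
    return "NOT ENOUGH INFO"

print(verify_claim("Cambridge is a city in England",
                   ["Cambridge is a city in the east of England."]))
```

Improving on such simplistic decision rules, for example with better evidence retrieval or entailment modelling, is exactly the kind of gain the hackathon task measures.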


Is it for me?

The task of dealing with false claims online is necessarily an interdisciplinary task. Therefore, this hackathon will create a collaborative environment for participants from a variety of backgrounds to come together to work in teams. Whether you already have strong coding skills, a specific interest in disciplines such as information engineering or natural language processing, a familiarity with linguistic theory, or even an interest in the philosophy of language, you will certainly be able to make valuable contributions during the hackathon!

In particular, we encourage undergraduates and postgraduates:

  • in Engineering / Computer Science, with good programming skills (esp. Python) 
  • in Linguistics / Philosophy / Psychology / Sociology
  • with an interest in language-based AI technologies 


Do I need to be able to code?

There will be a variety of ways to get involved and contribute during the hackathon, so coding experience is not essential. For instance, participants with a background in linguistics can analyse the linguistic data in detail, and then work together with coders so that their insights can improve the baseline system.

For those participants who would like to learn more about coding, there will be introductory sessions on Python during the hackathon – so this will be a good opportunity to dip your toe in the water!


Why should I attend?

  • A chance to collaborate in interdisciplinary teams to address a language-based technology problem that has huge contemporary importance.
  • An opportunity to learn about the challenges of developing an automated fact-checking system, and benefit from advice and insights from fact-checking experts.
  • A chance to learn Python, if you are new to coding.


Further details

The event runs from Friday to Sunday and attendees are expected to participate throughout.

Lunch will be provided on all three days, and there will be coffee and snacks throughout the hackathon, to keep you going!

If you have any questions about the event or would like to discuss any specific requirements, please contact Shauna Concannon.

Image by igorstevanovic/Shutterstock.com


The Future of Artificial Intelligence: Language, Society, Technology

This workshop, the third in a series on the future of artificial intelligence, will focus on the impact of artificial intelligence on society, specifically on language-based technologies at the intersection of AI and ICT (henceforth ‘Artificially Intelligent Communications Technologies’ or ‘AICT’) – namely, speech technology, natural language processing, smart telecommunications and social media. The social impact of these technologies is already becoming apparent. Intelligent conversational agents such as Siri (Apple), Cortana (Microsoft) and Alexa (Amazon) are already widely used, and, in the next 5 to 10 years, a new generation of Virtual Personal Assistants (VPAs) will emerge that will increasingly influence all aspects of our lives, from relatively mundane tasks (e.g. turning the heating on and off) to highly significant activities (e.g. influencing how we vote in national elections). Crucially, our interactions with these devices will be predominantly language-based.

Despite this, the specific linguistic, ethical, psychological, sociological, legal and technical challenges posed by AICT (specifically) have rarely received focused attention. Consequently, the workshop will examine various aspects of the social impact of AICT-based systems in modern digital democracies, from both practical and theoretical perspectives. By doing so, it will provide an important opportunity to consider how existing AICT infrastructures can be reconfigured to enable the resulting technologies to benefit the communities that use them.

Speakers

Maria Luciana Axente (PricewaterhouseCoopers)

Shauna Concannon (University of Cambridge)

Sarah Connolly (UK Department for Digital, Culture, Media & Sport)

Ella McPherson (University of Cambridge)

Trisha Meyer (Free University of Brussels – VUB)

Jonnie Penn (University of Cambridge)

The workshop is organised by Giving Voice to Digital Democracies, a research project that is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.

Giving Voice to Digital Democracies explores the social impact of Artificially Intelligent Communications Technology – that is, AI systems that use speech recognition, speech synthesis, dialogue modelling, machine translation, natural language processing and/or smart telecommunications as interfaces. Due to recent advances in machine learning, these technologies are already rapidly transforming our modern digital democracies. While they can certainly have a positive impact on society (e.g. by promoting free speech and political engagement), they also offer opportunities for distortion and deception. Unbalanced data sets can reinforce problematical social biases; automated Twitter bots can drastically increase the spread of malinformation and hate speech online; and the responses of automated Virtual Personal Assistants during conversations about sensitive topics (e.g. suicidal tendencies, religion, sexual identity) can have serious consequences.

Responding to these increasingly urgent concerns, this project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology and political theory to develop design objectives for the creation of AICT systems that are more ethical, trustworthy and transparent. These technologies will have the potential to affect more positively the kinds of social change that will shape modern digital democracies in the immediate future.

Please register for the workshop here.

Queries: Una Yeung (uy202@cam.ac.uk)

Image by Metamorworks/Shutterstock.com


The Future of Artificial Intelligence: Language, Gender, Technology

A report of the event as well as videos of the talks can be found here.

The workshop will consider the social impact of Artificially Intelligent Communications Technology (AICT). Specifically, the talks and discussions will focus on different aspects of the complex relationships between language, gender, and technology. These issues are of particular relevance in an age when Virtual Personal Assistants such as Siri, Cortana, and Alexa present themselves as submissive females, when most language-based technologies manifest glaring gender biases, when 78% of the experts developing AI systems are male, when sexist hate speech online is a widely recognised problem, and when many Western cultures and societies are increasingly recognising the significance of non-binary gender identities.

Speakers

Professor Alison Adam, Sheffield Hallam University

Dr Heather Burnett, CNRS-Université Paris Diderot

Dr Dirk Hovy, Bocconi University

Dr Dong Nguyen, Alan Turing Institute and University of Utrecht

Dr Ruth Page, University of Birmingham

Dr Stefanie Ullmann, University of Cambridge

The workshop is organised by Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.


Please register for the workshop here.

Queries: Una Yeung (uy202@cam.ac.uk)

Image by metamorworks/Shutterstock.com


The Future of Artificial Intelligence: Language, Ethics, Technology

A report of the event as well as videos of the talks can be found here.

This is the inaugural workshop of Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.

The workshop will bring together experts from politics, industry, and academia to consider the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions will focus on different aspects of the complex relationships between language, ethics, and technology. These issues are of particular relevance in an age when we talk to Virtual Personal Assistants such as Siri, Cortana, and Alexa ever more frequently, when the automated detection of offensive language is bringing free speech and censorship into direct conflict, and when there are serious ethical concerns about the social biases present in the training data used to build influential AICT systems.

Speakers

Professor Emily Bender, University of Washington
Baroness Grender MBE, House of Lords Select Committee on AI
Dr Margaret Mitchell, Google
Dr Melanie Smallman, University College London, Alan Turing Institute
Dr Marcus Tomalin, University of Cambridge
Dr Adrian Weller, University of Cambridge, Alan Turing Institute, The Centre for Data Ethics and Innovation


Please register for the workshop here.

Queries: Una Yeung (uy202@cam.ac.uk)

Image by vs148/Shutterstock.com