
Reading Elizabeth Anderson in the time of COVID-19

By Federico Brandmayr

The pandemic is a good time to reflect on expertise (if you have the luxury).

During this particular emergency, governments appear to pay heed to experts, or at least they do now that the extent of the crisis is clear. The public and the media show them respect and even reverence. This is especially true of physicians and public health scientists, particularly epidemiologists and virologists. To a lesser extent, social scientists specializing in behavior and networks weigh in on how to organize life under partial or complete lockdown and how to make lockdown effective. Economists are voicing ominous warnings about the magnitude of changes to come.

Japan’s Prime Minister Shinzo Abe at the First Novel Coronavirus Expert Meeting, 16 February 2020

One tempting conclusion is that after decades of being dismissed or scrutinized for their various weaknesses, experts are back with a vengeance. Indeed, it is striking how much more trust politicians and the public are willing to place in epidemiology than, say, in climate science. It is possible that after COVID-19 is overcome, this halo effect will last and many experts will enjoy greater trust, not just the ones whose advice was particularly relevant during the pandemic.

But this prediction may be wishful thinking. The UK government’s abrupt U-turn from mitigation to suppression, while officially justified by scientific advice, is more likely a result of internal rebellion and external criticism. Critics of mitigation appealed to a medley of scientific, ethical, and political considerations against the pursuit of ‘herd immunity’. Some expressed astonishment that UK experts arrived at advice so different from that of most other countries, which overwhelmingly backed suppression. The radical uncertainty about how the epidemic will develop, the disagreements about how to ‘flatten the curve’ and contain further damage, and the now familiar bouts of fake news, misinformation, and politicization of expertise all undermine the optimistic ‘return of the experts’ narrative.

Even if experts emerge from the pandemic in a stronger position, that is no reason to forget how complex and hard-won epistemic authority is. The public health scientists now devising strategies to contain the pandemic rely on models with inevitably speculative assumptions. Furthermore, to draw inferences from these models they must make judgments about the appropriate levels of harm to the public, the acceptable numbers of dead, the tolerable restrictions on freedom, the likely behavior of masses under lockdown, and so on. These judgments are uncertain and controversial, and disagreements between experts are often intractable. So even if experts are back, their return should not herald their rule.

Elizabeth Anderson

Professor Elizabeth Anderson of the University of Michigan is a moral philosopher known for her work on expertise and the politics of knowledge. Her writings are a must for anyone who cares about how to define expertise, whether expertise can and should be challenged by laypeople, and what is the proper place of experts in a democracy. Her ideas are as relevant as ever and we recommend two papers in particular. These are classic Anderson papers many of us know and love: they start with a theoretical claim and then illustrate it with historical and contemporary examples of expertise in action.

Anderson, E. (2011). Democracy, Public Policy, and Lay Assessments of Scientific Testimony. Episteme, 8(2), 144-164. doi:10.3366/epi.2011.0013

In this paper, Anderson observes that responsible public policy in a technological society must rely on complex scientific reasoning that ordinary citizens cannot directly assess. But this inability should not call into question the democratic legitimacy of technologically driven policy. Citizens need not be able to judge whether experts are making justified claims; rather, they need to be able to make what she calls reliable second-order assessments of the consensus of trustworthy scientific experts. Her case study is anthropogenic global warming, and she argues that judging the trustworthiness of climate experts is straightforward ‘for anyone of ordinary education with access to the Web’.

Anderson, E. (2006). The Epistemology of Democracy. Episteme, 3(1-2), 8-22. doi:10.3366/epi.2006.3.1-2.8

This is a paper on how institutions make knowledge, both in theory and in practice. Theoretically, Anderson reconstructs democracy as an epistemic engine that produces knowledge through deliberation and voting, arguing, following John Dewey, that democracy’s success in this task is due to the experimental nature of its institutions. Her case study draws on Bina Agarwal’s account of community forestry in India and Nepal, whose initial exclusion of women resulted in a failure to solve the problem of firewood and fodder shortages.

We recommend reading these papers alongside Anderson’s answers to our questions below. She sent us her answers before the pandemic struck, so her reflections concern expertise and democracy in general, not how they have played out in recent weeks:

1. Considering your research and/or work in practice, what makes a good expert?

  • EA: Expertise in any field must join technical knowledge in the field with certain virtues: (i) honesty in communicating findings in the field, including uncertainties about these findings and the most likely alternative possibilities; (ii) conscientiousness in communicating the “whole” truth, in the sense of not omitting findings that are normatively relevant to policymaking, although they may be inconvenient to one or more political views; (iii) avoidance of dogmatism – i.e., a willingness to revise conclusions in light of new evidence; (iv) taking the public seriously: listening to their concerns, which may include distrust of experts, and taking action to earn their trust, rather than dismissing them out of hand or treating them as stupid, even when their concerns are based on misinformation.

2. What are the pressures experts face in your field?

  • EA: As a moral and political philosopher, I am reluctant to claim that there are specifically moral experts, in the sense of people who convey technical conclusions to the public by way of testimony – that is, where we are asking the public to take our word for it in virtue of our being experts, because the considerations for these conclusions are too technical for the public to assess.  Philosophers don’t convey findings to the public by way of testimony.  We offer ideas, arguments, and perspectives to the public, which they can evaluate for themselves.

3. Have you observed any significant changes occurring in recent times in the way experts operate?

  • EA: We now live in a climate of distrust in expertise, of disinformation spread by social media, irresponsible politicized news, and authoritarian regimes, and of propaganda and toxic discourse that has displaced evidence-based, constructive, democratic policymaking with ideas designed to spread distrust and division among citizens.  Some of the distrust in scientific expertise arises from experts themselves, who have failed to take responsibility for bad predictions.  Some experts have also been corrupted by moneyed interests.  Experts need to repair their broken relations with the public.  But it’s not all on them.  Conflict entrepreneurs – including populist politicians and media – deliberately spread lies and unfounded doubts about experts, to create a climate in which they can operate with impunity while in power, without taking responsibility for the consequences.  Spreading doubt about climate change allows fossil fuel interests to wreck the conditions for a sustainable planet. Spreading doubt about economics may allow plutocrats to drive the UK over a no-deal Brexit cliff. Spreading doubt about the safety of vaccines spreads preventable disease while enriching quack doctors.

4. Do you envision any changes in the role of experts in the future?

  • EA: Experts can no longer rely on their technical knowledge alone in order to play a constructive role in policymaking. They need to find constructive ways to relate to the public, to engage the public with their findings in ways that both earn their trust and empower the public to distinguish between real experts and those who disseminate lies, propaganda, and toxic discourse. This will require a reinvigoration of democratic practices in conjunction with science. In the U.S., an exemplary case of what I have in mind is the citizen science undertaken in Flint, Michigan, which exposed the presence of lead in the water and the consequent lead poisoning of children. In this case, experts (doctors and environmental scientists) empowered citizens to collect data from their own water lines and to reason together about the meanings of their findings and what to do about them. This is democracy in action, empowered by experts in ways that reinforce trust in expertise and democracy alike.

Reading these now, it is hard not to draw connections to the story of expertise during the pandemic. Anderson’s conception of a responsible expert, as transparent about value judgments, respectful of the public’s concerns, and properly undogmatic, is a compelling standard against which to evaluate the experts driving the response to the epidemic. But this standard is also tricky to articulate and to apply in the present context. What would it mean for institutions of public health to produce knowledge that is properly representative and practical? Is there a place for citizen science of infectious diseases, or do the urgency and danger of a virus like COVID-19 call for a less distributed, more centralised, and frankly more authoritarian model than Dewey’s? A proper defence of participatory science needs to show that it is not a luxury to be put aside during a crisis, but a necessity. This is far from obvious. What could a citizen science of COVID-19 be? And how can such science command trust in an age of misinformation?

In an email on March 26th, Anderson added that citizen science on COVID-19 is already happening.

We also wonder how Anderson’s view that the trustworthiness of experts is a second-order question (recall that she argues the public need not know the science to trust the experts) helps us understand the trustworthiness of epidemiologists in the time of COVID-19. How do they marshal as much trust as they do, at least once they do, and what accounts for the contrast with, say, climate scientists? Is epidemiologists’ knowledge or character somehow superior to that of so many other distrusted experts? Or is there something about the clear and present urgency of a pandemic, and the vividly obvious threats to one’s own life, that makes an expert on it trustworthy? (We say trustworthy rather than trusted because, as the precautionary principle recommends, when the risk of tragedy is high it is appropriate to act on less evidence than otherwise.) If so, the proper response to climate scepticism is not better science or better experts as such, but a better representation of urgency and crisis.

There is much more to say and in the coming weeks the Expertise Under Pressure team will be publishing our own and invited commentary on the role of experts during this pandemic. But the writings of classics such as Elizabeth Anderson are an obligatory passage point.

The Expertise Under Pressure team

Group picture of the participants from the fact-checking hackathon
Fact-checking Hackathon

By Shauna Concannon

The Giving Voice to Digital Democracies project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation. We started 2020 with a Fact-checking Hackathon on 10-12 January, held at the Cambridge University Engineering Department.

The project manager Marcus Tomalin welcomed attendees to the event before Mevan Babkar, head of automated fact checking at FullFact, gave an insightful talk about human-based fact-checking. She discussed the various ways in which information can be used and abused, and she explained FullFact’s fact-checking processes. It was particularly fascinating to hear about their work during the recent general election. 

James Thorne, a PhD student at the Department of Computer Science and Technology, talked about fact extraction and verification, and how approaches from Natural Language Processing can help. He also discussed the Fact Extraction and VERification (FEVER) shared-task (http://fever.ai/).   

Jonty Page, a current 4th-year engineering student, gave an overview of an open source fact-checking system the participants could develop during the Hackathon, and he highlighted some potential challenges and topics they could explore. Given a claim to be fact-checked, the baseline system (i) retrieves Wikipedia pages relevant to the claim, (ii) selects particular sentences from those pages which relate to the claim, and (iii) classifies those sentences either as supporting or refuting the original claim, or else as providing too little information to either support or refute it. 
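As a rough illustration of those three stages, here is a toy sketch in Python. Everything in it is an invented stand-in: the tiny in-memory ‘wiki’, the word-overlap scoring, and the crude classification rule. The actual open source baseline used at the Hackathon works over real Wikipedia with TF-IDF retrieval and a trained sentence classifier, so this is only meant to make the pipeline’s shape concrete.

```python
# Toy three-stage fact-checking pipeline: (i) retrieve pages, (ii) select
# sentences, (iii) classify the claim against the selected evidence.
# All data and scoring rules are illustrative stand-ins, not the real baseline.

WIKI = {
    "Paris": ["Paris is the capital of France.",
              "Paris hosted the 1900 Summer Olympics."],
    "Berlin": ["Berlin is the capital of Germany."],
}

def tokens(text):
    """Lower-cased word set, with trailing punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def retrieve_pages(claim, wiki, k=1):
    """(i) Rank pages by word overlap between the claim and the page text."""
    ranked = sorted(wiki,
                    key=lambda p: len(tokens(claim) & tokens(" ".join(wiki[p]))),
                    reverse=True)
    return ranked[:k]

def select_sentences(claim, pages, wiki, k=1):
    """(ii) Pick the k sentences from the retrieved pages most similar to the claim."""
    sents = [s for p in pages for s in wiki[p]]
    return sorted(sents, key=lambda s: len(tokens(claim) & tokens(s)), reverse=True)[:k]

def classify(claim, evidence):
    """(iii) Label the claim given the evidence, using a naive overlap rule."""
    c = tokens(claim)
    e = tokens(" ".join(evidence))
    overlap = len(c & e) / max(len(c), 1)
    if not evidence or overlap < 0.5:
        return "NOT ENOUGH INFO"
    # Naive heuristic: mismatched negation counts as refutation.
    if ("not" in c) != ("not" in e):
        return "REFUTES"
    return "SUPPORTS"

claim = "Paris is the capital of France."
pages = retrieve_pages(claim, WIKI)
evidence = select_sentences(claim, pages, WIKI)
label = classify(claim, evidence)
```

Running the pipeline on the claim above retrieves the “Paris” page, selects its first sentence as evidence, and labels the claim SUPPORTS; an unrelated claim such as “Mars is a planet.” falls below the overlap threshold and comes back NOT ENOUGH INFO.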

Creating an Interdisciplinary Environment 

Dealing with false claims automatically is necessarily an interdisciplinary task, and the Hackathon created a collaborative environment for researchers from a variety of backgrounds. The weekend brought together people with expertise in areas including linguistics, psychology, sociology, education, criminology, mathematics, philosophy, critical thinking, natural language processing, computer science, and software engineering. On the second day of the Hackathon, Dr Shauna Concannon ran introductory sessions on Python for participants who wanted to learn more about coding, and especially about using Python to analyse natural language.

“This is my first hackathon and I’ve really enjoyed its interdisciplinary nature, it’s really welcoming, it’s really engaging, it’s open to newcomers.” 

Ideas & Projects 

The teams worked on different aspects of the fact-checking task, including: developing new methods for retrieving relevant sentences and documents by integrating information contained in hyperlinks; identifying claims that require multiple pieces of evidence to be correctly classified; identifying problematic linguistic patterns (such as claims involving comparisons, temporal assertions, or quotations); and developing new methods for evaluating conflicting evidence using a confidence-scoring metric.
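The last of these ideas, weighing conflicting evidence by confidence, can be sketched minimally. The labels, weights, and threshold below are invented for illustration and are not the metric any team actually built; the sketch only shows the general shape of a confidence-weighted vote over evidence.

```python
# Hypothetical confidence-weighted vote over conflicting evidence.
# Each piece of evidence carries a label and a confidence in [0, 1].

def aggregate(evidence):
    """evidence: list of (label, confidence) pairs,
    label in {"SUPPORTS", "REFUTES"}. Returns an overall verdict."""
    # Supporting evidence pushes the score up, refuting evidence down.
    score = sum(c if label == "SUPPORTS" else -c for label, c in evidence)
    if abs(score) < 0.2:  # signals too weak or too evenly balanced
        return "NOT ENOUGH INFO"
    return "SUPPORTS" if score > 0 else "REFUTES"
```

For example, one strong supporting sentence and one weak refuting sentence yield SUPPORTS, while two nearly balanced sentences yield NOT ENOUGH INFO.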

“I came to the fact checking hackathon because I think it is a very important problem to work on. I learnt that automated fact checking is a very hard task that involves a number of different components.”  

The interdisciplinary interest that this event generated confirms the urgent need for inclusive and collaborative events that bridge the divide between technology, the humanities, and the social sciences.  

“It was a great opportunity to come together with people from different backgrounds, people who are doing mathematics, engineering, computer science, linguistics, criminology.” 

 

Expertise, Adult Learning & Intercultural Effectiveness

By Hannah Baker

In November 2019, Fodé Beaudet of the Canadian Foreign Service Institute at Global Affairs Canada visited the UK as part of a project to better understand how we can design, facilitate, and evaluate our work to support behavioural change at the individual, group, or system level. He met Anna Alexandrova and Hannah Baker at the University of Cambridge to discuss overlaps between his own work and the Expertise Under Pressure project. He kindly agreed to answer, as a blog post, the questions we put forward during our ‘Expert Bite’ discussions, and we hope there will be further collaboration in the future.


Fodé Beaudet

Senior Learning Advisor, Centre for Intercultural Learning, Canadian Foreign Service Institute (CFSI) at Global Affairs Canada

Biography

Fodé Beaudet is a Senior Learning Advisor at the Centre for Intercultural Learning with Global Affairs Canada (GAC). He has extensive experience in designing and facilitating multi-stakeholder initiatives around the world – themes include train-the-trainer platforms to facilitate change, Whole of Government Approaches to strategic collaboration, navigating through Complex Adaptive Systems and strengthening the intercultural effectiveness of professionals working overseas. Clients include international NGOs, global networks and institutions, foreign governments, the defence sector, research institutions and partner agencies affiliated with GAC. He currently serves on the Board of Directors of the Institute for Performance and Learning (I4PL). 

Adult learning approaches and intercultural effectiveness

For the purpose of this blog, I will focus mostly on our adult learning approach to strengthening intercultural effectiveness competencies. Established in 1969, the Centre for Intercultural Learning (CIL) is Canada’s largest provider of intercultural and international training services for internationally assigned government and private-sector personnel. One of the CIL’s significant research products was a competency-based model of intercultural effectiveness, the Profile of the Interculturally Effective Person (Vulpe, Kealey, Protheroe, & Macdonald, 2001). This model, delivered with an experiential learning approach (Kolb, 1984), has proven successful in preparing individuals for short- or long-term missions. According to Kolb, “learning is the process whereby knowledge is created through the transformation of experience” (1984, p. 38). The experiential learning model, adapted from Kolb, takes learners through a series of cycles, as indicated below.

The expert-knowledge content is often integrated at the ‘Generalize’ stage of the ERGA learning cycle. At this point the expert has a good overview of the knowledge in the room and can best distill valuable insights to complement what is already known. This may involve validating current knowledge, nuancing some points of view, or challenging what was said. The ‘application’ step invites learners to discuss, in groups or individually, how to apply what they have learned, integrating their peers’ knowledge as well as the expert’s contribution. Thus the learning loops look more like a spiral than a circle.

1. Considering your research and/or work in practice, what makes a good expert?

Understanding the andragogic approach to learning: comfort with emergence. We distinguish good intercultural experts by their ability to acknowledge and recognize how the content of their contribution can reinforce the knowledge and experience in the room. An expert’s content does not always have to be elaborated at length, because learners may have reached similar insights. A good expert will challenge, validate, nuance, or enrich content. As such, comfort with emergence means a predisposition for active listening and for demonstrating agility based on what is said in the moment. For an expert facilitator, comfort with emergence and an understanding of the andragogic approach to learning also apply. For instance, our train-the-trainer approach involves very little content from the facilitator, who will rarely speak for more than five minutes. In a train-the-trainer format, learners assess, design, facilitate, and evaluate their work collaboratively, in real time. Reflective practice encourages learners to deepen their self-awareness about the experience. I recall when I was first introduced to this work, co-facilitating a train-the-trainer: my first reaction, when learners were struggling with a task and requesting an example, was to provide one. And then I got into murky water: how would you proceed next? My gifted co-facilitator took me aside and reminded me: let them struggle. Let them figure it out. This is important. In this particular format and context, an expert facilitator has to hold and suspend their ideas or creativity. It’s about setting the container for learners to shine. The less an expert facilitator says, or is seen, the better. Then learners become the experts, taking ownership of and responsibility for finding solutions instead of asking for them.

I also recall a learner asking, at the start of a train-the-trainer workshop on expectations: “How can I deal with difficult people?” At the end of the workshop, the learner shared this, which I paraphrase: “I think it is me who can be difficult sometimes.” In other words, we are not isolated from the system in which we wish to intervene. We are part of any system. And as long as we project “fixing” to be solely about others, we may miss an important blind spot. This means making discomfort and silence your friend. During another train-the-trainer in the Middle East, a man shared a powerful testimony. He pointed to a woman in the group and said, for all to hear:

“Before coming here, I didn’t think women could lead. Not only were you a leader, working with two men, but I have two daughters and I hope someday they will be like you.”

This is the transformative potential of participatory processes when learners take an active part in co-creating with others. Hierarchy breaks down. Self-organization is encouraged. And mental models are challenged. Because the expertise is across the room.

Inquisitive mind. A good expert asks skilled questions to better understand the learner profile. In contrast, an expert with a set presentation, seeking to repurpose what she or he has already repeated, may not be a good fit.

Humility. Here is an anecdote to make the point. At the beginning of an intercultural course about an Asian country, the intercultural expert said: “If someone tells you they know the country, they don’t know the country.” He described his experience at length, but only to convey how he fell short of truly understanding the culture and how much he still had to learn. A learner who was born and raised in that particular country said to the expert, for everyone to hear: “You understand the country very well.” This humility, in some circles, may be counter-intuitive. However, when engaging with what it means to be effective across cultures, humility promotes curiosity and deepens one’s inquiry into complex webs and layers that evolve and transform over time. In contrast, assertiveness and certainty about culture in general terms can do a disservice to learners, reinforcing assumptions and inadvertently promoting ill-conceived predictive behaviours. Here is one example of how the model can be adapted. Years ago, I was collaborating with a client hosting a Japanese delegation. One of their goals was to come up with a partnership agreement. They sought our services to learn about Japanese culture. We had less than a day. I worked with an expert facilitator and an intercultural expert with knowledge of Japanese culture to design an approach. Given the client’s end goal, we devised a plan in which the client was given an opportunity to reflect on how their current approach related to Japanese culture. In the morning, they learned key features of the culture. In the afternoon, we asked the client to describe how they intended to host their counterparts, dividing the task into three categories: activities before the delegation arrives, during the visit, and after. They then presented their findings, strategies, and questions back to the intercultural expert. Details surfaced, like the value of greeting the delegation at the airport, reviewing lunchtime allocation, and setting up social activities, not just work-related ones. But the most insightful learning for the client was the need to reframe their goal: given the decision-making process they had learned about, it was unrealistic to expect a partnership agreement so soon. The emphasis of hosting was therefore on nurturing and building relationships. I give a lot of credit to the client, who, instead of forcing their approach, reframed the goal itself. And the intercultural expert had the wisdom to let learners surface their assumptions, complementing their strategies with her own insight.

2. What are the pressures experts face in your field?

No one organization or individual can understand or convey the full extent of what needs to be learned. Hence the importance of acknowledging the boundaries of expert knowledge while being confident in what one can contribute. Problems are not solved with the help of one expert, but through collaboration among diverse expertise. I’ll weave in some examples of our work and research in addressing complex problems in a multi-stakeholder context. Here are a few pressures experts face:

Attribution: how do you communicate the value of your expertise when the result is part of a larger ecosystem? How important is it for the expert to be visible, versus promoting the visibility of others?

Tensions between asking good questions vs offering answers. Inevitably, someone will want to know: can you tell me exactly what I need to say? What I need to do? How will someone respond if I do X? What if I do Y? “It depends” doesn’t answer the question and can be frustrating. And yet “it depends” can also be closer to the truth than a reassurance offered in the moment that proves deceptive later. Probing further into what is being asked may lead to better answers. It’s also tricky: aren’t you supposed to be an expert? Aren’t you supposed to know?

This is where experts and clients may collude: in the search of an easy solution, in the effort to prove that something is done, the lure of pursuing the wrong problem can provide an illusory solace, and therefore lead both experts and clients to ‘tick the box’, indicating that ‘actions’ were made. 

Contractual agreement. Building on the previous points, experts feel pressure to operate within the contractual agreement they are accountable to. Yet when facing complex problems, the emergence of unexpected scenarios may require reframing what was understood to be the problem. If the contract has little flexibility, a new understanding of the problem cannot be accommodated, and experts are torn between the reality of what they see and the murkier expectations of what they should deliver.

Hence, to make the most of a good expert, there is also a need for a good procurement process. Faulty procurement, with little flexibility, can turn the best experts into the worst, all with good intentions. For example, a common reflex when seeking an easy solution is training. Yet in many instances training is not the answer. Here’s an example of how a mental model shifted as a result of incorporating generative dialogues into our train-the-trainer workshops. Generative dialogues put emphasis on reaching a shared understanding of a problem or an inquiry, reflecting on the collective wisdom, and proposing actions to move forward. David Bohm (2013), among others, has written extensively about the value of dialogue. A key feature of generative dialogue is that neither the problem or inquiry nor the solutions are predetermined. Everyone’s voice is valued. The physical environment is key: conversations in small groups or in a large circle, as opposed to a lecture-style format. During one train-the-trainer workshop in the Horn of Africa, a learner approached me after experimenting with this approach, saying: “You mean, we don’t need training? We can have dialogue?” In some instances, yes. Training has its place too, and so do lecture-style formats. It’s a matter of finding the appropriate response to the problem at hand. Generative dialogue requires dealing with discomfort, with not knowing what the answer will be. In other words, generative dialogue is useful when dealing with complex problems. And perhaps paradoxically, while it is an ancient practice well known among First Nations communities, it may also be part of a practice for approaching the future.

3. Have you observed any significant changes occurring in recent times in the way experts operate?

The concept of the wisdom of the crowd has certainly grown. In the context of how experts operate, I would say there is a greater openness and inquisitiveness about how their part fits into a broader picture. Several experts I have observed exemplify Ed Schein’s (1999) stance of the process consultant, whereby an expert helps clients identify the problem properly. I also noticed this stance when experts work in the field of systems thinking. This may reflect experts gaining knowledge not only of the content they can provide, but of the process they can support.

The rise of social labs is perhaps an indication of an emerging ‘expertise’: the art of convening space for experts to co-create. One of many examples would be the Art of Hosting. A change I see is the blend of expert content and the expert skilled at holding the container. I remember working with a client system that insisted on pushing the boundaries of what is possible. They were seeking to create new theories of change for education in Africa. In this context, all participants had a voice, whether former ministers of education, deans, vice-deans, farmers, CEOs, or students. In hindsight, this reflected the complementary role of expertise. It is alluring for an expert to define what should be done. But in some instances a community, a collective, or a multi-stakeholder representation is needed to name, envision, and frame a compelling and shared understanding of a future horizon. Building from a shared vision and understanding, the role of experts is clearer. But if we name or assume a particular expert model too soon, and especially prior to a shared understanding, the risk is falling into the trap of a fashionable trend that serves a model rather than the actual problem or appreciative inquiry.

Finally, I also think that Donald Schön, in his thought-provoking 1983 book The Reflective Practitioner, offered insights still relevant today: the limits of purely technical knowledge in addressing complex problems, and, as a result, a more reflective professional stance in which framing the problem is of the utmost importance.

4. Do you envision any changes in the role of experts in the future?

It’s unclear to me whether the future will look like the continuation of a trend or a pattern, or whether there will be significant upheaval or sudden bifurcation. One factor that may influence the role of experts is our relationship with evidence and facts. Another is the independence with which experts operate: to what extent is the ecosystem in which experts evolve vulnerable to collusion, unconscious or implicit? In the latter case, experts may serve to prove a point of view rather than enrich points of view.

In the context of intercultural effectiveness, I foresee a continued expansion beyond the classroom and the virtual environment to include learning journeys and more action-learning-oriented approaches. I could see the model of experiential learning being strengthened to include mindfulness, where sensemaking is not limited to what one thinks or feels, but extends to inquisitiveness about the body’s way of learning. This would call for more immediate attention to presence, in order to witness and discern how we are influenced by past experience, which affects how we project into the future. Embracing a more direct perception, with fewer filters, is an uncomfortable place to be and to hold, but one rich with clarity.

Overall, I see a growing emphasis on two streams that have been more or less separate: technical and process-oriented expertise. The latter is about holding the container: sensemaking, facilitation, convening and hosting skills. The former delves deeper into system dynamics and social network analysis, where culture is understood as systems transcending borders. Here is an example of what I have learned from experts in these fields through work projects: when we integrate the lens of systems and how nodes interact with each other, we see common features among patterns. Systems are self-reinforcing, and they have a purpose, which is to survive. ‘The purpose of a system is what it does,’ as the cybernetician Stafford Beer once said. This has implications for living in a turbulent environment where survival instincts are heightened: if the purpose of a system is what it does, then perhaps change happens by understanding patterns and purpose beyond perceived shared values. I found the experience of collaborating with experts in social network analysis while inviting stakeholders into a sensemaking analysis revealing.

I wouldn’t be surprised to witness, in the future, more and more of these conversations between technical expertise and process-oriented approaches. Perhaps a process without good content is as useless as good content without a proper process to make sense of it.


References

Bohm, D. (2013). On Dialogue. Abingdon, Oxon: Routledge.

Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. New Jersey: Prentice Hall.

Schein, E. H. (1999). Process Consultation Revisited: Building the Helping Relationship. New York: Addison-Wesley.

Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.

Vulpe, T., Kealey, D., Protheroe, D., & Macdonald, D. (2001). A Profile of the Interculturally Effective Person. Centre for Intercultural Learning, Canadian Foreign Service Institute.


When Does Explaining Become Explaining Away? Compassion, Justification and Exculpation in Social Research

FIRST WORKSHOP – 27 September 2019

Organised by Federico Brandmayr and Anna Alexandrova

“Does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive this as a price, as something that we should be worried about, or as something that we should welcome?”


The Last Day of a Condemned Man (1869) by Mihály Munkácsy (Wikimedia Commons).

The Expertise Under Pressure project hosted its first workshop on 27 September 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation. The overarching goal of Expertise Under Pressure is to establish a broad framework for understanding what makes expertise authoritative, when experts overreach, and what realistic demands communities should place on experts.

The talks and discussions of this first workshop focused specifically on a charge frequently levelled against experts who study human culture and social behaviour: that their explanations can provide justifications or excuses for ill-intentioned people, and that decision-makers acting on their advice might neglect to punish and react effectively to harmful behaviours.

A good way to capture the theme of the workshop is a saying attributed to Germaine de Staël: “tout comprendre, c’est tout pardonner”, “to understand all is to forgive all”. Social scientists perhaps do not intend to understand all that there is, but they generally like the idea of increasing our understanding of the social world. By and large, historians, sociologists, political scientists and anthropologists tend to show that the people they study do certain things not just because they want to do those things, but also because they are driven by various kinds of factors. And the more knowledge we have of these factors, the more choice, responsibility and agency seem to fade away. This raises the question: does understanding come at the price of undermining our capacity to judge, blame and punish? And should we conceive this as a price, as something that we should be worried about, or as something that we should welcome? And how should scientific disciplines, professional associations and individual researchers deal with this issue in their daily practice, especially in their interventions in public debates and in policymaking contexts? These issues essentially relate to how social knowledge is produced and how it circulates outside academia, and notably how it is appropriated and misappropriated by different groups in the endless disputes that divide society, in which attributions of credit and blame are widespread.

The one-day event brought together researchers from various academic disciplines, looking at the exculpatory potential of social research. Here is what they came up with.

Livia Holden

Professor Livia Holden (University of Oxford) was the first speaker of the day. With a background in anthropology and socio-legal studies, Holden leads a European Research Council project titled Cultural Expertise in Europe: What is it useful for? The project looks at the role of anthropologists and other cultural experts in advising judges in court cases and policymakers in fields such as immigration law. In her talk, ‘Cultural Expertise and the Fear of Absolution’, she analysed the concept of cultural expertise and described the specific challenges cultural experts face, especially where anthropology enjoys little credit. Drawing on several examples, including her own experience as an expert witness in family law cases, she argued that experts oscillate between the fear of absolution, i.e. concerns of excusing harmful acts (such as genital mutilation) on the grounds that they are rooted in cultural traditions, and the fear of condemnation, i.e. concerns of being complicit with colonial rule and repressive criminal justice policies.

Jana Bacevic, Livia Holden, and Hadrien Malier

The following speaker was Hadrien Malier (École des hautes études en sciences sociales), a sociologist who studies policy measures aimed at nudging working-class people into adopting more ‘eco-friendly’ habits. His talk, ‘No (Sociological) Excuses for Not Going Green: Urban Poor Households and Climate Activism in France’, presented the results of an ethnography conducted in two low-income housing projects. The volunteers and activists that Malier followed in these neighbourhoods framed the protection of the environment as an individual and universally distributed moral obligation, independent of privilege, class and education. Climate activists, who are mostly middle-class and educated, recognise the social difference between them and the mostly poor people they try to nudge toward eco-friendly habits. But this difference is simply interpreted as proof that people with low income do not know or care enough about the environment. More relevant sociological insights on class differences, including well-supported claims that people with low income have a relatively light ecological footprint, are often seen as a bad excuse for acts that are detrimental to the environment.

Nigel Pleasants

Dr Nigel Pleasants (University of Exeter) gave the next talk. Pleasants is a philosopher of social science who has written extensively on how sociological and historical knowledge influences our moral judgements. In his recent publications, he focused on various controversies related to historical explanations of the Holocaust. His talk, ‘Social Scientific Explanation and the Fact-Value Distinction’, explored and clarified the relation between excuse and justification. Excuses concern the responsibility of an actor in performing a certain action, while justifications refer to the moral status of an action (i.e. whether it is right or wrong) regardless of the responsibility of the actor that performs it. Drawing on scholarship on the Holocaust, he argued that while explanatory accounts from the social sciences are highly relevant to determine whether a certain act can be excused, the same cannot be said for whether a certain act is justified or not.

Marco Santoro and Nigel Pleasants

The morning session ended with a talk by Professor Marco Santoro (Università di Bologna): ‘Whose Sides (of the Field) Could We Be On? Situatedness, Perspectivism, and Credibility in Social Research’. Santoro is a sociologist who has written on such diverse topics as the notarial profession, popular music, the international circulation of social scientific ideas and the Sicilian mafia. His starting point was a personal experience in which his interpretation of the mafia was harshly criticised by a colleague. In his writings on the topic, he had argued that the mafia can be interpreted as a form of political organisation, a non-state political institution enjoying a certain legitimacy and providing protection and services to its constituency, in a region where poverty runs high and that many see as having been left behind by the Italian state. Those scholars who instead saw the mafia as functioning like a company, simply providing services (e.g. protection from violence) in exchange for money, considered his arguments tantamount to a justification of organised crime. This episode inspired Santoro’s forceful defence of a multi-perspectival approach, according to which we should broaden the range of interpretations of a single phenomenon while being aware that these perspectives are not morally and politically neutral. Some might put us in dangerous territory, but it is only by seriously advancing them that we can clarify our very moral ideals.

Federico Brandmayr

Opening the afternoon session, Dr Federico Brandmayr (University of Cambridge) reconstructed the debate on ‘sociological excuses’ that took place in France after the country was struck by several deadly terrorist attacks in 2015 and 2016. In his talk, ‘The Political Epistemology of Explanation in Contemporary French Social Thought’, he showed that the very expression of sociological excuse has clear intellectual and political origins, rooted in US right-wing libertarianism, and argued that it is mainly used in France in relation to accounts of the urban lower class that emphasise poverty, unemployment and stigmatisation. Sociology as a discipline was at the centre of much controversy after the 2015 terrorist attacks, and sociologists reacted in three main ways: some denied the allegations, others reappropriated the derogatory label of excuse by giving it a positive meaning, while others accepted criticism and called for a reformation of sociology. Accordingly, Dr Brandmayr argued that French sociology should not be considered as a monolithic block that experiences attacks from political sectors, but rather as a heterogeneous complex of different epistemic communities.

Stephen Turner, Federico Brandmayr, and Stephen John

In a similar historical vein, Professor Stephen Turner (University of South Florida) gave a talk titled ‘Explaining Away Crime: The Race Narrative in American Sociology’. A renowned historian and philosopher of social science, he reconstructed the history of how social scientists have dealt with the fact that crime rates for Blacks in the US have always been higher than for other ethnic groups. Generally speaking, social scientists wanted to avoid racist accounts of this gap (like those based on a form of genetic predisposition of black people to commit crimes), but they also showed dissatisfaction with accounts that explained the gap by simply pointing to social factors such as poverty and discrimination. This is because of certain theoretical inconsistencies (such as the fact that black crime mainly targets black people, while one would assume that discrimination should cause Blacks to act violently against Whites), but also because it was seen as an excuse pointing to a deficiency in the agent and implying a form of inferiority. Spanning more than a century, Turner’s historical reconstruction identified three basic strategies US social scientists adopted to overcome this dilemma and delineated their ethical implications.

Finally, Gabriel Abend (Universität Luzern) took a more philosophical approach in a talk titled ‘Decisions, “Decisions”, and Moral Evaluation’. His talk built on a theoretical framework that he has recently developed in several publications, and which provides the foundation for the study of decisionism, i.e. the fact that people use decision (or choice) concepts and define certain things as decisions. Decisionism has clear moral and practical implications, as people are generally held accountable and subject to moral judgment when their acts are interpreted as decisions. Abend provided a striking list of examples from scientific journals in which the concept of decision was used to describe such unrelated things as bees’ foraging activities, saccadic eye movements and plant flowering. While these instances of decisionism offer plenty of material for the empirical sociologist, he raised concerns about the risk of conceptual stretching and advocated a responsible conceptual practice.

The workshop was a truly interdisciplinary inquiry, in the spirit of CRASSH. All interventions, whether their approach was philosophical, sociological, historical or legal, converged toward increasing our knowledge of the relationship between explaining and understanding on the one hand, and excusing and justifying on the other. Thanks to the lively and thorough responses given by an impressive battery of discussants (Dr Anna Alexandrova, Dr Jana Bacevic, Dr Cléo Chassonnery-Zaïgouche and Dr Stephen John), the talks were followed by fruitful exchanges. A special issue with the papers given at the workshop is in preparation and will be submitted soon to a prominent interdisciplinary journal.

Text by Federico Brandmayr

Pictures taken by Judith Weik



Quarantining Online Hate Speech

Research Publication

“Quarantining Online Hate Speech”

Ethics and Information Technology

10 October 2019

Press Release by Cambridge University

https://www.cam.ac.uk/research/news/online-hate-speech-could-be-contained-like-a-computer-virus-say-researchers

17 December 2019

Artificial intelligence is being developed that will allow advisory ‘quarantining’ of hate speech in a manner akin to malware filters – offering users a way to control exposure to ‘hateful content’ without resorting to censorship.

We can empower those at the receiving end of the hate speech poisoning our online discourses

Marcus Tomalin

The spread of hate speech via social media could be tackled using the same ‘quarantine’ approach deployed to combat malicious software, according to University of Cambridge researchers.

Definitions of hate speech vary depending on nation, law and platform, and just blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example.

As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended “psychological harm” is inflicted, with armies of moderators required to judge every case.

This is the new front line of an ancient debate: freedom of speech versus poisonous language.

Now, an engineer and a linguist have published a proposal in the journal Ethics and Information Technology that harnesses cyber security techniques to give control to those targeted, without resorting to censorship.

Cambridge language and machine learning experts are using databases of threats and violent insults to build algorithms that can provide a score for the likelihood of an online message containing forms of hate speech.

As these algorithms get refined, potential hate speech could be identified and “quarantined”. Users would receive a warning alert with a “Hate O’Meter” – the hate speech severity score – the sender’s name, and an option to view the content or delete unseen.
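As a rough sketch of how such a quarantine might work (the function names, threshold logic and scores below are assumptions for illustration, not the researchers' actual implementation):

```python
# Hypothetical sketch of score-based quarantining; not the project's code.
from dataclasses import dataclass

@dataclass
class QuarantineAlert:
    sender: str
    hate_score: float  # the "Hate O'Meter" severity score, between 0 and 1

def triage(message, sender, score_fn, sensitivity=0.5):
    """Deliver a message directly, or hold it back with an alert.

    `score_fn` stands in for the trained classifier; `sensitivity` is the
    user-set dial deciding when a message is quarantined.
    """
    score = score_fn(message)
    if score >= sensitivity:
        # Quarantined: the recipient sees only the alert and can choose
        # to view the content or delete it unseen.
        return QuarantineAlert(sender=sender, hate_score=score)
    return message  # delivered as normal

# Toy scorer purely for demonstration.
def toy_score(msg):
    return 0.9 if "threat" in msg else 0.1

print(triage("hello there", "alice", toy_score))     # delivered unchanged
print(triage("this is a threat", "bob", toy_score))  # returns an alert
```

Lowering `sensitivity` quarantines more aggressively; raising it lets more through, mirroring the user-preference dial the researchers describe.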

This approach is akin to spam and malware filters, and researchers from the ‘Giving Voice to Digital Democracies’ project believe it could dramatically reduce the amount of hate speech people are forced to experience. They are aiming to have a prototype ready in early 2020.

“Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining,” said co-author and linguist Dr Stefanie Ullmann. “In fact, a lot of hate speech is actually generated by software such as Twitter bots.”

“Companies like Facebook, Twitter and Google generally respond reactively to hate speech,” said co-author and engineer Dr Marcus Tomalin. “This may be okay for those who don’t encounter it often. For others it’s too little, too late.”

“Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation,” he said.

Former US Secretary of State Hillary Clinton recently told a UK audience that hate speech posed a “threat to democracies”, in the wake of many women MPs citing online abuse as part of the reason they will no longer stand for election.

While in a Georgetown University address, Facebook CEO Mark Zuckerberg spoke of “broad disagreements over what qualifies as hate” and argued: “we should err on the side of greater expression”.

The researchers say their proposal is not a magic bullet, but it does sit between the “extreme libertarian and authoritarian approaches” of either entirely permitting or prohibiting certain language online.

Importantly, the user becomes the arbiter. “Many people don’t like the idea of an unelected corporation or micromanaging government deciding what we can and can’t say to each other,” said Tomalin.

“Our system will flag when you should be careful, but it’s always your call. It doesn’t stop people posting or viewing what they like, but it gives much needed control to those being inundated with hate.”

In the paper, the researchers refer to detection algorithms achieving 60% accuracy – not much better than chance. Tomalin’s machine learning lab has now got this up to 80%, and he anticipates continued improvement of the mathematical modelling.

Meanwhile, Ullmann gathers more ‘training data’: verified hate speech from which the algorithms can learn. This helps refine the ‘confidence scores’ that determine a quarantine and subsequent Hate O’Meter read-out, which could be set like a sensitivity dial depending on user preference.

A basic example might involve a word like ‘bitch’: a misogynistic slur, but also a legitimate term in contexts such as dog breeding. It’s the algorithmic analysis of where such a word sits syntactically – the types of surrounding words and semantic relations between them – that informs the hate speech score.
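A toy sketch of that context effect follows. The word lists, window size and scores are invented for illustration; the actual system uses trained statistical models over full sentence structures, not hand-written lists:

```python
# Invented illustration: the same ambiguous word scores differently
# depending on the words around it.
HOSTILE_CONTEXT = {"you", "stupid", "shut"}
BENIGN_CONTEXT = {"breeding", "kennel", "pedigree"}

def context_score(tokens, ambiguous_word="bitch", window=2):
    """Score an ambiguous word by inspecting its neighbouring words."""
    if ambiguous_word not in tokens:
        return 0.0
    i = tokens.index(ambiguous_word)
    neighbours = set(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    if neighbours & HOSTILE_CONTEXT:
        return 0.9  # likely a slur in this context
    if neighbours & BENIGN_CONTEXT:
        return 0.1  # likely the dog-breeding sense
    return 0.5      # no contextual signal either way

print(context_score("a pedigree bitch at the kennel".split()))  # low score
print(context_score("shut up you bitch".split()))               # high score
```

Even this crude window-based lookup shows why keyword blocking fails: the score comes from the surrounding words, not from the keyword itself.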

“Identifying individual keywords isn’t enough, we are looking at entire sentence structures and far beyond. Sociolinguistic information in user profiles and posting histories can all help improve the classification process,” said Ullmann.

Added Tomalin: “Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses.”

However, the researchers, who work in Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), say that – as with computer viruses – there will always be an arms race between hate speech and systems for limiting it.

The project has also begun to look at “counter-speech”: the ways people respond to hate speech. The researchers intend to feed into debates around how virtual assistants such as ‘Siri’ should respond to threats and intimidation.

Text by Fred Lewsey


Deutschlandfunk – Computer und Kommunikation (Computer and Communication)

11 January 2020

Interview with Dr Stefanie Ullmann

Listen to the interview here (in German)


BBC World Service – Digital Planet

25 November 2019

Interview with Dr Stefanie Ullmann

Listen to the interview here (relevant segment starting at 13:35)


Text by Stefanie Ullmann


Expert Bites – Bill Byrne

Professor Bill Byrne, Information Engineering, University of Cambridge

Summary

By Hannah Baker

22 November 2019

Professor Bill Byrne’s research is focused on statistical modelling of speech and language, with his current interest being statistical machine translation. Bill is part of the ‘Giving Voice to Digital Democracies’ team, which, like the ‘Expertise Under Pressure’ project, is funded by the Humanities & Social Change International Foundation.

At the beginning of the session, Bill gave an overview of his background, speaking about his interest in information theory and signal processing before the digital revolution and the widespread use of computers. During this period, Bill developed a deep understanding of the underlying mathematics of probability and statistics, building expertise in approaching speech recognition problems through probabilistic frameworks. Later in his career, he applied this expertise in statistical modelling to machine translation.

When machine translation research was emerging, Bill said, there was a tension between the engineers developing these models and linguists, as the statistical models did not necessarily adhere to the pure principles of language. A key example in use today, put forward by a member of the group, is Google Translate, where input words are converted to numbers, matched within the model, and then converted to words in the output language. A statistical modelling approach can perform translation well without any explicit knowledge of grammar or syntax, a clear departure from how linguists approach translation. An analogy was also put forward during the discussion: planes don’t fly like birds, but they achieve the same function in a different way. This led to a discussion about whether there is still a need to truly understand linguistic theory for application to machine translation, as well as questions about artificial intelligence (AI) mimicking the brain and cognitive abilities.
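The words-to-numbers pipeline described above can be sketched in a few lines. The vocabularies and the alignment table below are invented purely for illustration and bear no relation to Google Translate's actual models, which learn such correspondences statistically from vast parallel corpora:

```python
# Toy illustration: map input words to IDs, match the IDs against learned
# correspondences, and emit output-language words -- no grammar rules anywhere.
src_vocab = {"the": 0, "cat": 1, "sleeps": 2}   # English word -> numeric ID
tgt_vocab = {0: "le", 1: "chat", 2: "dort"}     # numeric ID -> French word
alignment = {0: 0, 1: 1, 2: 2}                  # "learned" ID correspondence

def translate(sentence):
    ids = [src_vocab[w] for w in sentence.split()]
    return " ".join(tgt_vocab[alignment[i]] for i in ids)

print(translate("the cat sleeps"))  # -> le chat dort
```

The point of the sketch is that nothing in it encodes grammar or syntax: translation falls out of numeric matching alone, which is what made the approach contentious among linguists.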

It was noted that speech and language processing is a fast-moving area of research that develops incrementally but is subject to jumps when new approaches, such as neural networks, emerge. It also became apparent that these statistical models can be applied beyond speech translation: a member of the group drew parallels with Renaissance Technologies, an American hedge fund specialising in systematic trading using quantitative models. This posed the question of what the drivers behind the development of speech and language processing were. In response, Bill felt that in the USA the development of these technologies had been state-driven through government policy, and he noted a period when funding was cut, known as the ‘AI winter’. One of the reasons for Bill’s move from speech recognition to translation was the availability of funding, an indication of funding’s impact on technological development and research. Once speech translation technology was more mature, Bill commented, corporate organisations began to show more of an interest.

During the 1990s, a priority of state-driven funding was the development of open-source datasets, which we continue to see in use today. An issue with open-source data was raised in terms of its impact on expertise, with the example of non-experts, such as teenagers playing on computers at home, being able to use the data and develop a system without a deep understanding of the mathematics that underlies it. This was discussed alongside the ethical and political response to the use of open-source datasets, the anxiety around the ‘black box’ and potential unintended consequences. However, Bill expressed support for open-source platforms, as they provide the transparency necessary for ongoing development.

 

Interview Questions

After the discussion took place, Bill was asked four questions about the understanding and changing nature of experts in speech and language processing. Below are his answers:

Considering your research and/or work in practice, what makes a good expert?

The field is vast with a huge amount of work being done, so an expert needs the ability to determine what is truly novel and what lines of research are emerging. This needs to be exploited to be able to develop a research agenda. As part of this, experts need to be able to make judgements and critique papers as to why the researchers are doing what they are doing and where the research fits in with everything else that is being done. 

What are the pressures experts face in your field?

The pressures follow from the previous answer: so much work is going on. The other thing to bear in mind is the traditional academic role. Apprenticeships and the pull from industry have played a huge part in changing the academic structure. Now people are developing expertise much more quickly than they used to, and it’s harder to have stable research groups within academic institutions.

Have you observed any significant changes occurring in recent times in the ways experts operate? 

The interesting thing is the use of social media. Experts used to make their way and get noticed through the peer-review process. Now the influence of social media has made it necessary for people to share their research through these platforms to get ideas out. That is not necessarily a bad thing, but it has meant that research groups have had to re-think their communication strategies.

Do you envision any changes in the role of experts in the future?

Expertise is much easier to claim than it was in the past. There is so much information out there that people can read and understand; they can quickly get up to speed. Researchers are more likely to move from field to field than previously. Hopefully there will still be a way for people to develop deep expertise as they become much more mobile between fields in the future.


Artificial Intelligence and Social Change

Talk

19 October 2019

Festival of Ideas, Cambridge



Dr Marcus Tomalin, Project Manager of Giving Voice to Digital Democracies, gave a talk at the Festival of Ideas on “Artificial Intelligence and Social Change” on 19 October 2019.

The Giving Voice to Digital Democracies project investigates the social impact of Artificially Intelligent Communications Technology (AICT). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

Text by Stefanie Ullmann and Marcus Tomalin

Video filmed and edited by Glenn Jobson


The Future of Artificial Intelligence: Language, Society, Technology

Third Workshop

30 September 2019

Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge

The Giving Voice to Digital Democracies project hosted its third workshop on 30th September 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

This workshop focused on the impact of artificial intelligence on society, specifically on language-based technologies at the intersection of AI and ICT (henceforth ‘Artificially Intelligent Communications Technologies’ or ‘AICT’) – namely, speech technology, natural language processing, smart telecommunications and social media. The social impact of these technologies is already becoming apparent. Intelligent conversational agents such as Siri (Apple), Cortana (Microsoft) and Alexa (Amazon) are already widely used, and, in the next 5 to 10 years, a new generation of Virtual Personal Assistants (VPAs) will emerge that will increasingly influence all aspects of our lives, from relatively mundane tasks (e.g. turning the heating on and off) to highly significant activities (e.g. influencing how we vote in national elections). Crucially, our interactions with these devices will be predominantly language-based.

Despite this, the specific linguistic, ethical, psychological, sociological, legal and technical challenges posed by AICT (specifically) have rarely received focused attention. Consequently, the workshop examined various aspects of the social impact of AICT-based systems in modern digital democracies, from both practical and theoretical perspectives. By doing so, it provided an important opportunity to consider how existing AICT infrastructures can be reconfigured to enable the resulting technologies to benefit the communities that use them.


Listen to the talks here:

Dr Marcus Tomalin (University of Cambridge, Project Manager)

 

Dr Ella McPherson (University of Cambridge)

 

Maria Luciana Axente (Pricewaterhouse Coopers)

 

Dr Shauna Concannon (University of Cambridge)

 

Dr Trisha Meyer (Free University of Brussels – VUB)

 

Text by Stefanie Ullmann and Marcus Tomalin

Videos filmed and edited by Glenn Jobson

The Future of Artificial Intelligence: Language, Gender, Technology – Stefanie Ullmann

The Future of Artificial Intelligence: Language, Gender, Technology

Second Workshop

17 May 2019

Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge

The Giving Voice to Digital Democracies project hosted its second workshop on 17th May 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

The project investigates the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions of this second workshop focused specifically on different aspects of the complex relationships between language, gender, and technology. The one-day event brought together experts and researchers from various academic disciplines, looking at questions of gender in the context of language-based AI from linguistic, philosophical, sociological, and technical perspectives.


Professor Alison Adam (Sheffield Hallam University) was the first speaker of the day, and she asked the very pertinent question of how relevant traditional feminist arguments and philosophical critiques of AI are today in relation to gender, knowledge and language. She pointed out that while older critiques focused on the distinction between machines and humans, newer critical approaches address concerns about fairness and bias. Despite these shifts, Adam stressed the abiding need to let feminist arguments inform the discussion.


Taking an approach based on formal semantics and computational psycholinguistics, Dr Heather Burnett (CNRS – Université Paris Diderot) presented the results of a study investigating the overuse of masculine pronouns in English and French. The talk ranged across numerous topics, including the way in which dominance relations can affect similarity judgments, making them asymmetric: in male-dominated professions women are likely to be considered similar to men, but in female-dominated professions men are unlikely to be considered similar to women. These asymmetries have implications for language use.


In his talk, Dr Dirk Hovy (Bocconi University) focused on the relation between gender and syntactic choices. He concluded, for instance, that how people identify in terms of gender (subconsciously) shapes the syntactic constructions they use (e.g., women use intensifiers [e.g., ‘very’] more often, while men use downtoners [e.g., ‘a bit’] more often). On the basis of a study of Wall Street Journal articles, he also noted that training data consisting mostly of female writing would in fact be beneficial for both men and women, since women’s writing has been shown to be more diverse overall. The importance of the corpora used for AICT research was emphasised repeatedly: any linguistic corpus is a sample of a language, but it is also a sample of a particular demographic (or set of demographics).


Dr Ruth Page (University of Birmingham) discussed ‘ugliness’ on Instagram and how the perception and representation of ‘ugly’ images on social media relate to identity and gender. She took a multimodal approach combining image and discourse analysis. Her research indicates that perceptions and discourses of ugliness are shifting on social media and, in particular, that users distinguish between playful, ironic illustrations of ugliness (using the hashtag #uglyselfie) and painful, negative posts (#ugly). While ‘ugly’ is much more frequent in relation to girls than boys, the opposite is true for ‘man’ and ‘woman’. She also showed that men favour self-deprecation more, whereas women are more likely to use self-mockery.


In her talk, Dr Stefanie Ullmann (University of Cambridge) presented a corpus study of representations of gender in the OPUS English-German parallel data. She showed that the data sets are strongly biased towards male forms, particularly in German occupation words. The results of her study also indicate that representations of men and women reflect traditional gender stereotypes, such as the assumptions that doctors are male and nurses are female, or that women are caretakers while men are dominant and powerful. Using such clearly skewed texts as training data for machine translation inevitably leads to biased results and errors in translation.
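The kind of frequency analysis behind such a corpus study can be sketched in a few lines of Python. This is a minimal illustration only: the word lists and the three-sentence "corpus" below are invented for demonstration, not Ullmann's actual lexicon or the OPUS data.

```python
from collections import Counter

# Illustrative word lists standing in for a full lexicon of German
# occupation terms; masculine vs feminine (movierte) forms.
MASCULINE = {"arzt", "lehrer", "ingenieur"}
FEMININE = {"ärztin", "lehrerin", "ingenieurin"}

def gender_form_counts(sentences):
    """Count masculine vs feminine occupation forms in tokenised sentences."""
    counts = Counter()
    for sentence in sentences:
        for token in sentence.lower().split():
            token = token.strip(".,;!?")
            if token in MASCULINE:
                counts["masculine"] += 1
            elif token in FEMININE:
                counts["feminine"] += 1
    return counts

# A toy stand-in for a parallel-corpus sample.
corpus = [
    "Der Arzt sprach mit dem Lehrer .",
    "Die Ärztin ist eine gute Ingenieurin .",
    "Der Ingenieur fragte den Arzt .",
]
print(gender_form_counts(corpus))  # Counter({'masculine': 4, 'feminine': 2})
```

Even on a toy sample, the imbalance between masculine and feminine forms becomes quantifiable, which is the kind of evidence such corpus studies report at scale.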


Finally, Dr Dong Nguyen (Alan Turing Institute, Utrecht University) took a computational sociolinguistic perspective on the relation between language and gender. She presented the results of an experimental study in which a system (TweetGenie) had been trained to predict the gender and age of people from tweets they had written. She showed that speakers construct their own identity linguistically, and that this process involves the gendered aspects of their language. Consequently, gender as manifested in written texts is fluid and variable, rather than something biological and fixed.
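To make the idea of predicting author demographics from text concrete, here is a deliberately tiny Naive Bayes sketch in the spirit of such systems. It is not TweetGenie itself: the training tweets, the two group labels, and the features (plain word counts) are all invented for illustration.

```python
import math
from collections import Counter

def train(labelled_tweets):
    """labelled_tweets: dict mapping label -> list of tweet strings.
    Returns per-label word probabilities with add-one smoothing."""
    vocab = {w for tweets in labelled_tweets.values()
             for t in tweets for w in t.lower().split()}
    models = {}
    for label, tweets in labelled_tweets.items():
        counts = Counter(w for t in tweets for w in t.lower().split())
        total = sum(counts.values())
        models[label] = {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}
    return models

def predict(models, tweet):
    """Return the label whose model gives the tweet the highest log-likelihood."""
    def score(label):
        probs = models[label]
        return sum(math.log(probs[w]) for w in tweet.lower().split() if w in probs)
    return max(models, key=score)

# Invented training data; the labels are placeholders, not real demographics.
models = train({
    "group_a": ["so very very excited today", "this is very lovely"],
    "group_b": ["a bit tired", "that was a bit much"],
})
print(predict(models, "very excited"))  # group_a
```

A real system would of course use far richer features and much more data; the point of the sketch is only that stylistic word choices carry a statistical signal about the author.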


The workshop ended with a roundtable discussion involving all the speakers, which gave the highly engaged audience a final chance to ask questions. It also provided an opportunity for the speakers, after hearing each other’s talks, to reconsider some core issues and to discuss overarching themes in more detail. One notable conclusion was that all the participants had encountered the same difficulty in addressing and representing non-binary notions of gender in their research: technology tends to impose binary gender categories, with very little to no data available for analysing other forms of gender identification.

The workshop demonstrated the acute contemporary relevance of the topic of gender in relation to language-based AI. The engaged participation of the audience, which included representatives from several tech companies, emphasised the importance of this issue when seeking to understand the social impact of language-based AI systems.

Eugen Bär (left, Humanities and Social Change International Foundation) and Una Yeung (right, Giving Voice to Digital Democracies Project Administrator)

Text by Stefanie Ullmann and Marcus Tomalin

Pictures taken by Imke van Heerden and Stefanie Ullmann

Videos filmed and edited by Glenn Jobson

The Future of Artificial Intelligence: Language, Ethics, Technology

Inaugural workshop

25 March 2019

Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge

The Giving Voice to Digital Democracies project hosted its inaugural workshop on 25th March 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

This significantly oversubscribed event brought together experts from academia, government, and industry, enabling a wide-ranging discussion with an engaged audience. The one-day workshop was opened by Project Manager Dr Marcus Tomalin, who summarised the main aims of the project and the workshop.


The focus was specifically on the ethical implications of Artificially Intelligent Communications Technology (AICT). While discussions about ethics often revolve around issues such as data protection and privacy, transparency and accountability (all of which are important concerns), the impact that language-based AI systems have upon our daily lives is a topic that has previously received comparatively little (academic) attention. Some of the central issues that merit careful consideration are:

  • What might more ethical AICT systems look like? 
  • How could users be better protected against hate speech on social media? 
  • (How) can we get rid of data bias inherent in AICT systems? 

These and other questions were not only key on the agenda for the workshop, but will continue to be central research objectives for the project over the next 3½ years. 

Olly Grender (House of Lords Select Committee on Artificial Intelligence) was the first main speaker, and she argued that we need to put ethics at the centre of AI development, something to which the UK is particularly well placed to contribute. She emphasised the need to equip people with a sufficiently deep understanding not only of AI, but also of the fundamentals of ethics, which will help to ensure that the prejudices of the past are not built into automated systems. She also emphasised the extent to which the government is focusing on these issues: the creation of the Centre for Data Ethics and Innovation is a conspicuous recent development, and numerous white papers about such matters have been, and will be, published. The forthcoming white paper concerning ‘online harm’ will be especially influential, and the Giving Voice to Digital Democracies project has been involved in preparing that paper.


In her talk, Dr Melanie Smallman (University College London, Alan Turing Institute) proposed a multi-scale ethical framework to combat social inequality caused and magnified by technology. In essence, she suggested that the ethical contexts at different levels of the hierarchy, from individual members of society to vast corporations, can differ greatly: something that seems ethically justifiable at one level may not be at another. These different scales need to be factored into the process of developing language-based AI systems. As Smallman reminded us, “we need to make sure that technology does something good”.


Dr Adrian Weller (University of Cambridge, Alan Turing Institute, The Centre for Data Ethics and Innovation) gave an overview of various ethical issues that arise in relation to cutting-edge AI systems. He emphasised that we must take measures to ensure that we can trust the AI systems we create. He argued that we need to make sure people have a better understanding of when AI systems are likely to perform well, and when they are likely to go awry. While such systems are extremely powerful and effective in many respects, they can also be alarmingly brittle, and can make mistakes (e.g., classificatory errors) of a kind that no human would make. 

In his talk, Dr Marcus Tomalin (University of Cambridge) stressed that traditional thinking about ethics is inadequate for discussions of AICT systems. A more appropriate ethical framework would be ontocentric rather than predominantly anthropocentric, and patient-oriented rather than merely agent-oriented. He also argued that algorithmic decision-making can be hard to analyse in relation to AICT systems. For instance, it is not at all simple to determine where and when a machine translation system makes the decision to translate a specific word in a specific way. Yet such ‘decisions’ can have serious ethical consequences. 


Professor Emily M. Bender (University of Washington) presented a typology of ethical risks in language technology and asked how the processes underlying NLP technologies can be made more transparent. Her work centres on foregrounding the characteristics of data sets in so-called ‘data statements’, which provide information about the data (e.g., its nature, whose language it contains, the speech situation, etc.) at all times. The underlying conviction is that such statements would help system designers to appreciate in advance the impact that a specific data set may have on the system being constructed (e.g., whether or not it would reinforce an existing bias).
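In machine-readable form, a data statement might look something like the following sketch. The field names loosely echo the categories Bender discusses (curation rationale, language variety, speaker demographics, speech situation), but both the schema and the corpus described are invented for illustration, not an official format.

```python
# Hypothetical required fields for a data statement; illustrative only.
REQUIRED_FIELDS = (
    "curation_rationale",
    "language_variety",
    "speaker_demographics",
    "speech_situation",
)

# A made-up data statement for an invented tweet corpus.
data_statement = {
    "curation_rationale": "Public tweets sampled to study regional slang.",
    "language_variety": "en-GB, informal social-media English",
    "speaker_demographics": "Self-reported UK users, ages 18-35",
    "speech_situation": "Asynchronous public posts, 2018-2019",
}

def missing_fields(statement, required=REQUIRED_FIELDS):
    """Return the required fields a data statement fails to document."""
    return [field for field in required if not statement.get(field)]

print(missing_fields(data_statement))  # []
```

A simple check like `missing_fields` hints at how such statements could be enforced in a data-release pipeline, so that undocumented data sets are flagged before a system is trained on them.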


Dr Margaret Mitchell (Google Research and Machine Intelligence) also discussed the problem of data bias. She showed that such biases are manifold and that they interact with machine learning processes at various stages and levels, something sometimes referred to as a ‘bias network effect’ or as ‘bias laundering’. Adopting an approach similar in spirit to the aforementioned ‘data statements’, she proposed the implementation of ‘model cards’ at the processing level.


The workshop ended with a roundtable discussion involving the various speakers, with many of the questions coming from the audience. This provided an opportunity to consider some of the core ideas in greater detail and to compare and contrast some of the ideas and approaches that had been presented earlier in the day.

The considerable interest that this inaugural workshop generated confirms once again the great need for genuinely interdisciplinary events of this kind, which bring together researchers and experts from technology, the humanities, and politics to reflect upon the social impact of the current generation of AI systems – and especially those systems that interact with us using language.

Text by Stefanie Ullmann and Marcus Tomalin

Pictures taken by Imke van Heerden and Stefanie Ullmann

Videos filmed and edited by Glenn Jobson