Posts by Stefanie Ullmann


Mindful of AI: Language, Technology and Mental Health

Virtual Event – 01 & 02 October 2020

 

Workshop overview

Convenors

Bill Byrne (University of Cambridge), Shauna Concannon (University of Cambridge), Ann Copestake (University of Cambridge), Ian Roberts (University of Cambridge), Marcus Tomalin (University of Cambridge), Stefanie Ullmann (University of Cambridge)

Language-based Artificial Intelligence (AI) is having an ever greater impact on how we communicate and interact. Whether overtly or covertly, such systems are essential components in smartphones, social media sites, streaming platforms, virtual personal assistants, and smart speakers. Long before the worldwide Covid-19 lockdowns, these devices and services were already affecting not only our daily routines and behaviours, but also our ways of thinking, our emotional well-being and our mental health. Social media sites create new opportunities for peer-group pressure, which can heighten feelings of anxiety, depression and loneliness (especially in young people); malicious Twitter bots can influence our emotional responses to important events; and online hate speech and cyberbullying can cause victims to have suicidal thoughts.

Consequently, there are frequent calls for stricter regulation of these technologies, and there are growing concerns about the ethical appropriateness of allowing companies to inculcate addictive behaviours to increase profitability. Infinite scrolls and ‘Someone is typing a comment’ indicators in messaging apps keep us watching and waiting, and we repeatedly return to check the number of ‘likes’ our posts have received. The underlying software has often been purposefully crafted to trigger biochemical responses in our brains (e.g. the release of serotonin and/or dopamine), and these neurotransmitters strongly influence our reward-related cognition. The powerful psychological impact of such technologies is not always a positive one. Indeed, it sometimes seems appropriate that those who interact with these technologies, and those who inject drugs, are all called ‘users’.

However, while AI-based communications technologies undoubtedly have the potential to harm our mental health, they can also offer forms of psychological support. Machine Learning systems can measure the physical and mental well-being of users by evaluating their language use in social media posts, and a variety of empathetic therapy, care, and mental health chatbots, apps, and conversational agents are already widely available. These applications demonstrate some of the ways in which well-designed language-based AI technologies can offer significant psychological and practical support to especially vulnerable social groups. Indeed, medical professionals have started to consider the possibility that the future of mental healthcare will inevitably be digital, at least in part. Yet, despite their potential benefits, developments such as these raise a number of non-trivial regulatory and ethical concerns.

This two-day virtual interdisciplinary workshop brings together a diverse group of researchers from academia, industry and government, with specialisms in many different disciplines, to discuss the different effects, both positive and negative, that AI-based communications technologies are currently having, and will have, on mental health and well-being.

Speakers & Structure of Event:

Thursday 1 October

Session 1: Social Media and Mental Health

Speakers: Michelle O’Reilly (University of Leicester), Amy Orben (University of Cambridge)

Session 2: AI and Suicide Risk Detection

Speakers: Glen Coppersmith (Qntfy), Eileen Bendig (Ulm University)

Friday 2 October

Session 3: From Understanding to Automating Therapeutic Dialogues

Speakers: Raymond Bond (University of Ulster), Rose McCabe (City, University of London)

Session 4: The Future of Digital Mental Healthcare

Speakers: Valentin Tablan (IESO Digital Health), Maria Liakata (Queen Mary University of London)

Detailed Programme

 

Registration

The workshop comprises four sessions, and you can register for more than one. If you wish to attend the entire workshop, please register for all four sessions. Follow the links below to register for the individual sessions on Eventbrite:

Session 1: Social Media and Mental Health

Session 2: AI and Suicide Risk Detection

Session 3: Automating Therapeutic Dialogues

Session 4: The Future of Digital Mental Healthcare

 

Queries: Una Yeung (uy202@cam.ac.uk)

Image by GaudiLab/Shutterstock.com


Tackling the Problem of Online Hate Speech

06 May 2020

by Marcus Tomalin and Stefanie Ullmann

Source: shutterstock/asiandelight

In recent years, the automatic detection of online hate speech has become an active research topic in machine learning. This has been prompted by increasing anxieties about the prevalence of hate speech on social media, and the psychological and societal harms that offensive messages can cause. These anxieties have only increased in recent weeks as many countries have been in lockdown due to the Covid-19 pandemic (L1GHT Toxicity during Coronavirus report). António Guterres, the Secretary-General of the United Nations, has explicitly acknowledged that the ongoing crisis has caused a marked increase in hate speech.

Quarantining redirects control back into the hands of the user. No one should be at the mercy of someone else’s senseless hate and abuse, and quarantining protects users whilst managing the balancing act of ensuring free speech and avoiding censorship.

Stefanie Ullmann

Online hate speech presents particular problems, especially in modern liberal democracies, and dealing with it forces us to reflect carefully upon the tension between free speech (i.e., allowing people to say what they want) and protective censorship (i.e., safeguarding vulnerable groups from abusive or threatening language). Most social media sites have adopted self-imposed definitions, guidelines, and policies for handling toxic messages, and human beings employed as content moderators determine whether or not certain posts are offensive and should be removed. However, this framework is unsustainable. For a start, the offensive posts are only removed retrospectively, after the harm has already been caused. Further, there are far too many instances of hate speech for human moderators to assess them all. In addition, it is problematical that unelected corporations such as Facebook and Twitter should be the gate-keepers of free speech. Who are they to regulate our democracies by deciding what we can and can’t say?

Demonstration of app

Towards the end of 2019, two Cambridge-based researchers, Dr Marcus Tomalin and Dr Stefanie Ullmann, proposed a different approach. Their framework demonstrated how an automated hate speech detection system could be used to identify a message as being offensive before it was posted. The message would then be temporarily quarantined, and the intended recipient would receive a warning message, indicating the degree to which the quarantined message may be offensive. That person could then choose either to read the message, or else to prevent it appearing. This approach achieves an appropriate balance between libertarian and authoritarian tendencies: it allows people to write whatever they want, but recipients are also free to read only those messages they wish to read. Crucially, this framework obviates the need for corporations or national governments to make decisions about which messages are acceptable and which are not. As Dr Ullmann puts it, “quarantining redirects control back into the hands of the user. No one should be at the mercy of someone else’s senseless hate and abuse, and quarantining protects users whilst managing the balancing act of ensuring free speech and avoiding censorship.”
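
For readers who want a more concrete sense of how such a system might sit in a messaging pipeline, the following Python sketch outlines the basic quarantining logic. It is purely illustrative and is not the project's code: the `classify` function, the default threshold of 0.5 and the wording of the warning are assumptions, standing in for whatever trained detector and interface a real deployment would use.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class QuarantineDecision:
    quarantined: bool
    severity: float           # 0.0 (benign) to 1.0 (highly offensive)
    warning: Optional[str]    # shown to the recipient instead of the message text

def triage_message(text: str, sender: str,
                   classify: Callable[[str], float],
                   threshold: float = 0.5) -> QuarantineDecision:
    """Score an incoming message and quarantine it if the score exceeds
    the recipient's chosen sensitivity threshold.

    `classify` stands in for any trained hate speech detector that maps
    a string to an estimated probability of offensiveness.
    """
    severity = classify(text)
    if severity >= threshold:
        warning = (f"A message from {sender} has been quarantined "
                   f"(estimated offensiveness: {severity:.0%}). "
                   "Tap to view it, or discard it unseen.")
        return QuarantineDecision(True, severity, warning)
    return QuarantineDecision(False, severity, None)

# Example with a stand-in classifier (a real system would use a trained model):
decision = triage_message("You are worthless and should disappear",
                          sender="anonymous_user",
                          classify=lambda t: 0.92)
print(decision.warning)
```

In the demo app described below, the equivalent of this warning step is the blurred post that the recipient can actively choose to reveal.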

 

Nicholas Foong, a 4th-year student in the Engineering Department supervised by Dr Tomalin, has now developed both a state-of-the-art automatic hate speech detection system and an app that demonstrates how the system can be used to quarantine offensive messages by blurring them until the recipient actively chooses to read them. An Android version of the app is available, along with a short demo video of the app in action.

The state-of-the-art system correctly identifies up to 91% of offensive posts. In the app, it is used to automatically detect and quarantine hateful posts in a simulated social media feed in real time. The app demonstrates that the trained system can run locally on mobile phones, taking up just 9MB of space and requiring no internet connection to function.

Nicholas Foong, app developer, Department of Engineering, University of Cambridge

Despite these promising developments, there is still a lot of work that needs to be done if the problem of online hate speech is going to be solved convincingly. The detection systems themselves need to be able to cope with different linguistic registers and styles (e.g., irony, satire), and the training data must be annotated accurately, to avoid introducing unwanted biases. In addition, since hate speech increasingly contains both words and images, the next generation of automated detection systems will need to handle multimodal input. Nonetheless, the quarantining framework offers an effective practical way of incorporating such technologies into our regular online interactions. And, as we adjust to life in lockdown, we can perhaps appreciate more than ever how quarantining can help to keep us all safe.


Quarantining Online Hate Speech

Research Publication

“Quarantining Online Hate Speech”

Ethics and Information Technology

10 October 2019

Press Release by Cambridge University

https://www.cam.ac.uk/research/news/online-hate-speech-could-be-contained-like-a-computer-virus-say-researchers

17 December 2019

Artificial intelligence is being developed that will allow advisory ‘quarantining’ of hate speech in a manner akin to malware filters – offering users a way to control exposure to ‘hateful content’ without resorting to censorship.

We can empower those at the receiving end of the hate speech poisoning our online discourses

Marcus Tomalin

The spread of hate speech via social media could be tackled using the same ‘quarantine’ approach deployed to combat malicious software, according to University of Cambridge researchers.

Definitions of hate speech vary depending on nation, law and platform, and just blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example.

As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended “psychological harm” is inflicted, with armies of moderators required to judge every case.

This is the new front line of an ancient debate: freedom of speech versus poisonous language.

Now, an engineer and a linguist have published a proposal in the journal Ethics and Information Technology that harnesses cyber security techniques to give control to those targeted, without resorting to censorship.

Cambridge language and machine learning experts are using databases of threats and violent insults to build algorithms that can provide a score for the likelihood of an online message containing forms of hate speech.

As these algorithms get refined, potential hate speech could be identified and “quarantined”. Users would receive a warning alert with a “Hate O’Meter” – the hate speech severity score – the sender’s name, and an option to view the content or delete unseen.

This approach is akin to spam and malware filters, and researchers from the ‘Giving Voice to Digital Democracies’ project believe it could dramatically reduce the amount of hate speech people are forced to experience. They are aiming to have a prototype ready in early 2020.

“Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining,” said co-author and linguist Dr Stefanie Ullmann. “In fact, a lot of hate speech is actually generated by software such as Twitter bots.”

“Companies like Facebook, Twitter and Google generally respond reactively to hate speech,” said co-author and engineer Dr Marcus Tomalin. “This may be okay for those who don’t encounter it often. For others it’s too little, too late.”

“Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation,” he said.

Former US Secretary of State Hillary Clinton recently told a UK audience that hate speech posed a “threat to democracies”, in the wake of many women MPs citing online abuse as part of the reason they will no longer stand for election.

While in a Georgetown University address, Facebook CEO Mark Zuckerberg spoke of “broad disagreements over what qualifies as hate” and argued: “we should err on the side of greater expression”.

The researchers say their proposal is not a magic bullet, but it does sit between the “extreme libertarian and authoritarian approaches” of either entirely permitting or prohibiting certain language online.

Importantly, the user becomes the arbiter. “Many people don’t like the idea of an unelected corporation or micromanaging government deciding what we can and can’t say to each other,” said Tomalin.

“Our system will flag when you should be careful, but it’s always your call. It doesn’t stop people posting or viewing what they like, but it gives much needed control to those being inundated with hate.”

In the paper, the researchers refer to detection algorithms achieving 60% accuracy – not much better than chance. Tomalin’s machine learning lab has now got this up to 80%, and he anticipates continued improvement of the mathematical modelling.

Meanwhile, Ullmann gathers more ‘training data’: verified hate speech from which the algorithms can learn. This helps refine the ‘confidence scores’ that determine a quarantine and subsequent Hate O’Meter read-out, which could be set like a sensitivity dial depending on user preference.

A basic example might involve a word like ‘bitch’: a misogynistic slur, but also a legitimate term in contexts such as dog breeding. It’s the algorithmic analysis of where such a word sits syntactically – the types of surrounding words and semantic relations between them – that informs the hate speech score.
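
The contrast can be made concrete with a deliberately naive keyword filter, sketched below in Python; the word list and example sentences are illustrative only. Because the filter flags every occurrence of the word, it cannot separate the dog-breeding sense from the abusive one, which is exactly why the researchers' approach analyses the surrounding words and sentence structure instead.

```python
# A deliberately naive keyword filter, to show why keyword matching alone
# is too blunt: it cannot tell benign uses of a word from abusive ones.
# (The word list and example messages are illustrative only.)

FLAGGED_WORDS = {"bitch"}

def keyword_filter(text: str) -> bool:
    """Return True if the message contains any flagged keyword."""
    tokens = text.lower().split()
    return any(token.strip(".,!?") in FLAGGED_WORDS for token in tokens)

messages = [
    "The bitch whelped six healthy puppies last night.",  # dog breeding: benign
    "Shut up, you stupid bitch.",                          # misogynistic abuse
]

for msg in messages:
    print(keyword_filter(msg), "-", msg)

# Both messages are flagged, even though only the second is hateful.
# Separating them requires a model that considers the surrounding words
# and the semantic relations between them.
```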

“Identifying individual keywords isn’t enough, we are looking at entire sentence structures and far beyond. Sociolinguistic information in user profiles and posting histories can all help improve the classification process,” said Ullmann.

Added Tomalin: “Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses.”

However, the researchers, who work in Cambridge’s Centre for Research into Arts, Humanities and Social Sciences (CRASSH), say that – as with computer viruses – there will always be an arms race between hate speech and systems for limiting it.

The project has also begun to look at “counter-speech”: the ways people respond to hate speech. The researchers intend to feed into debates around how virtual assistants such as ‘Siri’ should respond to threats and intimidation.

Text by Fred Lewsey


Deutschlandfunk – Computer und Kommunikation (Computer and Communication)

11 January 2020

Interview with Dr Stefanie Ullmann

Listen to the interview here (in German)


BBC World Service – Digital Planet

25 November 2019

Interview with Dr Stefanie Ullmann

Listen to the interview here (the relevant section starts at 13:35)


Text by Stefanie Ullmann


Fact-Checking Hackathon


10 January 2020, 10:00 – 12 January 2020, 16:00

Room LR4, Baker Building, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ


Overview

Fake news, misinformation and disinformation are being created and circulated online with unprecedented speed and scale. There are concerns that this poses a serious threat to our modern digital societies by skewing public opinion about important issues and maliciously interfering with national election campaigns.

Fact-checking is an increasingly vital approach for tackling the rapid spread of false claims online. Specifically, there is an urgent need for automated systems that detect, extract and classify incorrect information in real time; and linguistic analyses of argument structure, entailment, stance marking, and evidentiality can assist the development of such systems.

We want to bring together people with different kinds of expertise to develop new approaches for tackling the problems posed by fake news, misinformation and disinformation. Taking an existing automated fact-checking system as a baseline, the main hackathon task will be to find ways of improving its performance. The experimental framework will be that used for the FEVER: Fact Extraction and VERification challenge (http://fever.ai). 
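
As a rough orientation for participants, a FEVER-style task can be thought of as a three-stage pipeline: retrieve candidate documents, select evidence sentences, then classify the claim against that evidence. The Python sketch below shows only this overall structure; the placeholder functions and the techniques suggested in the comments are assumptions, not the baseline system provided at the hackathon.

```python
# Schematic outline of a FEVER-style fact-checking pipeline:
# document retrieval -> evidence selection -> claim verification.
from typing import List, Tuple

LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")

def retrieve_documents(claim: str) -> List[str]:
    """Find candidate articles likely to mention the claim."""
    raise NotImplementedError  # e.g. TF-IDF or entity-based retrieval

def select_evidence(claim: str, documents: List[str]) -> List[str]:
    """Pick the individual sentences most relevant to the claim."""
    raise NotImplementedError  # e.g. a sentence-ranking model

def classify_claim(claim: str, evidence: List[str]) -> str:
    """Label the claim given the selected evidence (one of LABELS)."""
    raise NotImplementedError  # e.g. a natural language inference model

def fact_check(claim: str) -> Tuple[str, List[str]]:
    """Run the full pipeline and return the verdict plus the evidence used."""
    docs = retrieve_documents(claim)
    evidence = select_evidence(claim, docs)
    return classify_claim(claim, evidence), evidence
```

Improving the baseline can mean improving any one of these stages, which is why participants without coding experience can still contribute, for example by analysing where the evidence selection or the final classification goes wrong linguistically.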

 

Is it for me?

The task of dealing with false claims online is necessarily an interdisciplinary task. Therefore, this hackathon will create a collaborative environment for participants from a variety of backgrounds to come together to work in teams. Whether you already have strong coding skills, a specific interest in disciplines such as information engineering or natural language processing, a familiarity with linguistic theory, or even an interest in the philosophy of language, you will certainly be able to make valuable contributions during the hackathon!

In particular we encourage undergraduates and postgraduates:

  • in Engineering / Computer Science, with good programming skills (esp. Python) 
  • in Linguistics / Philosophy / Psychology / Sociology
  • with an interest in language-based AI technologies 

 

Do I need to be able to code?

There will be a variety of ways to get involved and contribute during the hackathon, so coding experience is not essential. For instance, participants with a background in linguistics can analyse the linguistic data in detail, and then work together with coders so that their insights can improve the baseline system.

For those participants who would like to learn more about coding, there will be introductory sessions on Python during the hackathon – so this will be a good opportunity to dip your toe in the water!

 

Why should I attend?

  • A chance to collaborate in interdisciplinary teams to address a language-based technology problem that has huge contemporary importance.
  • An opportunity to learn about the challenges of developing an automated fact-checking system, and benefit from advice and insights from fact-checking experts.
  • A chance to learn Python, if you are new to coding.

 

Further details

The event runs from Friday to Sunday and attendees are expected to participate throughout.

Lunch will be provided on all three days, and there will be coffee and snacks throughout the hackathon, to keep you going!

If you have any questions about the event or would like to discuss any specific requirements, please contact Shauna Concannon.

Image by igorstevanovic/Shutterstock.com


Artificial Intelligence and Social Change

Talk

19 October 2019

Festival of Ideas, Cambridge



Dr Marcus Tomalin, Project Manager of Giving Voice to Digital Democracies, gave a talk at the Festival of Ideas on “Artificial Intelligence and Social Change” on 19 October 2019.

The Giving Voice to Digital Democracies project investigates the social impact of Artificially Intelligent Communications Technology (AICT). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

Text by Stefanie Ullmann and Marcus Tomalin

Video filmed and edited by Glenn Jobson


The Future of Artificial Intelligence: Language, Society, Technology

Third workshop

30 September 2019

Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge

The Giving Voice to Digital Democracies project hosted its third workshop on 30th September 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

This workshop focused on the impact of artificial intelligence on society, specifically on language-based technologies at the intersection of AI and ICT (henceforth ‘Artificially Intelligent Communications Technologies’ or ‘AICT’) – namely, speech technology, natural language processing, smart telecommunications and social media. The social impact of these technologies is already becoming apparent. Intelligent conversational agents such as Siri (Apple), Cortana (Microsoft) and Alexa (Amazon) are already widely used, and, in the next 5 to 10 years, a new generation of Virtual Personal Assistants (VPAs) will emerge that will increasingly influence all aspects of our lives, from relatively mundane tasks (e.g. turning the heating on and off) to highly significant activities (e.g. influencing how we vote in national elections). Crucially, our interactions with these devices will be predominantly language-based.

Despite this, the specific linguistic, ethical, psychological, sociological, legal and technical challenges posed by AICT (specifically) have rarely received focused attention. Consequently, the workshop examined various aspects of the social impact of AICT-based systems in modern digital democracies, from both practical and theoretical perspectives. By doing so, it provided an important opportunity to consider how existing AICT infrastructures can be reconfigured to enable the resulting technologies to benefit the communities that use them.


Listen to the talks here:

Dr Marcus Tomalin (University of Cambridge, Project Manager)

Dr Ella McPherson (University of Cambridge)

Maria Luciana Axente (PricewaterhouseCoopers)

Dr Shauna Concannon (University of Cambridge)

Dr Trisha Meyer (Free University of Brussels – VUB)

Text by Stefanie Ullmann and Marcus Tomalin

Videos filmed and edited by Glenn Jobson


The Future of Artificial Intelligence: Language, Society, Technology

This workshop, the third in a series on the future of artificial intelligence, will focus on the impact of artificial intelligence on society, specifically on language-based technologies at the intersection of AI and ICT (henceforth ‘Artificially Intelligent Communications Technologies’ or ‘AICT’) – namely, speech technology, natural language processing, smart telecommunications and social media. The social impact of these technologies is already becoming apparent. Intelligent conversational agents such as Siri (Apple), Cortana (Microsoft) and Alexa (Amazon) are already widely used, and, in the next 5 to 10 years, a new generation of Virtual Personal Assistants (VPAs) will emerge that will increasingly influence all aspects of our lives, from relatively mundane tasks (e.g. turning the heating on and off) to highly significant activities (e.g. influencing how we vote in national elections). Crucially, our interactions with these devices will be predominantly language-based.

Despite this, the specific linguistic, ethical, psychological, sociological, legal and technical challenges posed by AICT (specifically) have rarely received focused attention. Consequently, the workshop will examine various aspects of the social impact of AICT-based systems in modern digital democracies, from both practical and theoretical perspectives. By doing so, it will provide an important opportunity to consider how existing AICT infrastructures can be reconfigured to enable the resulting technologies to benefit the communities that use them.

Speakers

Maria Luciana Axente (PricewaterhouseCoopers)

Shauna Concannon (University of Cambridge)

Sarah Connolly (UK Department for Digital, Culture, Media & Sport)

Ella McPherson (University of Cambridge)

Trisha Meyer (Free University of Brussels – VUB)

Jonnie Penn (University of Cambridge)

The workshop is organised by Giving Voice to Digital Democracies, a research project that is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.

Giving Voice to Digital Democracies explores the social impact of Artificially Intelligent Communications Technology – that is, AI systems that use speech recognition, speech synthesis, dialogue modelling, machine translation, natural language processing and/or smart telecommunications as interfaces. Due to recent advances in machine learning, these technologies are already rapidly transforming our modern digital democracies. While they can certainly have a positive impact on society (e.g. by promoting free speech and political engagement), they also offer opportunities for distortion and deception. Unbalanced data sets can reinforce problematical social biases; automated Twitter bots can drastically increase the spread of malinformation and hate speech online; and the responses of automated Virtual Personal Assistants during conversations about sensitive topics (e.g. suicidal tendencies, religion, sexual identity) can have serious consequences.

Responding to these increasingly urgent concerns, this project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology and political theory to develop design objectives for the creation of AICT systems that are more ethical, trustworthy and transparent. These technologies will have the potential to affect more positively the kinds of social change that will shape modern digital democracies in the immediate future.

Please register for the workshop here.

Queries: Una Yeung (uy202@cam.ac.uk)

Image by Metamorworks/Shutterstock.com


The Future of Artificial Intelligence: Language, Gender, Technology

Second workshop

17 May 2019

Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge

The Giving Voice to Digital Democracies project hosted its second workshop on 17th May 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

The project investigates the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions of this second workshop focused specifically on different aspects of the complex relationships between language, gender, and technology. The one-day event brought together experts and researchers from various academic disciplines, looking at questions of gender in the context of language-based AI from linguistic, philosophical, sociological, and technical perspectives.


Professor Alison Adam (Sheffield Hallam University) was the first speaker of the day, and she asked the very pertinent question of how relevant traditional feminist arguments and philosophical critiques of AI are nowadays in relation to gender, knowledge, and language. She pointed out that while older critiques focused on the distinction between machines and humans, newer critical approaches address concerns about fairness and bias. Despite these shifts, Adam stressed the abiding need to allow feminist arguments to inform the discussion.


Taking an approach based on formal semantics and computational psycholinguistics, Dr Heather Burnett (CNRS – Université Paris Diderot) presented the results of a study investigating the overuse of masculine pronouns in English and French. The talk ranged across numerous topics, including the way in which dominance relations can affect similarity judgments, making them no longer commutative. For instance, in male-dominated professions women are likely to be considered similar to men, but in female-dominated professions men are unlikely to be considered similar to women. These asymmetries have implications for language use.


In his talk, Dr Dirk Hovy (Bocconi University) focused on the relation between gender and syntactic choices. He concluded, for instance, that how people identify in terms of gender (subconsciously) determines the syntactic constructions they use in language (e.g., women use intensifiers [e.g., ‘very’] more often, while men use downtoners [e.g., ‘a bit’] more often). On the basis of a study of Wall Street Journal articles, he also noted that training data consisting mostly of female writing would in fact be beneficial for both men and women, as women’s writing has been shown to be more diverse overall. The importance of the corpora used for AICT research was emphasised repeatedly. Any linguistic corpus is a sample of a language, but it is also a sample of a particular demographic (or set of demographics).


Dr Ruth Page (University of Birmingham) discussed ‘ugliness’ on Instagram and how the perception and representation of ‘ugly’ images on social media relate to identity and gender. She took a multimodal approach combining image and discourse analysis. Her research indicates that perceptions and discourses of ugliness are shifting on social media and, particularly, that users distinguish between playful and ironic illustrations of ugliness (using the hashtag #uglyselfie) and painful, negative posts (#ugly). While ‘ugly’ is much more frequent in relation to girls than boys, the opposite is true for men and women. She also showed that men favour self-deprecation more, whereas women are more likely to use self-mockery.


In her talk, Dr Stefanie Ullmann (University of Cambridge) presented a corpus study of representations of gender in the OPUS English-German parallel data. She showed that the data sets are strongly biased towards male forms, particularly in German occupation words. The results of her study also indicate that representations of men and women reflect traditional gender stereotypes, such as the assumptions that doctors are male and nurses are female, or that women are caretakers while men are dominant and powerful. Using such clearly skewed texts as training data for machine translation inevitably leads to biased results and errors in translation.
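
As a simple illustration of the kind of frequency count that reveals such a skew, the Python snippet below tallies masculine and feminine German occupation forms in a handful of invented sentences. It is a toy example only; the word pairs, sentences and counting method are assumptions and do not reproduce the methodology of the study, which used the OPUS English-German parallel corpora.

```python
from collections import Counter
import re

# Toy word pairs (masculine form, feminine form); illustrative only.
OCCUPATION_PAIRS = {
    "doctor": ("arzt", "ärztin"),
    "nurse": ("krankenpfleger", "krankenschwester"),
    "engineer": ("ingenieur", "ingenieurin"),
}

def count_forms(sentences, pairs):
    """Count how often each masculine and feminine form occurs."""
    tokens = Counter(
        re.sub(r"\W+", "", word.lower())
        for sentence in sentences
        for word in sentence.split()
    )
    return {
        occupation: {"masculine": tokens[m], "feminine": tokens[f]}
        for occupation, (m, f) in pairs.items()
    }

# Invented example sentences standing in for a real corpus.
corpus = [
    "Der Arzt untersuchte den Patienten.",
    "Die Krankenschwester half dem Arzt.",
    "Der Ingenieur präsentierte den Entwurf.",
]
print(count_forms(corpus, OCCUPATION_PAIRS))
```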


Finally, Dr Dong Nguyen (Alan Turing Institute, University of Utrecht) took a computational sociolinguistic perspective on the relation between language and gender. She presented the results of an experimental study in which a system (TweetGenie) had been trained to predict the gender and age of people based on the tweets they had written. She showed how speakers construct their own identity linguistically, and how this process involves the gendered aspects of their language. Consequently, gender as manifest in written texts is fluid and variable, rather than something biological and fixed.


The workshop ended with a roundtable discussion involving all speakers, which gave the very engaged and interested audience a final chance to ask questions. It also provided an opportunity for the speakers, after hearing each other’s talks, to reconsider some core issues and discuss overarching themes in more detail. One notable conclusion from the discussion was that all participants had similarly experienced the difficulty of addressing and representing non-binary gender notions in their research. It was observed that technology tends to impose binary gender, with very little to no data available for the analysis of other forms of gender identification.

The workshop demonstrated the acute contemporary relevance of the topic of gender in relation to language-based AI. The engaged participation of the audience, which included representatives from several tech companies, emphasised the importance of this issue when seeking to understand the social impact of language-based AI systems.

Eugen Bär (left, Humanities and Social Change International Foundation) and Una Yeung (right, Giving Voice to Digital Democracies Project Administrator)

Text by Stefanie Ullmann and Marcus Tomalin

Pictures taken by Imke van Heerden and Stefanie Ullmann

Videos filmed and edited by Glenn Jobson


The Future of Artificial Intelligence: Language, Ethics, Technology

Inaugural workshop

25 March 2019

Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge

The Giving Voice to Digital Democracies project hosted its inaugural workshop on 25th March 2019 at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). The project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

This significantly oversubscribed event brought together experts from academia, government, and industry, enabling a diverse conversation and discussion with an engaged audience. The one-day workshop was opened by Project Manager Dr Marcus Tomalin who summarised the main purposes of the project and workshop.


The focus was specifically on the ethical implications of Artificially Intelligent Communications Technology (AICT). While discussions about ethics often revolve around issues such as data protection and privacy, transparency and accountability (all of which are important concerns), the impact that language-based AI systems have upon our daily lives is a topic that has previously received comparatively little (academic) attention. Some of the central issues that merit careful consideration are:

  • What might more ethical AICT systems look like? 
  • How could users be better protected against hate speech on social media? 
  • (How) can we get rid of data bias inherent in AICT systems? 

These and other questions were not only key items on the agenda for the workshop, but will continue to be central research objectives for the project over the next 3½ years.

Olly Grender (House of Lords Select Committee on Artificial Intelligence) was the first main speaker, and she argued that we need to put ethics at the centre of AI development. This is something to which the UK is particularly well-placed to contribute. She emphasised the need to equip people with a sufficiently deep understanding not only of AI, but also of the fundamentals of ethics. This will help to ensure that the prejudices of the past are not built into automated systems. She also emphasised the extent to which the government is focusing on these issues. The creation of the Centre for Data Ethics and Innovation is a conspicuous recent development, and numerous white papers about such matters have been, and will be, published. The forthcoming white paper concerning ‘online harm’ will be especially influential, and the Giving Voice to Digital Democracies project has been involved in preparing that paper.


In her talk, Dr Melanie Smallman (University College London, Alan Turing Institute) proposed a multi-scale ethical framework to combat social inequality caused and magnified by technology. In essence, she suggested that the ethical contexts at different levels of the hierarchy, from individual members of society to vast corporations, can differ greatly. Something that seems ethically justifiable at one level may not be at another. These different scales need to be factored into the process of developing language-based AI systems. As Smallman reminded us, “we need to make sure that technology does something good”.


Dr Adrian Weller (University of Cambridge, Alan Turing Institute, The Centre for Data Ethics and Innovation) gave an overview of various ethical issues that arise in relation to cutting-edge AI systems. He emphasised that we must take measures to ensure that we can trust the AI systems we create. He argued that we need to make sure people have a better understanding of when AI systems are likely to perform well, and when they are likely to go awry. While such systems are extremely powerful and effective in many respects, they can also be alarmingly brittle, and can make mistakes (e.g., classificatory errors) of a kind that no human would make. 

In his talk, Dr Marcus Tomalin (University of Cambridge) stressed that traditional thinking about ethics is inadequate for discussions of AICT systems. A more appropriate ethical framework would be ontocentric rather than predominantly anthropocentric, and patient-oriented rather than merely agent-oriented. He also argued that algorithmic decision-making can be hard to analyse in relation to AICT systems. For instance, it is not at all simple to determine where and when a machine translation system makes the decision to translate a specific word in a specific way. Yet such ‘decisions’ can have serious ethical consequences. 


Professor Emily M. Bender (University of Washington) presented a typology of ethical risks in language technology and asked the question: ‘how can we make the processes underlying NLP technologies more transparent?’ Her work centres on the foregrounding of characteristics of data sets in so-called ‘data statements’, which provide information about the data (e.g., its nature, whose language it represents, the speech situation, etc.) at all times. The underlying conviction is that such statements would help system designers to appreciate in advance the impact that a specific data set may have on the system being constructed (e.g., whether or not it would reinforce an existing bias).
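
A data statement can also be imagined as a small, machine-readable record that travels with the data set. The sketch below is a hypothetical illustration: the field names loosely follow the categories Bender describes (curation rationale, language variety, speaker and annotator demographics, speech situation, text characteristics), but the exact schema and the example values are assumptions rather than her published template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataStatement:
    """A hypothetical machine-readable data statement for an NLP data set."""
    curation_rationale: str
    language_variety: str           # e.g. dialect/register covered
    speaker_demographics: str
    annotator_demographics: str
    speech_situation: str           # e.g. public vs private, synchronous vs not
    text_characteristics: str
    known_limitations: List[str] = field(default_factory=list)

# Example values are invented, for illustration only.
statement = DataStatement(
    curation_rationale="Posts sampled to train a hate speech classifier.",
    language_variety="Informal social media English (en-GB/en-US).",
    speaker_demographics="Self-selected platform users; demographics unknown.",
    annotator_demographics="Three crowdworkers per post; majority vote.",
    speech_situation="Public, asynchronous posts collected in 2019.",
    text_characteristics="Short messages with heavy use of slang and hashtags.",
    known_limitations=["Sarcasm and reclaimed slurs are under-annotated."],
)
print(statement)
```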


Dr Margaret Mitchell (Google Research and Machine Intelligence) also discussed the problem of data bias. She showed that such biases are manifold and that they interact with machine learning processes at various stages and levels. This is sometimes referred to as the ‘bias network effect’ or ‘bias laundering’. Adopting an approach similar in spirit to the aforementioned ‘data statements’, she proposed the implementation of ‘model cards’ at the processing level.


The workshop ended with a roundtable discussion involving the various speakers, with many of the questions coming from the audience. This provided an opportunity to consider some of the core ideas in greater detail and to compare and contrast some of the ideas and approaches that had been presented earlier in the day.

The considerable interest that this inaugural workshop generated confirms once again the great need for genuinely interdisciplinary events of this kind, which bring together researchers and experts from technology, the humanities, and politics to reflect upon the social impact of the current generation of AI systems – and especially those systems that interact with us using language.

Text by Stefanie Ullmann and Marcus Tomalin

Pictures taken by Imke van Heerden and Stefanie Ullmann

Videos filmed and edited by Glenn Jobson


The Future of Artificial Intelligence: Language, Gender, Technology

A report of the event as well as videos of the talks can be found here.

The workshop will consider the social impact of Artificially Intelligent Communications Technology (AICT). Specifically, the talks and discussions will focus on different aspects of the complex relationships between language, gender, and technology. These issues are of particular relevance in an age when Virtual Personal Assistants such as Siri, Cortana, and Alexa present themselves as submissive females, when most language-based technologies manifest glaring gender biases, when 78% of the experts developing AI systems are male, when sexist hate speech online is a widely recognised problem, and when many Western cultures and societies are increasingly recognising the significance of non-binary gender identities.

Speakers

Professor Alison Adam (Sheffield Hallam University)

Dr Heather Burnett (CNRS – Université Paris Diderot)

Dr Dirk Hovy (Bocconi University)

Dr Dong Nguyen (Alan Turing Institute, University of Utrecht)

Dr Ruth Page (University of Birmingham)

Dr Stefanie Ullmann (University of Cambridge)

The workshop is organised by Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.

Giving Voice to Digital Democracies explores the social impact of Artificially Intelligent Communications Technology – that is, AI systems that use speech recognition, speech synthesis, dialogue modelling, machine translation, natural language processing, and/or smart telecommunications as interfaces. Due to recent advances in machine learning, these technologies are already rapidly transforming our modern digital democracies. While they can certainly have a positive impact on society (e.g. by promoting free speech and political engagement), they also offer opportunities for distortion and deception. Unbalanced data sets can reinforce problematical social biases; automated Twitter bots can drastically increase the spread of malinformation and hate speech online; and the responses of automated Virtual Personal Assistants during conversations about sensitive topics (e.g. suicidal tendencies, religion, sexual identity) can have serious consequences.

Responding to these increasingly urgent concerns, this project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology and political theory to develop design objectives for the creation of AICT systems that are more ethical, trustworthy and transparent. These technologies will have the potential to affect more positively the kinds of social change that will shape modern digital democracies in the immediate future.

Please register for the workshop here.

Queries: Una Yeung (uy202@cam.ac.uk)

Image by metamorworks/Shutterstock.com