The Future of Artificial Intelligence: Language, Ethics, Technology
A report of the event as well as videos of the talks can be found here.
This is the inaugural workshop of Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.
The workshop will bring together experts from politics, industry, and academia to consider the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions will focus on different aspects of the complex relationships between language, ethics, and technology. These issues are of particular relevance in an age when we talk to Virtual Personal Assistants such as Siri, Cortana, and Alexa ever more frequently, when the automated detection of offensive language is bringing free speech and censorship into direct conflict, and when there are serious ethical concerns about the social biases present in the training data used to build influential AICT systems.
Professor Emily Bender, University of Washington
Baroness Grender MBE, House of Lords Select Committee on AI
Dr Margaret Mitchell, Google
Dr Melanie Smallman, University College London, Alan Turing Institute
Dr Marcus Tomalin, University of Cambridge
Dr Adrian Weller, University of Cambridge, Alan Turing Institute, The Centre for Data Ethics and Innovation
Giving Voice to Digital Democracies explores the social impact of Artificially Intelligent Communications Technology – that is, AI systems that use speech recognition, speech synthesis, dialogue modelling, machine translation, natural language processing, and/or smart telecommunications as interfaces. Due to recent advances in machine learning, these technologies are already rapidly transforming our modern digital democracies. While they can certainly have a positive impact on society (e.g. by promoting free speech and political engagement), they also offer opportunities for distortion and deception. Unbalanced data sets can reinforce problematic social biases; automated Twitter bots can drastically increase the spread of malinformation and hate speech online; and the responses of automated Virtual Personal Assistants during conversations about sensitive topics (e.g. suicidal tendencies, religion, sexual identity) can have serious consequences.
Responding to these increasingly urgent concerns, this project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology, and political theory to develop design objectives for the creation of AICT systems that are more ethical, trustworthy, and transparent. Systems designed in this way have the potential to influence more positively the kinds of social change that will shape modern digital democracies in the immediate future.
Queries: Una Yeung (email@example.com)