
Expert Bites – Bill Byrne

Professor Bill Byrne, Information Engineering, University of Cambridge

Summary

By Hannah Baker

22 November 2019

Professor Bill Byrne’s research focuses on statistical modelling of speech and language, with his current interest in statistical machine translation. Bill is part of the ‘Giving Voice to Digital Democracies’ team, which, like the ‘Expertise Under Pressure’ project, is funded by the Humanities & Social Change International Foundation.

At the beginning of the session, Bill gave an overview of his background, speaking about his interest in information theory and signal processing before the digital revolution and the widespread use of computers. During this period, Bill developed a deep understanding of the mathematics underlying probability and statistics, building expertise in approaching speech recognition problems through probabilistic frameworks. Later in his career, he applied this expertise in statistical modelling to machine translation.

When machine translation research was emerging, Bill said, there was a tension between the engineers developing these models and linguists, as the statistical models did not necessarily adhere to the pure principles of language. A key example in use today, put forward by a member of the group, is ‘Google Translate’, where the input words are converted to numbers, matched, and then converted to words in the output language. A statistical modelling approach can translate well without any explicit knowledge of grammar or syntax, a clear departure from how linguists approach translation. An analogy was also offered during the discussion: planes do not fly the way birds do; they achieve the same function in a different way. This led to a discussion about whether a true understanding of linguistic theory is still needed for machine translation, as well as questions about artificial intelligence (AI) mimicking the brain and cognitive abilities.
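The word-matching idea described above can be illustrated with a deliberately simplified sketch: a toy co-occurrence table built from a handful of aligned sentence pairs, standing in for the vastly larger statistical models real systems learn from millions of examples. The corpus and function names here are invented for illustration and are not how Google Translate actually works.

```python
from collections import Counter, defaultdict

# A toy parallel corpus of (English, French) sentence pairs.
# Real statistical MT systems learn from millions of such pairs.
parallel = [
    ("the house", "la maison"),
    ("a house", "une maison"),
    ("the cat", "le chat"),
    ("a cat", "un chat"),
    ("the small cat", "le petit chat"),
    ("the small house", "la petite maison"),
]

# Count how often each source word co-occurs with each target word.
cooc = defaultdict(Counter)
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def translate_word(word):
    """Pick the target word that co-occurred most often with `word`."""
    if word not in cooc:
        return word  # pass unknown words through unchanged
    return cooc[word].most_common(1)[0][0]

print(translate_word("cat"))    # -> chat
print(translate_word("house"))  # -> maison
```

Note that the sketch never consults grammar or syntax: frequent co-occurrence alone pairs "cat" with "chat", which is the point of departure from linguistic approaches that the discussion highlighted.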

It was noted that speech and language processing is a fast-moving area of research that develops incrementally but is subject to jumps when new approaches, such as neural networks, emerge. It also became apparent that these statistical models could be applied beyond speech translation: a member of the group drew parallels with ‘Renaissance Technologies’, an American hedge fund specialising in systematic trading using quantitative models. This posed the question of what the drivers behind the development of speech and language processing have been. In response, Bill felt that in the USA the development of these technologies had been state-driven through government policy, and he noted a period when funding was cut, known as the ‘AI winter’. One of the reasons for Bill’s move from speech recognition to translation was the availability of funding, an indication of funding’s impact on technological development and research. Once speech translation technology was more mature, Bill commented, corporate organisations began to show more interest.

During the 1990s, a priority of state-driven funding was the development of open-source datasets, which we continue to see in use today. An issue with open-source data was raised in terms of its impact on expertise: non-experts, such as teenagers playing on computers at home, can use the data to develop a system without a deep understanding of the mathematics that underlies it. This was discussed alongside the ethical and political response to the use of open-source datasets and the anxiety around ‘black-box’ systems and their potential unintended consequences. However, Bill expressed support for open-source platforms, as they provide the transparency necessary for ongoing development.

 

Interview Questions

After the discussion took place, Bill was asked four questions about the understanding and changing nature of experts in speech and language processing. Below are his answers:

Considering your research and/or work in practice, what makes a good expert?

The field is vast, with a huge amount of work being done, so an expert needs the ability to determine what is truly novel and which lines of research are emerging. That ability needs to be exploited to develop a research agenda. As part of this, experts need to be able to make judgements about papers: why the researchers are doing what they are doing, and where the research fits in with everything else being done.

What are the pressures experts face in your field?

The pressures follow from the previous answer: there is simply so much work going on. The other thing to bear in mind is the traditional academic role. Apprenticeships and the pull from industry have played a huge part in changing the academic structure. People now develop expertise much more quickly than they used to, and it is harder to maintain stable research groups within academic institutions.

Have you observed any significant changes occurring in recent times in the ways experts operate? 

The interesting thing is the use of social media. Experts used to make their way and get noticed through the peer-review process. Now the influence of social media means that people share their research through these platforms to get their ideas out. It is not necessarily a bad thing, but it has forced research groups to rethink their communication strategies.

Do you envision any changes in the role of experts in the future?

Expertise is much easier to claim than it was in the past. There is so much information out there that people can read and understand it and quickly get up to speed. Researchers are more likely to move from field to field than they were previously. Hopefully there will still be a way for people to develop deep expertise as researchers become much more mobile between fields in the future.