
Communication and chatbots: How AI is changing the world with words

This lecture is designed for undergraduate students on Masterclasses in Religious Studies (2020-21) at the University of Manchester.

Overview

  • Lecturer: Dr Scott Midson (Lecturer in Liberal Arts)
  • Interests: Philosophy and ethics of AI, social robots, human-computer interaction, theology and technology, theological anthropology
  • Office hours (via Zoom): Mon 10-11, Thurs 3-4
  • Zoom room: scottmidson (864 204 1742)
  • Email: scott.midson@manchester.ac.uk

Hello! Here's your lecturer for this week and Paro the Seal. Paro, like other robots, incorporates a form of artificial intelligence (AI) that he/she/it has in common with chatbots, another form of AI that we'll focus on in this session...

Welcome to this Religions & Theology Masterclass! During this session, I'll be introducing you to some of my research into human-robot interaction (HRI, which is related to human-computer interaction, or HCI) and its importance for theological, philosophical, and ethical reflection.

By the end of the session, you will have had the chance to find out more about, and reflect on, the following key points:

  • What a chatbot is and how it works
  • Different contexts in which we can find chatbots
  • Why chatbots and their 'artificial speech' matter
  • Critiques of chatbots (both 'positive' and 'negative')

How to work through this lecture

The lecture is designed to be worked through at your own pace, in one go or in smaller chunks.

The lecture includes a range of videos, readings, and other activities, including questions to prompt reflection. These are highlighted as embedded videos for you to watch and buttons that take you to external sites. The activities should take no more than 2 hours to work through. Additional resources that are entirely optional (and so not part of the 2 hours of core content) are given as in-text hyperlinks.

This is a 0-credit course, though, so if you would prefer to skip ahead to focus on certain parts of the lecture, you can do so using the buttons below.

What is a chatbot?

Google definition of a chatbot.

According to Google, a chatbot is "a computer program designed to simulate conversation with human users, especially over the internet."

Wikipedia expands on this definition by noting that "a chatbot is a type of software that can automate conversations and interact with people through messaging platforms."

Chatbots, then, are computer-based forms of communication: a software program parses input text from a human user and provides rule-based responses. We'll see more of this computational side of things shortly, when we briefly explore how chatbots work.

What's important to note here is that the term 'chatbot' is broad and encapsulates a wide range of software programs. Some chatbots operate at a very basic level of programming and only work if you input a certain word. Others are more sophisticated, with more detailed code that enables them to give more nuanced and credible responses. We might say here that chatbots are a form of 'artificial intelligence', although, as we will see, that is a controversial term.
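To make the 'basic' end of this spectrum concrete, here is a minimal sketch in Python (my own illustration rather than any real product's code; the keywords and replies are invented) of the kind of keyword-matching rule a simple chatbot follows:

```python
# A minimal keyword-matching chatbot: the 'basic' end of the spectrum.
# The rules and replies below are invented purely for illustration.

RULES = {
    "hello": "Hello! How can I help you today?",
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "bye": "Goodbye! Thanks for chatting.",
}

FALLBACK = "Sorry, I don't understand. Could you rephrase that?"

def respond(user_input: str) -> str:
    """Return the reply for the first keyword found in the input."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK

print(respond("Hello there!"))                  # matches 'hello'
print(respond("What are your opening hours?"))  # matches 'opening hours'
print(respond("Can you write me a poem?"))      # no keyword: fallback
```

Notice that this program only 'works' if the input happens to contain one of its keywords; everything else triggers the fallback. More sophisticated chatbots replace this lookup table with layered scripts or statistical language models.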

The video below casts a little bit more light on these different types of chatbots.

One way of beginning to reflect on these different types of chatbot programming and what they say about principles of communication is to ask, can conversation be automated?

When you're talking to friends, colleagues, workers, or strangers, might it be said that your conversation is fluid, or does it follow a set of norms or rules? Can conversation be considered as both fluid and rule-based?

A key question that emerges from these points is, how (successfully) do chatbots model human communication?

A brief history of chatbots

This timeline shows how chatbots are now used in a range of contexts, and how they have developed from their foundations in Turing tests and ELIZA, both of which are explored in more detail later in this lecture.

These images (above and below) provide a brief history of some of the main developments in the history of chatbots, some of which we will consider in more detail in this session.

How many of the examples below have you come across or heard of before? What is your knowledge or experience of them?

An infographic to highlight some key historical and contemporary examples of chatbots.

What do you think are the primary uses or functions of chatbots? Add your thoughts to your notes, and consider this question as you read through the resource below.

Having now familiarised ourselves with the basics of chatbots, we will next delve deeper into some of the contexts, concepts, and critiques that surround their development and uses.

ELIZA: The world's first chatbot

Image of ELIZA's user interface. Source: http://heiko-angermann.com/comparing-five-recent-chatbot-applications-against-eliza/

As you'll have seen from the timeline and brief history above, the world's first chatbot was designed in 1966 by Joseph Weizenbaum, a computer scientist based at MIT. This chatbot, 'ELIZA', is an important case study in human-computer interaction (HCI).

Use the button below to interact with software that has been designed to emulate ELIZA's original code. Consider the following questions throughout your interaction:

  1. What type of chatbot is ELIZA? How can you tell?
  2. Is a conversation with ELIZA similar to, or different from, a conversation with a human? In what ways?
  3. What do you think are the strengths and weaknesses of ELIZA?

Now watch the following short video about ELIZA and add some of the principles of ELIZA's programming to your notes and reflections from the exercise above.

The following video illustrates something more of how ELIZA works.

For more information about how ELIZA works, click here.

One of the important issues that ELIZA raises concerns a possible distinction between knowledge and understanding. Knowledge corresponds with the ability to process information and to follow a script in the way that ELIZA does; yet the chatbot is unable to understand the meaning or significance of the keywords it detects in accordance with its script and lexicon.
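To illustrate this gap, here is a simplified, hypothetical reconstruction in Python (not Weizenbaum's original code) of an ELIZA-style rule: the program decomposes the input around a keyword pattern and reassembles a fragment of it into a scripted reply, without any grasp of what the words mean:

```python
import re

# A simplified, hypothetical ELIZA-style script: each rule pairs a
# decomposition pattern with a reassembly template. Real ELIZA scripts
# rank many keywords and carry several reassembly rules per keyword.

RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_respond(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            # The captured fragment is echoed back verbatim:
            # no understanding of its meaning is required.
            return template.format(match.group(1))
    return "Please go on."  # a content-free prompt, in ELIZA's style

print(eliza_respond("I am unhappy about my job."))
# -> "How long have you been unhappy about my job?"
# (Note the clumsy echo of 'my': the real ELIZA also swapped pronouns,
#  e.g. 'my' -> 'your', to make replies read more naturally.)
```

The program 'knows' which rule to fire, but it understands nothing of unhappiness or jobs; this is precisely the gap that Searle's Chinese Room (below) dramatises.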

John Searle's thought experiment of the Chinese Room (1980) highlights this inability of computers and chatbots to understand their inputs and outputs. Watch the video below and consider why this might be important to note when evaluating and critiquing AIs like ELIZA.

Responses to ELIZA

Would you mind leaving the room please?

Weizenbaum noticed that participants in his study had an interesting reaction to his chatbot. These are epitomised by an anecdote about how his secretary responded to ELIZA, which the clip below documents.

After having watched the clip, consider the following questions:

  1. What is the significance of the language used by ELIZA? (You might find it helpful to refer to your notes from the previous videos)
  2. What language is used to refer to ELIZA by different people in the clip (including the narrator)?
Joseph Weizenbaum, programmer and critic of ELIZA (as a model of HCI).

Now watch this short video to find out more about Weizenbaum's response to others' responses to ELIZA, and to see how his observations impacted his own views on AI and HRI. While watching the video, continue to add your thoughts to the questions above to help you think about the ethics and implications of ELIZA as a chatbot used in certain settings.

After having watched the video, use the button below to access an excerpt of the paper where Weizenbaum presents and discusses ELIZA. If you haven't already done so, read the sections of the text that are not crossed out. Consider how the text expresses Weizenbaum's thoughts about ELIZA and any significant passages, and add your annotations to the document.

  • Reference: Weizenbaum, Joseph (1966), 'ELIZA—a computer program for the study of natural language communication between man and machine', Communications of the ACM, 9(1), pp. 36-45.

Finally, to conclude this section, reflect on the following questions about Weizenbaum and ELIZA:

  1. Why might ELIZA have been so successful?
  2. Do you agree or disagree with Weizenbaum's concerns?

Other examples of chatbots

Watch the video below to see some of the contemporary uses and examples of chatbots.

Here are some of the links suggested in the video:

Optional: read this article that compares ELIZA with more recent examples of chatbots.

Ethical issues

Various commentators have put forward different critiques of chatbots, some of which we have begun to explore throughout this session. In this part of the lecture, I'll spotlight two of the main issues that have been raised about chatbots, which will help us to reflect on whether and how chatbots are 'changing the world with words' in the final section.

Deception

One of the charges often levelled against chatbots and other AIs is that of deception. The criticism is that users' expectations of AI capabilities are mismanaged or deliberately manipulated in order to 'sell' the appeal and convincingness of the AI product.

The charge of deception can be traced back to the classic model of human-computer interaction (HCI), the 'Turing Test'. This was put forward by Alan Turing in 1950 as a principle for measuring 'machine intelligence', or what we might now refer to as AI. In the video below, which is taken from a different lecture that I prepared, I introduce the notion of a Turing Test (watch 24:55 - 30:42).

So, for Turing, for a machine to communicate effectively with us it should present itself, or at least be judged, as having a human level of intelligence. For those who claim such machines and AIs are deceptive, these presentations or judgements are misleading and potentially harmful to users.
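As a rough sketch of the test's structure (a toy illustration of my own; the respondents and their canned replies are invented), the set-up can be thought of as a blind judging loop:

```python
import random

# A toy sketch of the Turing Test ('imitation game') set-up: a judge
# converses blindly with two respondents and must decide which is the
# machine. Both respondents here are invented stand-ins.

def human_respondent(question: str) -> str:
    return "Honestly, I'm not sure. What do you think?"

def machine_respondent(question: str) -> str:
    return "Honestly, I'm not sure. What do you think?"  # mimics the human

def run_round(question: str) -> dict:
    players = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(players)              # hide who sits behind each label
    assignment = dict(zip("AB", players))
    print(f"Judge: {question}")
    for label, (_, respond) in assignment.items():
        print(f"{label}: {respond(question)}")
    return {label: kind for label, (kind, _) in assignment.items()}

# The machine 'passes' if, over many such rounds, the judge's guesses
# about which label hides the machine are no better than chance.
identities = run_round("Do you ever dream?")
```

Crucially, the test measures only the judge's perception of the conversation, not anything 'inside' the machine, and that is exactly where critics locate the potential for deception.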

Noel Sharkey is one of the foremost critics of what he terms 'showbots' that do precisely this work of deception for users:

For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people's lives, from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care. It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy.

On the other hand, critics such as Maciej Musiał note that the enchantment of machines in the way that Sharkey describes is not the work of deception but rather a projection of our own tendency to anthropomorphise: to personify machines, and to communicate with or relate to them in some meaningful way.

Fritz Heider and Marianne Simmel, in a famous psychological experiment from 1944, demonstrated how humans have a tendency to anthropomorphise a huge range of phenomena in order to make sense of the world. This even includes animated shapes, such as those in the video below:

What do you think when watching the video? If you find yourself reading a narrative into the movements of the shapes, then your response supports Heider and Simmel's hypothesis that we can't seem to stop ourselves from anthropomorphising things.

What, then, does that mean for chatbots?

  • Do you think chatbots deceive users? Do users (necessarily) anthropomorphise them? Can, and should, this be avoided?

Human[e]ness

Consider the following quotation, taken from Hendrik Kempt's book Chatbots and the Domestication of AI: A Relational Approach. Do you think chatbots, in light of the above examples and discussions, bring out, or hinder, our expressions of humanness?

Many children treat their plastic pets with the same care and empathy they would treat a living one. It seems that there are ways of relating to machines in ways unknown, unfamiliar, and possibly uncomfortable to us due to preconceived notions not only of what technology can do, but also of what relationships should entail.

Other theorists, like Sherry Turkle, express much caution about the ways that AI might be leading us to humanise our machines while dehumanising ourselves.

  • What do you think Turkle might mean by this? Do you agree with her?

The question of whether AI is related to humanness or dehumanising trends is an open yet important one, and it calls to mind deep questions about how we see ourselves and the value of humanness, and what types of relationships we seek with our machines.

Optional: Can you tell the difference between humans and chatbots? This news story about Amazon workers on Twitter highlights the blurred lines between the two and why this matters for our reflections on humanness.

Theological issues

Watch the clip below, which is a trailer for the documentary Plug and Pray (2010). In the trailer, you will see Joseph Weizenbaum among others, including (in order of appearance) Ray Kurzweil, Minoru Asada, Hiroshi Ishiguro, and Giorgio Metta, all of whom share their hopes and fears about AI.

Consider the following questions:

  1. How is theological language used to express our self-understandings?
  2. How, if at all, do AI and computing change the meaning or connotations of such theological language?

For further enquiry, see this interactive video that I produced, which introduces and explores the relationship between religion and robots.

Changing the world with artificial words?

Do you think the language used to program AI and/or the words uttered by AI have the power to change the world?

Below I round up three of the main issues that we have touched on in this session that can help us to reflect on HCI and developments and uses of chatbots. You can also add your own reflections using the Padlet below.

Created By
Scott Midson

Credits:

Created with images by geralt - "artificial intelligence robot ai" • geralt - "hand robot human" • ergoneon - "robot toy grey" • geralt - "businessman tablet head" • geralt - "hacker attack mask"
