An “echoborg” is a person whose speech (and in some cases, actions) are determined wholly or in part by artificial intelligence.
(Credit: Rik Lander and the I am Echoborg team)
A brief history of the echoborg
Professor Alex Gillespie and I coined the term “echoborg” in 2014 while designing experiments in which a person held an unscripted, in-person, face-to-face conversation with someone who was covertly repeating words received from an artificial conversational agent (e.g., a chat bot program).
Why would we explore such a scenario? The answer is that we wanted to understand how people perceive computer-generated “thought” under conditions wherein they fully believe themselves to be interacting with the words of a fellow human being. The echoborg creates and maintains the illusion that the person with whom one is interacting is an individual self-authoring spontaneous speech, as occurs in ordinary conversation.
Other methods for exploring computer-generated speech under conditions where a human interlocutor ostensibly believes they are speaking with a real person typically involve a research participant engaging with a conversational agent via an artificial interface of some sort (e.g., via a screen and text terminal) after having been primed to believe that they would be interacting with another human. These methods invariably fail to maintain the psychological illusion that one is interacting with a real person unless gimmicky rules and constraints (e.g., scripting) are placed around the conversational protocol. They fail for the simple reason that even the most advanced forms of conversational artificial intelligence (as of 2019) are simply terrible at non-domain-specific conversation. People immediately recognize that they are not interacting with a real person when chatting with a chat bot via a screen and keyboard.
In our experiments, research participants interacted in person with an echoborg for upwards of 20 minutes, without the imposition of a conversational script, and almost all reported afterwards that they believed their interlocutor to have been speaking self-authored words. Many described the conversations as strange, awkward, or difficult, but nothing about them seemed computer-generated or inhuman.
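The relay procedure described above can be sketched in a few lines of code. This is a hypothetical illustration, not the software used in the actual studies: the participant's utterance is typed to a conversational agent, and the agent's reply becomes the line covertly delivered (e.g., via a concealed earpiece) for the human shadower to repeat. The trivial keyword-matching agent here is a stand-in for whatever chat bot a real study would employ; all names (`RULES`, `agent_reply`, `relay_to_shadower`) are invented for this sketch.

```python
# Hypothetical sketch of one turn of the echoborg relay pipeline.
# A real deployment would swap agent_reply() for an actual chat bot
# and deliver its output to the shadower's earpiece.

RULES = [
    ("hello", "Hello. What would you like to talk about?"),
    ("weather", "I do not pay much attention to the weather."),
    ("?", "That is an interesting question. What do you think?"),
]
FALLBACK = "Tell me more about that."

def agent_reply(utterance: str) -> str:
    """Return the agent's reply to the participant's utterance
    (first matching keyword rule wins; otherwise a fallback)."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return FALLBACK

def relay_to_shadower(utterance: str) -> str:
    """One relay turn: the participant speaks, an operator types the
    utterance, and this returns the line the shadower will speak aloud."""
    return agent_reply(utterance)

if __name__ == "__main__":
    print(relay_to_shadower("Hello there"))
    print(relay_to_shadower("Why are we here?"))
```

The point of the sketch is that the shadower's interface (a real human body and voice) is independent of the agent's sophistication, which is precisely what the experiments exploit.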
Why is this finding significant? A few key reasons:
For one, it tells us that when our brains determine themselves to be in a human-human conversational context, they do so not on the basis of the particularities or sophistication of an interlocutor's syntax. What seems to matter far more is the context of the interaction and the psychological heuristics primed by that context. A rudimentary chat bot program can “pass” as human so long as it has the requisite interface (i.e., a real human body), just as a person capable only of producing gibberish is no less able to pass as human than someone who can speak in meaningful sentences. One disastrous consequence of the fetishization of the Turing Test among many technologists is the mistaken assumption that humanness can be reproduced simply by mimicking humanlike syntax.
The findings are also significant because they validate a new methodology for benchmarking artificial conversational agents. If the goal of certain technologists is to develop artificial conversational technology that truly mimics humans' capacity for non-domain-specific, highly contextualized, and spontaneous prose, and that is perceived as fully human, then the best test of that technology would take place under conditions that mimic the psychological context of actual human-human interaction when intersubjective demands are at their highest: face-to-face interaction. The “echoborg method” enables exactly this.
(A technical demonstration of the “echoborg method”)
In addition to our academic publications (below), our echoborg work has received attention from the popular science press in recent years, and was even demonstrated at the 2016 BBC World Changing Ideas Festival in Sydney, Australia. Shortly thereafter the echoborg work appeared on BBC’s Click television program.
Corti, K., & Gillespie, A. (2015). A truly human interface: interacting face-to-face with someone whose words are determined by a computer program. Frontiers in Psychology.
Corti, K., & Gillespie, A. (2015). Co-constructing intersubjectivity with artificial conversational agents: people are more likely to initiate repairs of misunderstandings with agents represented as human. Computers in Human Behavior.
Gillespie, A., & Corti, K. (2016). The body that speaks: recombining bodies and speech sources in unscripted face-to-face communication. Frontiers in Psychology.