"I'm alive-ish."
This is what Microsoft's virtual assistant, Cortana, was programmed to say in 2014 when a user asked if it was alive. Fast-forward to today, and the public is grappling with the social and philosophical implications of artificial intelligence technologies like ChatGPT, which is now integrated into the Bing search engine. The advanced capabilities of these virtual assistants, particularly their ability not only to mimic but also to contribute, have caused some to worry that humans are turning too much over to machines. But history reveals that we aren't likely to pump the brakes in any significant way. In fact, looking at how people have interacted with robots in the past, we are more likely to welcome, collaborate with and even accommodate what James Vlahos refers to as "quasi-beings" going forward. And that could have implications, both good and bad, that we can't yet anticipate.
While it wasn't until the 2010s that virtual assistants like Siri, Cortana and Alexa achieved widespread adoption, ChatGPT's precursors trace all the way back to the 1960s. Between 1964 and 1966 at MIT, computer scientist Joseph Weizenbaum designed Eliza, a natural language processing program that could convincingly mimic short human conversations. In one famous application, the program imitated the back-and-forth between client and therapist. Eliza was a pattern-matcher that ran on scripts, but users swooned nonetheless. Eliza was available for students and colleagues to try, including Dr. Sherry Turkle, who has since spent her life studying the social effects of machines. Eliza ran on a mere 200 lines of code, yet was compelling enough that Weizenbaum's secretary asked him to leave the room so she could speak with it privately. Although Weizenbaum himself designed it as a parody of the doctor-patient relationship, users were keen to speak with Eliza, ascribing intelligence and compassion to it, even though its designer made clear that it had no such capacities.
In practice, Eliza was rigid, not intuitive: every new interaction pattern had to be programmed in by hand. The eagerness of users to ascribe life-like capacities to Eliza was an important finding, in direct contrast to what Weizenbaum had hoped to show. As Weizenbaum later wrote, "I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." In short, users breathed life and personality into a rudimentary chatbot that could neither learn nor generate. As Dr. Sherry Turkle explains, "We create robots in our own image, we connect with them easily, and then we become vulnerable to the emotional power of that connection." This tendency of humans to read emotions, intelligence and even consciousness into machines is now called the Eliza effect.
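To see how little machinery the Eliza effect requires, consider a toy sketch of Eliza-style pattern matching. This is a hypothetical reconstruction in Python, not Weizenbaum's original code (which was written in MAD-SLIP): a short script of keyword rules, the user's own words reflected back with pronouns swapped, and a canned deflection when nothing matches.

```python
import random
import re

# A tiny, hypothetical Eliza-style script: keyword patterns mapped to
# templated responses. "%s" is filled with the user's own words.
RULES = [
    (r"i need (.*)", ["Why do you need %s?", "Would it really help you to get %s?"]),
    (r"i am (.*)", ["How long have you been %s?", "Why do you think you are %s?"]),
    (r"my (.*)", ["Tell me more about your %s."]),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

# Swap first- and second-person words so the echoed fragment reads
# naturally, as Eliza's scripts did.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    text = utterance.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            # Echo the user's own words back inside a canned template.
            return random.choice(templates) % reflect(match.group(1))
    # No keyword matched: deflect with a content-free prompt.
    return random.choice(FALLBACKS)

print(respond("I am unhappy with my job"))
# e.g. "Why do you think you are unhappy with your job?"
```

A handful of rules like these, which is essentially all the original program consisted of, was enough to persuade users that they were being heard.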
In the intervening decades, many chatterbots (later shortened to "chatbot") rolled out, including one named Jabberwacky that was the first to incorporate voice rather than text interaction. Released in 1997, it is still available to try under its new name, Cleverbot. In the 21st century, the chatbot Mitsuku has had the most staying power. Marketed as a virtual friend, Mitsuku was launched in 2005 and has remained accessible ever since; recently, the bot was renamed Kuki. Millions of users interact with Kuki each month as a form of leisure and company, and some have been doing so for years. Even though Kuki has improved over time, enhancements to these older technologies were incremental, not the evolutionary leap made possible more recently by large language models.
Broader use of virtual assistants and chatbots began in the 2010s, when Siri, Cortana and Alexa arrived. These virtual assistants could replace typed searches for anything from a recipe to a weather forecast. According to the Pew Research Center, by 2017 nearly half of all Americans used digital voice assistants, mainly on their smartphones. Users can now look up from the screen, speak into the air and be "heard." Voice assistants also condition users to the constant company of a listening device, able to be "woken" at any moment, while smart speakers like Amazon's Echo offer a hub for the smart home, extending that constant monitoring to the household as well.
Consumers acclimated to these now-common assistants quickly, making space for them by changing human practices. For example, users of Siri learn how to phrase requests in ways that are easier for the AI to "hear" and fulfill. Approximately 40% of Americans use voice assistants, and while overall sales of smart speakers have started to level off, young adults are the most likely to rely on them. Once they become accustomed to voice as an interface, users can grow more impatient with typing.
Users also often push the outer limits of these quasi-beings' designs by seeking out interactions the virtual assistants were never built for, including declaring their love, proposing marriage, or chit-chatting about their days. These human cravings lay the groundwork for relations with chatbots that, thanks to the advances in machine learning that fuel them, seem more spontaneous, even more "social," than their predecessors.
ChatGPT, launched in late 2022 and built on the GPT series of language models that OpenAI first released in 2018, offers an even wider expanse of conversational and interactional capabilities, mainly due to what is possible via generative AI. Generative AI does not wait for new responses to be scripted. Instead, it digests the human world through vast bodies of existing text and language, trains on that data, and synthesizes responses in real time, generating novel material.
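The contrast with Eliza's hand-written rules can be glimpsed even with small open models. Below is a minimal sketch using the Hugging Face transformers library and the freely available GPT-2 model as a small stand-in for the far larger proprietary models behind ChatGPT; the library, model choice and parameters here are illustrative assumptions, not OpenAI's actual setup.

```python
# A minimal sketch of generative text completion with the open GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I am unhappy with my job because"
# Unlike Eliza, no rule for this sentence exists anywhere in the code:
# the continuation is synthesized from patterns learned during training.
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```

Nothing in this code anticipates the prompt, and each run can produce a different continuation. That open-endedness is precisely what separates generative systems from their scripted ancestors.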
ChatGPT is a giant leap from earlier text generators. Given its far superior simulation of human thinking, users are understandably fascinated, and sometimes a little freaked out, by its behavior and implications. In February 2023, Microsoft released a new version of its Bing search engine enhanced by technology from OpenAI, the creators of ChatGPT. This new Bing worked like a chatbot, and it raised eyebrows when the New York Times' Kevin Roose got some unsettling results in response to his queries.
Although Bing's chatbot has since been reined in for wider release, the wilder tendencies of its debut have added fuel to the growing attention to ChatGPT and its kin. In response to the quick succession of increasingly able chatbots, organizations and governments are wondering how to distinguish credible information from non-credible, how to entice humans to learn independently without relying on AI shortcuts, and where to draw the line between collaboration and human deskilling.
Situating these new entities in a lineage is especially important now that GPT-4, Google's Bard and Baidu's Wenxin Yiyan ("Ernie Bot") have all just been released. Chatbots are arriving so fast that scholars and users alike are scrambling to make sense of their limits and potential, but already one pattern is clear: the range and sophistication of communication in which ChatGPT and its kin can successfully participate means that AI will be more broadly deployed. Rather than H2H (human-to-human) interactions, users will likely become more adept at H-AI-H (human-to-artificial-intelligence-to-human) interfaces, in which we first practice or rehearse with machines rather than with one another. For example, companies like Replika and Anima currently offer romantic chatbots as an alternative (or supplement) to the awkward work of being intimate with other humans. Many users tout the superiority of H-AI romance over H2H romance, citing the work and inconvenience of a human partner or lover. Given the growing impatience with and disdain for other humans that earlier devices have inspired, ChatGPT and its progeny may once again downgrade how humans value our own thoughts, our own words, and our own ability to be curious and come to conclusions.
Writing about Eliza's reception, Jake Rossen explains that in the 1960s, "it was a tantalizing flirtation with machine intelligence. But Weizenbaum wasn't prepared for the consequences." As we now enter a stage of history in which quasi-beings are far more common and capacious, we remain deeply unprepared for the consequences, not only of their abilities but also of our tendency to generously welcome and accommodate them, sometimes to our own detriment.