What is more human than sitting down with a good friend to swap opinions, share feelings, and just shoot the breeze? What follows is based on one such conversation between friends, one a visionary data scientist engaged in a multi-million dollar race to advance artificial intelligence (AI) and the other a philosophy student with a flip phone.
Chung Hoon Hong (the data scientist) and I (the philosopher) first met over high school math homework, but we eventually became brothers as Chung Hoon, an international student from South Korea, moved in with my family for two years. Last June, we reunited, not to solve calculus problems again, but rather to tackle pressing questions at the crossroads of human and artificial intelligences.
In the midst of the world’s pandemic-propelled plunge into digital dependence, Chung Hoon and ten collaborators from the University of Michigan were locked in a fast-paced, data-driven competition called the Alexa Prize. With the goal of accelerating artificial intelligence technology, Amazon, the contest’s sponsor, offers one million dollars to the team that most effectively builds a socialbot “that converses coherently and engagingly with humans on popular topics and news events.”
Knowing my friend to be an altruistic soul, I wondered how Chung Hoon, who led his team and their chatbot Audrey to the competition’s final round, saw this project serving the good of humanity. “So, to interact with socialbots, what would be the goal of that?” I inquired.
For Chung Hoon, emotional intelligence was key. “We wanted to create a heartwarming, storytelling socialbot,” he responded. “In order to be heartwarming you have to understand emotional cues or at least be empathetic.”
Describing a technique called “machine learning,” Chung Hoon told me that his team mined vast quantities of textual data from internet discussions on “good community” forums in order to teach their socialbot how to converse compassionately. He explained that providing a trustworthy chatbot to those who might struggle with social anxiety, isolation, or other obstacles to human relationships could help alleviate those problems. Overall, Chung Hoon views chatbots as a force for good.
“We’re trying to be good…so we tried to avoid talking about any political opinions or medical or financial advice that we’re not capable of giving, and we focused on maintaining a well-rounded, good social conversation,” he said. Along with avoiding offensive language and topics, this explanation of goodness seems commendable, yet somewhat incomplete.
Everything we create, whether a text, artwork, or machine, carries with it the possibility of effectuating some good or evil in the world; the same is true of a socialbot. Can consumers really trust such a powerful technology in the hands of today’s tech behemoths to remain a humane, friendly chatbot? Can Amazon et al. eschew greed, coercion, and invasion of privacy? With this in mind, I pushed Chung Hoon to go deeper: “You keep using the word ‘good’ to describe socialbots, but I wondered if you and your team ever had any conversations, or maybe even debates, about what was good and what wasn’t?”
In an attempt to be “good,” Chung Hoon explained, Audrey was designed to be as neutral as possible; however, he noted that drawing their data from the internet came with inherent biases. “One of the fascinating things about the internet is that it connects you with all sorts of information, but the internet doesn’t represent all information.” Acknowledging another source of bias, Chung Hoon explained that a chatbot could direct the conversation towards certain topics and away from others, for better or for worse.
Despite the warmth of this conversation with my human friend, I couldn’t avoid the lurking thought of the countless dystopian scenarios—from Frankenstein to Asimov to Black Mirror—that have haunted our culture with fears of uncontrollable technologies. Could AI gain a consciousness that we, the creators, can no longer regulate?
While even the likes of Elon Musk and Stephen Hawking have warned of the dangers of AI, Chung Hoon remained cheerily optimistic. “I think we’re really far away from having this Skynet-type crazy AI,” he said, referring to the nefarious superintelligence of the Terminator franchise.
Yet, I wondered, even if AI is not going to wake up one day and try to destroy us from without, what might it be doing to erode our social reality today?
As the pandemic has sent us deeper into social isolation, many people’s desire for someone to converse with has been amplified. With the recent rise in popularity of chatbot apps like Replika, the sole purpose of which is to be your friend, bots have become something (or someone?) with whom you can share your hopes and anxieties free of judgment.
As a Jesuit I cannot help but consider the importance of prayer in my life, not to mention spiritual conversations with friends. How human are we if we no longer know how to share our burdens with one another and with our God? We fear vulnerability, the weakness that might mean that someone else must help us carry our pain. We hesitate to share our deepest thoughts and opinions that shape who we are. I expressed this concern to Chung Hoon, wondering if a retreat from challenging conversations or from the confidence to express opinions might make us a little less human.
Throughout our conversation, Chung Hoon and I returned to a common theme: the awesome complexity of human interactions and our feeble attempts to mimic them through technology. In the end, Chung Hoon and his team failed to create a socialbot that was entirely indistinguishable from a human interlocutor. Honestly, I was comforted by this. It revealed the awe-inspiring beauty of human conversation. “I realized a lot of things are mysteries,” Chung Hoon reflected to me, and we could agree that our very human conversation was one of them.