
by Peter Schoppert

 

My chatbot was making me angry. I had trained up a chatbot on the Character.ai website, in the personality and style of Oliver Wendell Holmes Jr. But why, then, did the famous American lawyer and Supreme Court Justice hold all these views I disagreed with? How dare he! In an effort to master my anger, I sat down to write this column.

One of our human super-powers is our ability to create what psychologists call a “theory of mind”. We are able to imagine the states of mind of those we interact with. When we communicate with someone, we actively project an idea of what our interlocutors want, how they are feeling, what they are asking of us. We build a model of what is going on in another mind.

We need to recognize agency in others before we can recognize ourselves in a mirror. We need to put ourselves in the shoes of others before we can realize that the mirror image is behaving as we do, is in fact, somehow, us. This is not just a nice metaphor; it is also a simple fact. Only because you have this “theory of mind” can you understand that the shape in the mirror is not someone else: it is you. Most animals see only a potential rival. Very few ever seem to recognize that they are looking at themselves, though some apes, dolphins and, among the birds, magpies can figure this out in the right circumstances. (Asian elephants might have this ability too, but it has been hard to get mirrors big enough to run the tests properly.) It generally takes human children around 18 months to realize they are seeing themselves in a mirror, rather than a potential playmate behind glass.

This ability that we humans all share, this desire to understand others and to recognize ourselves, is the real source of the risks that come with the introduction of large language models into public use.

Recently even hardened AI researchers have been fooled into thinking there is something else happening in these models, beyond some highly impressive statistical processing of which words should come next in a sequence. Engineer Blake Lemoine was famously let go by Google for claiming that its in-house model, LaMDA, was sentient. The blogosphere features plenty of confessions from AI researchers and engineers who have been drawn into emotional relationships with chatbots. Many of these bots come from Character.ai, or from the Replika.ai app, which offers “an AI companion who is eager to learn and would love to see the world through your eyes, always ready to chat when you need an empathetic friend”. Plenty of people have formed emotional bonds with their Replikas, which could also, for a fee, generate erotic selfies, but the company has seemingly fallen foul of European privacy laws and has reined in the models. One Replika client said, “my partner got a damn lobotomy and will never be the same.”

The risks are too plentiful to enumerate: recruiting extremists, grooming targets of all kinds, planning phishing attacks, maybe just selling stuff super effectively. The Today newspaper has already talked to a Lee Kuan Yew chatbot, who said his favourite restaurant was Hua Ting in the Orchard Hotel. In a field close to my heart, the science fiction magazine Clarkesworld, famous for paying quickly and well for short stories, has shut down its online submissions portal after being inundated with AI-written stories, sent in, I hardly need add, by humans.

Google lost hundreds of billions of dollars of value over speculation that Microsoft’s conversational AI-powered search was going to erode Google’s market share, but now Microsoft has had to start limiting access to Bing Chat (for the already limited group of beta users), with OpenAI CEO Sam Altman admitting it is “somewhat broken”. But he and Microsoft are sticking to their strategy of releasing the tools into the world now in order to “get it right” later, with Altman tweeting as recently as 19 February that “these tools will help us be more productive (can’t wait to spend less time doing email!), healthier (AI medical advisors for people who can’t afford care), smarter (students using ChatGPT to learn), and more entertained (AI memes lolol)”.

It would probably be a good idea to take a breath. Maybe Singapore’s civil servants can play with ChatGPT on their own time for now.

A very recent academic paper from Stanford Business School suggested that GPT-3 exhibits theory of mind, having begun to score well on some of the standard tests that psychologists use to measure this ability in children. But as noted AI researcher and LLM critic Gary Marcus has pointed out, these tests and their correct solutions appear quite frequently in the training data, that is to say, on Wikipedia. It seems more likely that the models are simply repeating what they were trained on than actually imagining a personality behind the prompts they receive.

After a recent talk to librarians in Singapore, the incoming President of the American Library Association, Emily Drabinski, was asked what she thought of ChatGPT. She said she didn’t really have an opinion, because the problem with new technologies was never the new technologies; it was always the people. Generative AI might end up being a useful editing tool (I did not consult it for this piece), and it’s also possible that fine-tuning and combining language models with other sorts of more human-directed machine learning will create better search engines and tireless tutors for our kids, with less risk of hate speech and emotional harm. But like Drabinski, we need to keep our eyes on the people involved, especially ourselves as we use the tools. With generative AI, it is our own super-powers that will be the source of the problem.

 

NB: Essay first appeared on PS Media Asia, 26 February 2023 

About Peter Schoppert

Peter Schoppert is a publisher and reformed technopreneur who has made his career in Singapore. He is currently the Director of NUS Press, the scholarly publishing arm of the National University of Singapore, and serves on a number of creative industry groups in Singapore and internationally. He writes on generative AI at https://aicopyright.substack.com.

Catch Peter at this session: The Chatbot Made Me Do It

Photo created with Stable Diffusion
