Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’


When I ask Ted Chiang if he will sit down with me over lunch, his response — like the stories he writes — is succinct and precise: “I’d be happy to talk about the current moment in AI and how science fiction relates to it,” he writes back. “But I won’t talk about my personal life. If that’s OK with you, I’m available for lunch.”

It’s not Chiang’s personal life I’m interested in: it’s the worlds inside his head. The Chinese-American writer is one of the most lauded science-fiction writers of his generation, having won multiple major sci-fi awards for the mere 18 short stories he has written over 30-odd years. His novella Story of Your Life, about a linguist who learns to communicate with an alien species, was adapted into the Hollywood film Arrival.

Chiang’s score of stories bears the marks of his distinctive style: simplicity, scientific rigour and, above all, a startling originality. In one of his shortest stories, “What’s Expected of Us”, a device called a Predictor drives humanity insane. The gadget is like a car remote, consisting of a button and a green LED light. The light always flashes one second before you press the button. When people try to outsmart it, they find it impossible. The device demonstrates the absence of free will in this imagined world, and why humans nonetheless need to believe in it in order to survive. All in two-and-a-half pages.

We’ve agreed to meet at Mediterranean Kitchen, a no-frills restaurant in leafy Bellevue, Washington state, just across Lake Washington from Seattle, where Chiang has lived with his wife for many years. Chiang walks in diffidently, 55 years old, lean and spare, with an unlined face and grey-streaked hair that he wears pulled back in a long ponytail. He’s dressed in a white T-shirt and cream trousers. He is polite but never responds to a question immediately if he can help it.

“People are often surprised to learn I grew up on the East Coast,” he says. “There’s this cartoon by this cartoonist [John] Callahan that I always think of — it’s a little panel of the difference between New York and LA. And in New York, the person says, ‘fuck you’, but the thought bubble is ‘hi there!’ And in LA, the person says, ‘Hi there’, but the thought bubble is ‘fuck you!’” He promises me that isn’t what he’s currently thinking. “But I guess I’m quiet.”

I’ve come straight from San Francisco, where I visited world-leading artificial intelligence companies. On everyone’s mind was “generative” AI, a new type of software that can produce human-like prose and imagery in response to conversational queries. The Silicon Valley inventors of these new tools are grappling with the unprecedented philosophical challenges posed by a technology that can use human language.

These are themes with which readers of Chiang’s work will be familiar: the relationship between language and cognition, the implications of a superhuman intelligence, and ultimately, the shifting nature of our place in the world.

Before we have had a chance to order, the proprietor, who also doubles as the waiter, turns up with two steaming bowls of peppery red lentil soup. The flavours instantly awaken my taste buds: salty and pungent. As we dive in, Chiang, in his contemplative way, takes issue with my observation that his fictional worlds and the one we’re inhabiting are getting uncomfortably close together.

“The machines we have now, they’re not conscious,” he says. “When one person teaches another person, that is an interaction between consciousnesses.” Meanwhile, AI models are trained by adjusting so-called “weights”, the strength of connections between different variables in the model, in order to get a desired output. “It would be a real mistake to think that when you’re teaching a child, all you are doing is adjusting the weights in a network.”

Chiang’s main objection, a writerly one, is with the words we choose to describe all this. Anthropomorphic language such as “learn”, “understand” and “know”, along with personal pronouns such as “I”, which AI engineers and journalists project on to chatbots such as ChatGPT, creates an illusion. This hasty shorthand pushes all of us, he says — even those intimately familiar with how these systems work — towards seeing sparks of sentience in AI tools, where there are none.

“There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?’ And someone else said, ‘A poor choice of words in 1954’,” he says. “And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.”

So if he had to invent a term, what would it be? His answer is instant: applied statistics.

“It’s genuinely amazing that . . . these sorts of things can be extracted from a statistical analysis of a large body of text,” he says. But, in his view, that doesn’t make the tools intelligent. Applied statistics is a far more precise descriptor, “but no one wants to use that term, because it’s not as sexy”.
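What “applied statistics” means here can be sketched with a toy next-word predictor: a handful of lines of counting, with nothing resembling understanding (the tiny corpus and the names below are purely illustrative, not drawn from any real system):

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "large body of text" a real model is trained on.
corpus = "the cat sat on the mat the cat ate".split()

# Bigram statistics: how often does each word follow each other word?
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in the corpus, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

Real language models compute vastly richer statistics over longer token sequences, but the principle Chiang points to is the same: the output is whatever the counts make most likely.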


In The Lifecycle of Software Objects, Chiang’s 2010 novella, former zookeeper Ana takes a job at an AI company developing sentient digital beings (known as “digients”) to be sold as virtual pets. These machines, unlike the AI of today, are conscious but immature. The novella spools this thought experiment out over many years, examining the relationships between tech creators and their inventions as they develop, and also the philosophical questions spawned by the creation of a new type of intelligence. What sort of morals do they have? Who is responsible for them? Can they be left to make their own decisions? Somehow, in Chiang’s hands, the story also becomes an intimate portrait of parenthood and letting go.

I’m curious about the origins of his stories, which always seem to work on two levels: a single expansive scientific concept, such as quantum mechanics, AI or theoretical mathematics, pushed to its limits; and the nuances of ordinary human life: work, love and family.

We are interrupted by our food arriving in rapid succession: first, a meze platter to share, with a selection of dips such as smoky baba ganoush, spiced cauliflower and creamy labneh flecked with mint leaves, accompanied by olives and crudités of tomatoes and cucumber. There’s warm pita bread for dipping too.

“For me, ideas come and then oftentimes they go almost immediately afterwards. But sometimes an idea keeps coming back to me again and again, over a period of months or years,” Chiang says, tucking into the crispy cauliflower. “Then I start to suspect maybe this is something that I need to write a story about. Because for some reason this idea won’t leave me alone.”

Mediterranean Kitchen
103 Bellevue Way NE, Bellevue, WA 98004

Red lentil soup x2
Meze tray $20.95
Foul mudammas $14.95
Spanakopita $15.50
Baklava x2 $10
Total (incl tax and tip) $85.08

Before I’ve made much headway, Chiang’s foul mudammas, a slow-cooked stew of fava and garbanzo beans tossed in olive oil and lemon juice, and my spanakopita — filo pastry stuffed with feta and spinach — appear, both served with a mound of saffron rice and hummus on the side. I can almost hear the table groan. 

There are themes to which Chiang returns often: namely, the ways in which language shapes how we think and who we are; and the existence of free will.

In his 2019 story “Anxiety Is the Dizziness of Freedom”, people routinely open a portal to a parallel universe — a common trope of science fiction — and converse with their alternate selves. His initial idea was to write about what such a device would look like, and how that would work using quantum computers.

But the story also explored people’s changing sense of their own agency; how the weight of his characters’ decisions somehow vanished when their alter-egos acted differently. “I just started thinking more and more about that, and then that turned into a story that was sort of about free will.”

Although his stories embody complex concepts, Chiang has stuck to the short story form, which he points out is part of a long tradition in science fiction. He submitted his first short story to a magazine at the age of 15, inspired by the likes of Arthur C Clarke and Isaac Asimov. And while he firmly identifies with this tradition, rather than with literary or speculative fiction writers such as Margaret Atwood or Kazuo Ishiguro, his work somehow reaches across the boundaries of genre to an entirely new audience — all the way into Hollywood.

“I have to say that the fact that my work has reached readers who are not regular science-fiction readers has been a complete surprise to me. It was not something that I ever imagined,” Chiang says. Several literary agents told him his work would never cross over to mainstream audiences.

He writes, he says, because it is an imperative. He quotes writer Annie Dillard who said: “There’s something you find interesting, for a reason hard to explain. It is hard to explain because you have never read it on any page; there you begin. You were made and set here to give voice to this, your own astonishment.”

“It is interesting precisely because no one else has articulated it yet, and you want to,” says Chiang. “And so that’s what you do.”


Chiang suggests we walk off our lunch at the nearby Bellevue Downtown Park. I persuade him to stay just a while longer, to share some baklava. He disappears into the restaurant and brings them out himself on a small white plate, one square each that we eat in a single, delicious mouthful.

Given his fascination with the relationship between language and intelligence, I’m particularly curious about his views on AI writing, the type of text produced by the likes of ChatGPT. How, I ask, will machine-generated words change the type of writing we both do? For the first time in our conversation, I see a flash of irritation. “Do they write things that speak to people? I mean, has there been any ChatGPT-generated essay that actually spoke to people?” he says.

Chiang’s view is that large language models (or LLMs), the technology underlying chatbots such as ChatGPT and Google’s Bard, are useful mostly for producing the filler text that no one necessarily wants to read or write, the kind of tasks anthropologist David Graeber called “bullshit jobs”. AI-generated text is not delightful, but it could perhaps be useful in those areas, he concedes.

“But the fact that LLMs are able to do some of that — that’s not exactly a resounding endorsement of their abilities,” he says. “That’s more a statement about how much bullshit we are required to generate and deal with in our daily lives.”

Chiang outlined his thoughts in a viral essay in The New Yorker, published in February, titled “ChatGPT Is a Blurry JPEG of the Web”. He describes language models as blurred imitations of the text they were trained on, rearrangements of word sequences that obey the rules of grammar. Because the technology is reconstructing material that is slightly different to what already exists, it gives the impression of comprehension.

As he compares this to children learning language, I tell him about how my five-year-old has taken to inventing little one-line jokes, mostly puns, and testing them out on us. The anecdote makes him animated.

“Your daughter has heard jokes and found them funny. ChatGPT doesn’t find anything funny and it is not trying to be funny. There is a huge social component to what your daughter is doing,” he says.

Meanwhile ChatGPT isn’t “mentally rehearsing things in order to see if it can get a laugh out of you the next time you hang out together”. Chiang believes that language without the intention, emotion and purpose that humans bring to it becomes meaningless. “Language is a way of facilitating interactions with other beings. That is entirely different than the sort of next-token prediction, which is what we have [with AI tools] now.”

It’s a glorious day for a walk in the park, especially this verdant space with bright pink hydrangea bushes and expansive water features. We start off at a brisk pace, discussing why science fiction matters. Although he doesn’t write in order to incite, he sees how sci-fi could be a radicalising force. “Science fiction is about change, and helping people imagine the world is different than it is now,” he says.

It’s like what Mark Fisher, the British cultural critic and political theorist, once said. Chiang paraphrases: the role of emancipatory politics is to reveal that the things we are told are inevitable are in fact contingent. And the things that we are told are impossible are in fact achievable. “I think the same thing could be said about science fiction.”

Although Chiang doesn’t mix politics with his fiction, he does worry that AI is a “force multiplier” for capitalism. In an essay for BuzzFeed in 2017, he compared technologists to their supposedly superintelligent AI creations: entities that “[pursue] their goals with monomaniacal focus, oblivious to the possibility of negative consequences”.

His fear isn’t the doomsday scenario some researchers predict, in which AI takes over the world. He is far more worried about increasing inequality, exacerbated by technologies such as AI that concentrate power in the hands of a few.

By now, we’ve done a few laps of the park, and I begin to recognise some of the other walkers: a mother-and-daughter duo, a lady with a two-legged dog, and people sitting on benches, with books, magazines and ice-creams. I turn to Chiang, asking how he imagines the world will change when people routinely communicate with machines.

We walk in silence for a few minutes and then suddenly he asks me if I remember the Tom Hanks film Cast Away. On his island, Hanks’s character has a volleyball called Wilson, his only companion, whom he loves. “I think that that is a more useful way to think about these systems,” he tells me. “It doesn’t diminish what Tom Hanks’ character feels about Wilson, because Wilson provided genuine comfort to him. But the thing is that . . . he is projecting on to a volleyball. There’s no one else in there.”

He acknowledges why people may start to prefer speaking to AI systems rather than to one another. “I get it, interacting with people, it’s hard. It’s tough. It demands a lot, it is often unrewarding,” he says. But he feels that modern life has left people stranded on their own desert islands, leaving them yearning for companionship. “So now because of this, there is a market opportunity for volleyballs,” he says. “Social chatbots, they could provide comfort, real solace to people in the same way that Wilson provides.”

But ultimately, what makes our lives meaningful is the empathy and intent we get from human interactions — people responding to one another. With AI, he says: “It feels like there’s someone on the other end. But there isn’t.”

Madhumita Murgia is the FT’s artificial intelligence editor




