A sideways look at economics
I am writing this in an airport lounge on my return from the second summit of the American Society for AI (ASFAI) that I have had the pleasure to attend: this one in Sonoma, California. The lounge experience on the return leg of an international journey always induces in me an odd frame of mind, unique to this environment. I feel an emotional release alongside a sense of tranquillity and reflection. Perhaps it’s a time when feelings that I’ve repressed while work had to be done come flooding through, while the adrenaline that has kept me going for the past few days drains out of my system. All in the strange, safe space that is an airport lounge: nothing is required of me. So I sit here and feel things — happy-sad, bittersweet, full and empty — and reflect.
My current reflections run like this.
I’ve been one of some 70 people who’ve spent the past two days together talking and thinking about artificial intelligence. The crowd is quite intimidating, and intimidated is not something I often feel nowadays. I’ve met a lot of people in my career, at all levels, and have just about got over the feeling. People are people. But this lot, well — it’s a throwback to when I was in my 30s and, as I perceived it then, in over my head.
Of course, it turns out that these people are also just people, once you get to know them. I expect that everyone enters those meetings with a little flutter of self-doubt, as they look around them and feel the imposter syndrome assert itself. And then, a few drinks, a few laughs later, and those feelings dissipate. And, after that, I start to appreciate the range of deep and different experience that each person brings to the group. There are C-suiters of huge multinationals, representatives from the military, members of Congress, partners in law firms, founders of tiny startups and of unicorns, venture capitalists and angel investors, eminent scientists, medical practitioners, musicians and academics. People at the top of the field in AI, in quantum computing, in cyber security, in foundation models and LLMs. And one economist: me. And we’re all just people. The moment when we all start to see that is when the tendrils of mutual understanding, coiled and protected at first, start to uncoil, reach out, interact with each other. That’s when the magic starts to happen, and we all leave with more knowledge and understanding than we had when we arrived. We learn. We think. We create. The group becomes more intelligent.
It’s curious that the conference theme was artificial intelligence, because precisely the same processes are in focus in that field. I don’t mean between humans: I mean between AIs.
There is a famous thought experiment proposed by the philosopher John Searle, known as ‘The Chinese room’. We are asked to imagine someone who speaks no Chinese put in a closed room furnished with a chair and table, a pencil and a pad of paper, and a book of instructions. Every so often, someone unseen pushes a slip of paper under the door, and the occupant picks it up and reads it. It has Chinese characters written on it, which our hero does not recognise or understand. She takes it to the table and consults the book of instructions, which indicates, either in English or diagrammatically, that when she sees those characters in that order, she should draw another, specified character or set of characters on a sheet of paper and pass it back under the door.
Viewed from outside the room, the occupant appears to be answering a series of questions accurately. The person outside the door could reasonably conclude that the occupant of the room was intelligent and fluent in Chinese. But, in fact, she has absolutely no understanding of the meaning of the communication in either direction.
From outside, the occupant would probably pass the ‘Turing test’, provided the book of instructions were sufficiently extensive, and especially if the responses were furnished quickly. In fact, in the thought experiment, the occupant is human, so passing the Turing test would be a true positive. But the point is that the tasks she carries out could be done just as well, and probably faster, by a machine.
This thought experiment is often taken as a slam dunk against the concept of artificial intelligence qualifying as intelligence at all. It’s a kind of intelligence, to be able to follow a set of instructions, which is what AI is fundamentally doing. But it’s surely not what we usually mean by the word. It’s syntactic, not semantic, intelligence: about how words or other symbols should be ordered to comply with the rules, not about what they ‘mean’.
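To make the ‘syntactic, not semantic’ point concrete, here is a minimal sketch of the room’s rule book as a lookup table. The phrases are my own illustrative choices, and a real rule book would have to be vastly larger, but the mechanism is the same: symbols in, symbols out, no meaning represented anywhere.

```python
# A toy Chinese room: the rule book is a lookup table from input
# symbol strings to output symbol strings. The occupant matches
# shapes and copies out replies without understanding either side.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm well, thanks."
    "今天星期几？": "今天是星期五。",  # "What day is it?" -> "It's Friday."
}

def occupant(slip: str) -> str:
    """Consult the rule book and return the prescribed characters."""
    return RULE_BOOK.get(slip, "？")  # an unmatched slip earns a blank shrug

for question in ("你好吗？", "今天星期几？"):
    print(question, "->", occupant(question))
```

Run it and the ‘room’ answers both questions correctly, despite there being nothing in the program that could be said to understand Chinese.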
However, the challenge to Searle’s diss of AI goes like this: the person in the room is just one part of a larger system that encompasses the person outside the room, the room itself, and the book of instructions. That system is intelligent and grasps the meaning of what is being communicated, even if the occupant of the room doesn’t. You would not expect a cell in the skin of your hand to understand the meaning of a gesture you make that includes that cell. Why does every cell have to understand? Why does any? The person in the room is like that cell: why is it important that they understand the meaning of what they do? Intelligence is not a property of individual cells (or occupants of weird, closed rooms). It is a property of systems.
Millions, probably billions, of individual AIs are now in existence, and their number is multiplying. Each one, perhaps, is like the occupant of the room, a syntactic but not a semantic intelligence. But they communicate with each other and with humans and, as a result — a point emphasised in a book written by one of the participants at the summit, Human + Machine by Paul Daugherty and James Wilson[1] — the system that encompasses both us and them is growing in intelligence.
This huge and growing system is becoming exponentially more intelligent even now, when the tendrils of mutual understanding between humans and AI, between humans and other humans, between AI and other AI, are still coiled and protected for the most part. We’re in that early phase of uncertainty and doubt. We’re not sure what to make of AI, we humans: not yet. Perhaps it’s a threat. Perhaps it has the real intelligence – we are imposters after all.
That might be the case if the next few years see the creation of an artificial general intelligence (AGI) — a system able to match or surpass human intelligence across essentially all applications. Some of the attendees at the summit believe there is a 40% likelihood of AGI within four years. As an economist, I’m used to 40% likelihoods. It’s the perfect number. It allows you to talk about the scenario for a while, but it doesn’t leave you on the hook because if it fails to transpire you can always say: “Well, that was the most likely outcome, as I said”; while if it does transpire you can say: “Look, I basically called it.”
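One way to see why 40% is such a comfortable forecast is to score it. Under the Brier score, a standard way of grading probability forecasts (my example, not one discussed at the summit), a 40% call is never badly wrong whichever way the event falls. A quick sketch:

```python
# Brier score: squared error between a probability forecast and the
# realised 0/1 outcome. Lower is better; always saying 50% scores 0.25.
def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

# Compare a confident sceptic, the summit's 40%, and a confident believer.
for p in (0.05, 0.40, 0.95):
    print(f"forecast {p:.2f}: {brier(p, 1):.4f} if AGI arrives, "
          f"{brier(p, 0):.4f} if it doesn't")
```

The 40% forecaster’s worst case is 0.36; the confident forecasters each risk roughly 0.90. Mid-range probabilities bound the embarrassment, which is rather the point.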
40% means “I don’t know, but I am worried.” That’s a fair description of my position, too. This is something we need to know a great deal more about, as we hurtle towards whatever future AI might bring. My hope is that the collective intelligence of the humans thinking about this problem set can grow rapidly enough at least to temper the possible worst effects of AI. I guess that’s part of the point of the ASFAI: facilitating the growth in human intelligence that is a property of the group rather than of any individual within it. Bring on the next summit!
Blue-sky thinking in the lounge at San Francisco airport.
[1] Daugherty, P., and Wilson, H. J., Human + Machine, Harvard Business Review Press, 2024.