For a long time, human beings have been disturbed by the possibility that their actions might be far more predictable than they would like to think they are. Philosophers who have taken a naturalistic view of human nature, placing us squarely within the realm of physical cause and effect, have fuelled these worries. For example, David Hume argued that “the conjunction between motives and voluntary actions is as regular and uniform, as that between the cause and effect in any part of nature.”
Today it is an artefact, not an argument, that is stoking this perennial concern. New-generation AI systems are creating words and images that seem uncannily like those that specific individuals would create, in style and content. Somewhat predictably, I found myself wondering how predictable my own thoughts and words are. So I asked ChatGPT to “Write an op-ed column by Julian Baggini on whether there is too much hype about ChatGPT.” Here it is, with my annotations.1
As a philosopher and writer, I have been closely following the development of artificial intelligence for many years.2 One of the latest innovations in this field is ChatGPT, a large language model developed by OpenAI that can generate human-like text responses to a wide range of prompts.
While there is no doubt that ChatGPT represents a significant breakthrough in the field of AI, I cannot help but wonder whether the hype surrounding this technology is justified.3
First, it is important to acknowledge the impressive achievements of ChatGPT.4 This model can answer complex questions, generate creative writing, and even engage in entertaining conversations with human users. Its ability to understand and respond to natural language is truly remarkable, and it is undoubtedly a powerful tool that can be used for a wide range of applications.
However, I believe that the hype surrounding ChatGPT has created unrealistic expectations about the capabilities of this technology. Many people seem to view it as a kind of magical oracle that can provide perfect answers to any question, or even as a potential replacement for human writers, journalists, and other creative professionals.
In reality, ChatGPT is far from perfect. Like any AI model, it has its limitations and biases.5 Its responses can be inconsistent, and it may struggle to understand certain types of language or cultural references. It also lacks the nuanced understanding of human emotions and experiences that is necessary for truly empathetic communication.
Furthermore, while ChatGPT may be useful for certain tasks, it is not a substitute for human creativity, intuition, or critical thinking. Its responses are based on patterns and data, and it lacks the ability to generate truly original ideas or insights.6
In short, while ChatGPT is undoubtedly an impressive achievement, it is not the panacea that some people seem to believe it is. We should be cautious about placing too much faith in this technology, and instead focus on using it as a tool to augment human intelligence and creativity, rather than replacing it altogether.
Ultimately, the value of ChatGPT and other AI models will depend on how we choose to use them.7 If we approach these technologies with a clear-eyed understanding of their limitations and potential, and use them to enhance human capabilities rather than replace them, then they may indeed prove to be a valuable asset in our ongoing quest for knowledge and understanding.
1 Before reading my responses, I’d be interested to know whether you think this reads like something I would actually have written. If you think this is as good as one of my usual newsletters, it may be time to unsubscribe and simply ask ChatGPT to generate a Julian Baggini column on whatever topic interests you. (On the other hand, if you did that you would never read anything on a topic that you hadn’t been thinking about, but which I had…)
2 ChatGPT gets off to a poor start. I can’t say for sure that I have never used the phrase “As a philosopher and writer”, but I don’t tend to announce myself as a philosopher. ChatGPT works by processing huge amounts of data and I suspect the corpus of my works online just isn’t big enough for it to be able to mimic my voice and style. Or maybe I just don’t have one.
3 ChatGPT gets a tick here. Typically, I write about things when I think that the received wisdom is lacking in some way. I also tend to have a “deflationary” point of view, cutting claims and ideas down to size, resisting master theories of everything or claims that events are epoch-changing. So if I were to write a column on ChatGPT, I probably would start by questioning the hype.
4 This for me is the most impressive move ChatGPT makes. Whenever I make a case, I try to give the strongest account I can of the case against. And I often start with that. So having me begin by listing what is undeniably impressive about ChatGPT seems to me to be an authentically Baggini move.
5 Here I hope we are beginning to see the limitations that, uncannily, ChatGPT said I would say it had. It seems to me that the general points being made are the kinds of points I would make. But here they seem generic, the kind of thing everyone is saying. If I write about something, I am always trying to find something to say which isn’t just the same old same old, even if the general shape of my point of view is not an entirely new one. These read like notes I might have drafted, but ultimately elaborated on or rejected as too banal. It’s Baggini-lite, or me on auto-pilot, writing in a hurry.
6 Finally, I can point to something that I think definitively reveals the difference between me and simulated me. I would not have stopped with the bold assertion that ChatGPT “lacks the ability to generate truly original ideas or insights.” That may be true today, but the technology is evolving so quickly that it would be a hostage to fortune to simply make this point and move on. The AI has stopped at the point where I think I would just have got started. What if (when?) programmes like this can have original thoughts, show emotional nuance and so on? To say we have nothing to worry about because it can’t do these things yet is complacent and uninteresting.
7 Another tick for the AI. I think I probably would make the point that how we use it is a key issue. But, again, I would hope that I had more to say than just this.
Overall, the perhaps self-serving impression I get is that the reason ChatGPT can’t write as good a column as the one I would have done is that it can do no more than reheat what I have already said. There are people (no names, no pack drill) who do seem to write as though they were AI programmes, generating words on the basis of in-built algorithms that simply rearrange old ideas in new orders. But when a serious thinker sits down to write, they always push their thinking further. Working out what we believe should not be a simple matter of consulting our pre-existing beliefs but thinking things through, moving our thought forward.
It seems to me that the reason why human thinking – at its best at least – still beats AI is that there is no algorithm for it. (If you want a deep-dive into how ChatGPT actually works, take a look at this highly informed piece recommended to me, but which I have not yet read.)
I’m guessing that’s one reason it is so bad at humour. John Crace’s parody digest of my book The Virtues of the Table is funny not because it accurately mimics me but because it identifies very specific elements that are ripe for exaggeration or mockery. Ask ChatGPT to generate a joke about philosophy that has never been told, however, and you’re more likely to end up scratching your head than splitting your sides.
Why did the philosopher refuse to believe in the existence of chairs?
Because he thought they were just abstract concepts, and he refused to sit down and accept their material reality.
That does not mean, however, that we can be confident no artificial intelligence could ever write a better Baggini column than I can, or compose a tenth symphony even better than the nine Beethoven wrote. The only way to assuage the fear is to take away the threat assumed to be in its object. This is what Hume did. He argued not only that human actions are as subject to rules of cause and effect as anything else in nature, and therefore in principle just as predictable, but that “this regular conjunction has been universally acknowledged among mankind, and has never been the subject of dispute, either in philosophy or common life.”
The claim of universal assent sounds strange. But what Hume meant was that our attitudes and behaviours towards others reveal that, in practice, we take people to be reliably predictable, even though we cannot actually predict their words and actions with complete accuracy. When we do behave in surprising ways, there are reasons for this which those who know us better may discern. “The most irregular and unexpected resolutions of men may frequently be accounted for by those, who know every particular circumstance of their character and situation. A person of an obliging disposition gives a peevish answer: But he has the toothache, or has not dined.” Hence “The internal principles and motives may operate in a uniform manner, notwithstanding these seeming irregularities; in the same manner as the winds, rain, clouds, and other variations of the weather are supposed to be governed by steady principles; though not easily discoverable by human sagacity and enquiry.” (It’s worth reading the whole section on this in the Enquiry, if you haven’t already.)
But this is not something to fear or lament. Our lives together depend on the assumption that people “are to continue, in their operations, the same, that they have ever found them.” Without this regularity, there would be no constancy of character or moral values, no skills or abilities that we are reliably able to use. The more unpredictable we are, the more random the causes of our actions and beliefs. And what is the value in being a random thought and action generator?
So perhaps it will be a good thing if one day ChatGPT really could write this newsletter for me. It would force everyone to face up to the truths Hume so eloquently described nearly three centuries ago. Finally, we would embrace ourselves as fully part of nature, as subject to its regularities as the weather, but no less wonderful for it.
News
The big news this week is that barring death, disease or disaster, I am due to appear on BBC Radio Four’s Start the Week on Monday morning at 9:00 (repeated in edited form at 21:30). I’ll be alongside Sarah Bakewell, whose latest book on Humanism I gave a rave review for in the Literary Review (paywall alert) and the novelist Leila Aboulela, who is new to me but a wonderfully serendipitous discovery. If you miss it, it will be available afterwards on BBC Sounds and other podcast outlets.
I’ll also be interviewing Sarah Bakewell for a Bristol Ideas event on 25 April.
Episodes in series five of the Microphilosophy podcast are piling up. Each episode in the series features two philosopher guests with their advice on how to think better. It takes as its cue my new book How to Think Like a Philosopher, which offers 12 key principles for a more humane, balanced and rational approach to thinking. You can subscribe to the series at Apple, Google and all the other usual podcast outlets. Out already we have:
Episode one with Lisa Bortolotti and Rebecca Buxton, recorded live at St Georges, Bristol, on “doing your own research” and the external conditions required for thinking.
Episode two with Patricia Churchland and Owen Flanagan, on following the facts, without assuming they speak for themselves.
Episode three with Peter Adamson and Tom Kasulis, on how to use thought experiments and how we can be misled by concepts.
Episode four with Clare Chambers and Lucy O’Brien, on the principles of charity and sincerity, and the need to abide in uncertainty.
Just time for a quick reminder that the online benefit conference for Ukraine started yesterday (17 March) and ends tomorrow. I believe talks will be available afterwards. It asks What Good Is Philosophy? There are some great speakers, including Margaret Atwood. Attending is less important than donating to support the Kyiv Mohyla Academy.
Remember that if you enjoy these newsletters and would like to support my work, you can get access to exclusive content and regular online discussions by becoming a supporter.
Until next time, if nothing prevents, thanks for your interest.