I put filters on "ChatGPT" and "LLM" on Mastodon because the constant flow of discourse, to which I was also contributing, was starting to wear me out, so I'm writing this post with a slightly guilty conscience.

Here's something about nerd culture that has taken me longer than it should have to realise. (I hate the term "nerd culture" but I can't think of any other way to describe this.)

You know how we emit quotes from Monty Python or the Simpsons or XKCD? We do the same thing in all domains, including the ones where people expect us to have particular expertise, even when we don't. This practice has the same relationship to actual knowledge or insight as a head full of Monty Python quotes does to being funny.

If you're not a technical person, I want you to bear this in mind when you read a take about ChatGPT from someone in IT which is peppered with references to the Turing test or ELIZA.

A while ago I posted about how we're probably too emotionally attached to our imaginary robot friends to be very objective about what's happening with large language models. There are a couple of other tendencies which make us untrustworthy guides: we love apocalyptic scenarios, and when faced with change we tend either to act like we've seen it all before (and emit references to something that happened in the 1960s) or to treat it as so unutterably new that it invalidates everything that went before it.


While I'm adding to the sea of AI takes: there's a pattern I've noticed where someone will refer to ChatGPT as a "stochastic parrot" and someone else will reply with "that's just what humans are". This is, in a way, a replay of Skinnerian behaviourism: the idea that the mind, or any internal state of a person, is irrelevant to the study of what they do, which can be reduced to a mapping, however complex, from sensory inputs to behaviour. (Turing's imitation game brackets the question "can a machine think?" in a similar way, in terms which really only make sense in the context of logical positivism, another twentieth-century tendency which operated by denying the legitimacy of large areas of intellectual activity, but that's something for another post.)

Skinner aside, I think it's possible that the people who assert that the human mind is just a system for pattern-recognition are not just wielding Occam's Razor with abandon, but actually perceive the workings of their own minds as such. Here are two things I've come to believe about consciousness:

  • Introspection is still our best source of evidence for what sort of thing human consciousness is;
  • Individual humans' experience of consciousness is much more varied than we generally suppose.

This is the profound truth behind the dress (the photograph that some people saw as blue and black and others as white and gold), and there are other ways in which the internet is allowing us to discover that different people experience the workings of their own minds in very different ways. Some of these differences are marked enough to get the attention of the medical community, or to form distinct subcultures, but many are not. I think that our mental forms are much more diverse than our physical bodies, and that this is a wonderful thing.

The relevance of this to AI is that the question "can machines think?", like a great deal of philosophy, implies a tacit model of what it is like to be a thinking subject, and the answer an individual human gives to it will be very dependent on what they think their own thinking is like.

If we build conscious machines, what sort of consciousness will they have?