Like every other nerd, I spent way too much time last week playing with and worrying about Chat-GPT, but you can relax, I've stumped it:
I never thought that my addiction to lipograms would lead to any useful conclusions, but these exchanges seem to capture exactly what Chat-GPT is not doing. Writing a lipogram, like writing a poem in a strict form, involves holding two things in your head: the meaning you are trying to express, and the linguistic resources with which you are permitted to express it. There's feedback between the two, of course. That's part of what makes the process enjoyable, if you like such things.
But Chat-GPT is not doing anything like this, despite how easy it is to anthropomorphise: it's simulating certain features of texts, and its internal states shouldn't be thought of as "thoughts". It's illuminating that the first example has imitated the sort of whimsical subject matter often found in lipograms and other forms of wordplay, without actually obeying the instructions.
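For anyone unfamiliar with the form: a lipogram is a text that avoids one chosen letter entirely, so the constraint is mechanical to check even though satisfying it while still saying something is hard. A minimal sketch in Python (the function name is mine, purely illustrative):

```python
def is_lipogram(text: str, forbidden: str = "e") -> bool:
    """True if `text` never uses the forbidden letter (case-insensitive)."""
    return forbidden.lower() not in text.lower()

# The constraint is trivial to verify, which is why the failures are so easy to spot:
print(is_lipogram("A lipogram omits a particular symbol throughout"))  # True: no 'e'
print(is_lipogram("This sentence uses the letter e freely"))           # False
```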
Another objection to Chat-GPT which I haven't seen raised yet: it rests on the implicit assumption that all the texts it's been trained on are correct. People seem very surprised when it gets things wrong, but, as with GitHub Copilot, no amount of GPU training can make something correct out of buggy or mistaken inputs.
I've said this before, but we nerds have been dreaming of robot friends for too long; we are not to be trusted on the matter.
-
An interesting model of distributed computation - I'm saving a proper read of the details for my holidays.
-
Stack Overflow has long been flooded with poor-quality answers, but now they're being generated with Chat-GPT.
-
Building a VM inside Chat-GPT - except surely what's happening here is that the LLM is emulating countless Linux tutorials and walkthroughs. If I write a how-to about a Linux shell, am I "emulating" it? Sure, in a sense.