This post is meant to point out some of the ways in which I think mainstream journalism has been dropping the ball in its coverage of large language models. I'm not an AI expert, but I can tell when journalists are confused and out of their depth when writing about IT, so here are a few things I think they're missing, and that you should know about. I'm only talking about large language models because one of the problems in this field is that the press lump everything labelled "AI" together, and that's not very helpful.
Real, boring risks
When you interact with ChatGPT, or any other LLM-powered chat interface, you're sending all of your text prompts across the internet to OpenAI or some other big Silicon Valley operator, and they are going to do whatever they like with it. You should definitely not put anything into it which you wouldn't want read out in a courtroom or printed on an A4 flyer and taped to a telegraph pole, or used to train the next generation of large language models.
The official line from ChatGPT is that you can opt out of this, but they are no more to be trusted on this than any other big tech company. They'll use all the data they can get for whatever reason they see fit until they get in trouble for it.
OpenAI's name has played well with the press, but their data sources, models and internal practices are not transparent, which is another reason to not take their disclaimers at face value.
In general, large language models open up a bunch of engineering and governance risks. They open up whole new security risks (look up "prompt injection" for more details) but they're so cool and magical-seeming that people are likely to build them into systems without allowing for these. And some of these risks may be intrinsic to how LLMs work and thus impossible to mitigate.
A related but more vague engineering risk is that organisations will use LLMs to automate infrastructure and generate code, and because they seem cool and magical, and have much lower hourly rates than developers, the usual engineering safeguards and standards will be slackened, and we'll end up with flakier, more vulnerable systems.
Governance risks include: people putting private or confidential data into them; that data being leaked back out in subsequently-trained models; biases from their training data being reflected back in their decisions; the likelihood of Robodebt-style situations where institutions use them to sidestep procedure.
More generally, we're going to see a lot of pressure from the institutions we work for to build new tools on the platform of a company with every reason to want to get a monopoly in this space, and software giants like Microsoft are going to start integrating these things into applications which we really want to be reliable and secure, like spreadsheets.
I'm not going to talk about the risk of job losses from AI, partly because I think answers to that mostly reflect one's prejudices about what sort of work is real (tell me who you think will be replaced by a chatbot and I will tell you who you are) but mostly because that's the sort of risk which journalists already like to talk about.
Imaginary, exciting risks
Much more press attention gets given to a less boring form of AI risk: whether they're going to achieve human-level general intelligence, then rapidly advance to superhuman levels and do bad things. There's a mythology of AI as embodied in pop culture, and tech people often sneer at "Hollywood AI", but the tech industry has its own AI mythology, one which is all the more powerful because we don't acknowledge that it's based on thought-experiments (also known as "thoughts") and science fiction novels. And we get really excited and emotional about this stuff: if you think the mainstream media are going wild about it, you should see what the programming forums look like.
OpenAI started as a nonprofit in the AI risk community, but now inflating AI risk seems to be part of their marketing strategy - if ChatGPT is a super autocorrect, ok, but if it's going to become self-aware and bootstrap itself into the singularity, that's very exciting. And urging the government to regulate AI seems to me to be part of their monopoly play.
Arguing either side of this debate is not the point of this post, although when I read those programming forums, I think of an old Captain Goodvibes comic where three pages of psychedelic nonsense started with the caption "light scoobs and start wanking".
But the AI risk and rationalist communities have been being publicly weird about this issue on the internet for more than a decade, they have links to people with some really bad political positions on race and intelligence, and journalists should be doing better at digging into this history and not taking the AI risk stuff at face value.
Ask-an-expert
There's a subgenre of AI story where some expert from either the academic or commercial side is wheeled out to pronounce on the exciting kind of AI risk. I feel uneasy saying this in the light of how much the idea of expertise has come under attack both during the pandemic and the climate change debate, but I don't think expert opinion of this kind counts for very much when it comes to current events. One of the curious things about AI is that for many decades, people have held dogmatic opinions about what it would be like or whether it would be possible, in the absence of much actual evidence. Now that things are advancing very fast - not in a singularity way, just a very rapid expansion of what's technically possible - there's no reason to suppose that Johnny Bighead's particular take is better or worse then anyone else's.
One exception to this rule is that if an article quotes Elon Musk's opinion about any of this, you can ignore it: his main contribution to AI discourse has been to lie about his self-driving cars.
Another exception is if the person is an AI ethicist who's recently been fired by a tech giant. You should pay attention to them.