The fourth and last in a series of posts about how we think about coding and functional programming. The first three are The Homunculus, State, and Immutable and pure.
An algorithm is a procedure or formula for calculating a result, an idea which predates the invention of the computer by several centuries. Although an algorithm can be imagined as a series of instructions, it seems to me to involve a different metaphorical space than the language of imperative programming. An algorithm is a recipe, not a list of commands: it is exploratory.
The classic geek misunderstanding of how law works is based on a confusion between what the statements making up a computer program do - in theory, together with any inputs, they wholly determine the behaviour of the machine - and what laws do. The law of the land does not determine the actions of every person within a territory: in social terms, all law is about edge cases, defining which acts are not permitted by the state (criminal law) and providing means of redress and resolution in cases where the social relations between people break down (civil law). It's interesting to consider where an analogy of the algorithm falls on this spectrum: the algorithm does not say, thou shalt not do this (the law), nor does it say, thou must do this (the voice of the master), but rather, if you want an answer to this question, follow these steps.
The algorithm seems to be becoming one of the ways in which the general public understands computers, at the historical moment when AI is really starting to be a significant factor in our lives. This moment started with the launch of Google, but really got going with the widespread use of algorithms to predict our likes and dislikes on services such as Amazon or iTunes, and has become a significant factor in economics, given the role of high-speed trading in "flash crashes" and the global financial crisis. Significantly, an algorithm is not a person.
The imaginary is a huge factor in how people conceptualise AI: it's fair to say that in no other field have fantasies played such an important part in generating our ideas. We talk, half-seriously, about "robots taking our jobs", as if robots were not modern myth-figures, with roughly the same sort of value and relationship to reality as vampires, werewolves and Frankenstein's monster: the Robot is an emblem of the slave in revolt, or what human nature would be like with all emotions drained out, or the Pinocchio-like figure of an autistic, benign personality who yearns to become a real boy. This mythology aside, people hold extraordinarily strong beliefs as to whether "strong AI" is or is not possible ("strong AI" being sentient artificial consciousness with human-level or higher intelligence) in the absence of very much empirical evidence either way. As a teenager, I was one of those on the "yes" side of the debate, with a passion that I can recall perfectly well but the intensity of which I find a bit bewildering. Now, my answer to the question of whether strong AI is possible is that I honestly don't know, and I don't see any evidence that anyone else knows, either.
The robot-person is recognisable as a form of our homunculus: essentially, an artificial slave. With the advent of the algorithm as a way of understanding how computers are being used to shape our lives and economy, it could be that we are beginning to leave this model behind. The term "homunculus" comes from alchemy, one of the goals of which was the creation of an obedient para-human slave, like Ariel in The Tempest. Alchemy, remembered now with derision by its scientific descendants, was the ancestor of the modern chemistry which borrowed its name, shearing off the Arabic article al- that it shares with algorithm and algebra. It may be that we can no more use our dreams or nightmares of robots to foretell the eventual outcome of artificial intelligence than a person of the 16th century could anticipate modern industrial chemistry from the dreams of the homunculus or the philosophers' stone.
When we say that an occupation will be replaced by a robot or an algorithm, whether we do this ruefully or with contempt, we are passing an implicit value judgement on the people who now serve in that capacity: you are nothing more than a slave, mindlessly following the procedures and processes, the imperative program of your employment. I think that this form of comment says much more about how we value human labour, and how we see the relationship between manager and worker, than it does about the possibilities of artificial intelligence. At heart, it's based on the same fallacy as the geekly misunderstanding of how law works that I started from. The nerd sees law as a set of rigid rules which apply automatically to facts and situations; any of these rules which seem to be in conflict are treated as fatal errors, rather than the (unavoidable) tensions between different principles which exist in any system of human law, and which are resolved by the judgements and negotiations of the participants.
Though imagining how easily we'd all be replaced with machines is less extreme, it's based on a similar elision of what we actually do when we work. Employment is not reducible to muscle power, acquired skill, training or intellectual ability: there's an element of personal responsibility required of anyone's work. At minimum, and despite proverbial wisdom, you actually are paid just to show up: to make sure that someone is holding the fort or minding the store, to make decisions, to answer for what goes on under your watch. (The fact that so many cliches leap to the defence of this idea is, I think, proof of its validity.) Depending on your job, the aspect of it I'm describing may be more or less complicated, onerous or momentous, but it's always there. The advent of the machine has been described as 'dehumanising', but when we conceptualise employment as merely mind- or hand-power, without considering the element of care, we have dehumanised it without the need for any robot.
I'll end this essay, which is already four times longer than I'd expected, with a description of an algorithm I've worked with personally, and which I quickly started to anthropomorphise in spite of myself: GHC, the Glasgow Haskell Compiler. A compiler turns source code into a working piece of software. Haskell has a strong static type system: to return to our workshop analogy, there are very strict rules about what kinds of things can be put into which boxes. Languages with strict type systems can be a bit of a bore to code in, as they require a lot of verbose annotations describing the types of variables, but Haskell uses a process called type inference to reduce the amount of boilerplate, deducing the types of variables from how they are used in statements. For example, if you say "a = 8 + 9", it will deduce that 'a' is an integer.
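To make that a little more concrete, here's a minimal sketch of my own (an illustration, not code from any particular project): none of the small functions below carries a type annotation, and GHC will infer a type for each from the way it is used.

```haskell
-- A minimal sketch of type inference at work: the three functions
-- below have no type annotations, yet GHC infers a type for each.

-- Inferred as: greet :: String -> String
-- (the string literal and (++) pin the argument down to a String)
greet name = "hello, " ++ name

-- Inferred as: double :: Num a => a -> a
-- (all GHC knows about x is that it supports addition)
double x = x + x

-- Inferred as: firstWord :: String -> String
-- (words :: String -> [String] and head :: [a] -> a fix the type)
firstWord s = head (words s)

main :: IO ()
main = do
  putStrLn (greet "world")
  print (double (8 + 9 :: Int))  -- 34, echoing the example above
  putStrLn (firstWord "trolling the compiler")
```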
The type inference feature of the GHC is based on an algorithm so legit that it has a name (Algorithm W) and compared to the gymnastics which the compiler does to generate code, it's relatively easy to understand, or so I believe: I've never bothered to try, because I don't need to. The thing I found surprising about the compiler is how quickly I started to attribute a personality to a system which, although it's the extremely good work of many clever computer scientists, is not AI in the sense of an attempt to create a thinking machine. All developers who've worked with a compiler will know how it can seem like an enemy, and a spiteful one at that, but the GHC is the first compiler that I've felt on friendly terms with, because type inference is such a helpful way of allowing one to understand what was muddle-headed or badly-thought-out about a piece of code. When I was a couple of weeks into my first serious Haskell project I recognised a mode of work which I thought of as trolling the compiler: taking some code which I knew was slightly half-baked and compiling it in a good-natured way to get a reaction which would allow me to understand it. This phase was often followed by a second phase which I'd describe as the compiler trolling me. Partly, also, my reaction of surprise at this was that of a Perl programmer who'd spent too long in a language with inadequate tooling (for various reasons, both technical and cultural, Perl doesn't have a rich ecosystem of editors and development environments). But it was also excitement at the feeling that an algorithmic system was extending my own capacities as a programmer.
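To give a flavour of what I mean by trolling the compiler, here's a contrived, hypothetical sketch of my own (not the code I was actually writing): a slightly muddled attempt at a sort, the kind of objection GHC raises, and the repaired version.

```haskell
-- A hypothetical example of "trolling the compiler": write something
-- slightly half-baked, compile it, and let the type checker explain
-- what was muddled about it.

import Data.List (sortBy)
import Data.Ord (comparing)

-- Intended: sort a list of (name, age) pairs by age. The muddle: I've
-- applied snd to the whole list instead of passing snd itself to
-- 'comparing'. This version does not compile:
--
--   sortByAge people = sortBy (comparing (snd people)) people
--
-- GHC's complaint, roughly, is that it expected a pair and got a list,
-- which points straight at the confusion between 'snd' and 'snd people'.

-- The repaired version, once the compiler has had its say:
sortByAge :: [(String, Int)] -> [(String, Int)]
sortByAge = sortBy (comparing snd)

main :: IO ()
main = print (sortByAge [("Ada", 36), ("Grace", 85), ("Alan", 41)])
```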
We don't really have the right language to talk about this generation of tools yet, for all the reasons I've been describing. It's not collaboration, for there isn't a second person: it's something like a prosthetic for consciousness, but even that sounds clumsy. The way we talk about technology's advent is very often in terms of violence and rupture. This metaphor is really about the social and economic structures being damaged: if you want to get right down to it, it is a convenient displacement of actual power struggles between people and interest groups into a realm which is seen as technical and somehow beyond our control. I would like to hear more about how IT can extend and improve our own intuition and intelligence.
(When I first wrote this conclusion, I thought it seemed a bit drippy and utopian, but the familiar, negative images of dehumanising computers and threatening robots are what might be called a drippy dystopia: lazy thinking about social relations disguised as ironic or hard-headed grimness.)