Jake Browning

If Language Models are alien intelligences, then they're a failure

Many people at the moment are excitedly declaring that large language models (LLMs), such as ChatGPT and Bard, are "alien intelligences." The claim is that, much like the cephalopod, LLMs are markedly different from anything familiar to humans and, as such, should be understood as doing something else entirely: a new form of intelligence, one distinct in kind from anything seen in other biological species, let alone humans.


We shouldn't discount the possibility that, someday, an AI will become unlike biological species in non-trivial ways. But applying this to a language model is a massive confusion. Language possesses what philosophers call "derivative intentionality": words and sentences mean nothing on their own, but only mean something relative to an interpretation by someone else who understands the representational schema. Think about a graph: if you've never seen one before, it's just a bunch of random lines. Part of scientific literacy is learning to treat graphs not as squiggly lines, but instead as meaningful representations of something else, like the state of the economy. If you understand the meaning of the graph, it is because you understand what it is representing: the line represents GDP, and the fact that it goes up and down gives you information about the performance of the economy at different points. If you don't understand what an economy is, then you can't really understand the graph; at best, you can read the graph the same way someone in a frat who doesn't read Greek can at least spell out the Greek letters of a word.


This helps us get clear on what we want from an AI trained on representations with derivative intentionality. If you train a model on images with the goal of driving a car, you hope it represents the meaning of the images roughly the way humans do: it should represent its visual space as occupied by persisting things, like pedestrians, roads, other cars, and stop lights. It would be really bad if it were "alien," because that would indicate it isn't reading the images right. If it doesn't grasp that it is part of a persisting world, then it will think cars disappear when in a blind spot, that pedestrians behind a tree don't exist, that the world is only 2-dimensional, and so on. We don't want it to come up with a clever new interpretation of visual images; the more alien it is, the more likely it is to fail in edge cases.


Language models are even more constrained. Since they are trained on human-created sentences, there is pretty much always a concrete meaning to each sentence: a relatively specific, right interpretation that a listener should hear or read when encountering it. If a model has an alien understanding of the sentences, it is a failure. That's not a partial claim; that's a full stop. The reason we test models on things like the Winograd Schema Challenge (for example, deciding what "it" refers to in "The trophy doesn't fit in the suitcase because it is too big") is that they are supposed to show they understand language like we do, by possessing a roughly comparable "background understanding of the world." If they do, they will interpret sentences the same way we do. If they are solving it in an "alien" way (which they often are), that is evidence they don't really have a good grasp of the world. If they solve common-sense problems posed in language while lacking common sense, that tells us it's a bad test.


So, I'm kind of hopeful that LLMs aren't all that alien. I think they possess a superficial understanding of the world, one that interprets a shockingly high number of sentences similarly to how a human would. But if I'm wrong and they really are alien, then they are also less interesting. That is what many critics of LLMs have argued: these machines do not interpret sentences correctly, do not grasp what they mean, and only seem to do so because we foolishly project mindedness onto them. If they really are alien, then the critics are right.
