Jake Browning

"Consciousness Winter" wasn't that cold--and the "spring" generated more heat than light

Updated: Sep 24, 2023

There is a story that goes, "In the 20th century, you couldn't talk about consciousness. Before Edelman, Crick, and Koch, it was a bad word that no one would touch." The recent criticism of integrated information theory (IIT)--endorsed by Edelman, Koch, and its central innovator, Giulio Tononi--has led some to fear that a second "consciousness winter" might set in, shutting down consciousness research. If this theory is bad science, so the argument goes, might funders conclude that every theory of consciousness is bad science? Might talk of consciousness once again become taboo?


There is some truth to the tale of a consciousness winter: behaviorists hated the term and largely refused to discuss it directly. Their rise to prominence in the United States in the early twentieth century created a few generations of scientists who were deeply uncomfortable with the term. And the United States, after World War II, was the pre-eminent site of scientific research for decades.


But the idea that behaviorists successfully shut the conversation down is a stretch. Research went on, as LeDoux, Michel, and Lau (2020) show:


The Crick and Koch papers were indeed important for stimulating enthusiasm for research on consciousness and the brain in mainstream psychology and neuroscience. However, this was hardly the beginning of scientific interest in and research on consciousness. In the 1960s and 1970s, studies of split-brain, blindsight, and amnesia patients laid the conceptual foundations for later work on consciousness. Of note is the fact that even then most of this work focused on visual consciousness because of the progress that had been made in understanding the visual system. Additionally, consciousness and the brain were the subject of a number of scientific conferences starting in the 1950s that were attended by leading researchers in psychology and brain science. Furthermore, theories about what consciousness is and how it relates to the brain were proposed by a number of prominent researchers long before the 1990s, including Karl Lashley, Wilder Penfield, Donald Hebb, Roger Sperry, Sir John Eccles, George Miller, Lord Brain, Michael Gazzaniga, Leon Festinger and coworkers, George Mandler, Tim Shallice, and Michael Posner and coworkers among others.

Philosophy was no different in keeping up the conversation (especially outside the English-speaking world). Behaviorist-inspired philosophers did speak of consciousness, though critically. They talked about the "inverted spectrum" and similar thought experiments in much the way we do today (compare Reichenbach and Dennett on the matter, for example). The issue was not that consciousness per se was a problem; it was that people puffed it up and made it seem magical. Behaviorism made that option seem unacceptable--a stance many in the field still hold.


It also became a major topic of research in the 1970s and 1980s, both in philosophy and science. The rise of cognitivism brought with it a willingness to talk about internal mental states--behaviorism's great enemy--of which consciousness was only one type. Cognitivism also demanded that researchers specify which processes, like working memory, are conscious and which, like Chomsky's syntactic parsing, are not. This brought with it interest in the function of consciousness, or whether it had one at all. Some functionalists held that consciousness is epiphenomenal, arguing philosophical zombies and inverted spectra are possible and yet behaviorally undetectable. Others, like Dennett in Content and Consciousness (1969), argued the behaviorists were broadly right: most consciousness talk is mythical and should be explained away. But these theorists typically argued consciousness serves some function, and thus that there would be some behavioral difference between conscious and unconscious beings.


Thus, while some people were shying away, there wasn't a genuine "winter." This isn't like AI. AI received much of its funding from the military, which wanted intelligent systems to replace older devices, like fire-control systems. The work of computers in cracking the Enigma code was still at the forefront of many minds. This led to enormous sums of money being funneled into the field to research everything from machine translation to object detection. The money, though helpful for basic research and some commercial applications, produced less military value than hoped, resulting in the boom-bust cycle we label "AI spring" and "AI winter."


Psychologists, neuroscientists, and philosophers, by contrast, were hardly swimming in funds. Their fields just moved more slowly, and much of the interesting work was in behaviorist research on animals--an area where ascribing consciousness is fraught under any scientific approach. There was no "consciousness winter." Most people simply worked on other topics (as they do today), and constructing comprehensive theories takes a long time.


Why do some people believe in a consciousness winter? Because Edelman, Crick, and Koch told them there was one. Edelman (Nobel laureate for work on the immune system) and Crick (Nobel laureate for the discovery of the structure of DNA) told a tale in which outside voices were needed to work on a topic scientists were ignoring. Dishonest narratives are common in science, so on its own this isn't a big deal. The problem came from the re-framing these researchers brought, which largely ignored all the research on consciousness that already existed. It not only ignored this scientific research but also avoided academic scrutiny, instead appealing to rich foundations to fund outside research centers "brave enough" to investigate consciousness. If no one in academia dared study consciousness, so the argument went, we needed non-academic approaches that followed non-academic methods.


The "consciousness spring" largely stemmed from this private money, which allowed researchers to pursue their idiosyncratic theories without scrutiny from the usual institutional safeguards, like peer review. The money wasn't distributed to consciousness research tout court; it was given to the people willing to make the big claims.


IIT is the most blatant example of a "big claim" theory of consciousness. Its starting point is dubious philosophy and dubious science. The bad philosophy is the belief that consciousness forms an integrated experience that brings together lots of rich, nuanced information about the world. Mine doesn't, and many other people are skeptical of the claim too. Part of my skepticism comes from having read enough scientific reports on the limits of conscious attention to know it is pretty unlikely that consciousness is as rich and integrated as all that. I'm not willing to trust my phenomenology that far in general, and certainly not willing to trust it over good evidence to the contrary.


Worse, this was bad philosophy and bad science before the theory existed. Re-read Dennett's 1991 book Consciousness Explained. You don't need to agree with his theory (global workspace with maybe some higher-order theory mixed in, a position he still broadly holds). Just focus on his treatment of the science of consciousness: the discussions of blindsight, memory, and the critiques of phenomenology. He may be wrong. But the evidence he brings in is the starting point for theory-building, and it makes it really hard to introduce a theory arguing that our conscious experience is richly detailed and integrated.


Tononi and Edelman did skip it.* IIT passed right by the work of providing scientific evidence and introduced a grand theory based on mathematics rather than brains or minds. If you introduce a grand new theory, you should explain most of the available evidence at least as well as the other theories do, explain things they can't, and offer surprising predictions. If you don't have that, you probably shouldn't offer a grand new theory. You should offer a hypothesis, maybe use it to test a few things, maybe dabble at the margins for a decade or two. Then, if your hypothesis works out, you might write a nice book.


IIT skipped the dabbling and evidence. It took its message to the people, writing pop books and giving talks that left the scientific evidence largely aside in favor of proselytizing. And the books and talks were really exciting because these researchers thought big: it's the first theory to answer the hard problem! Consciousness research needs to start from the brute reality of qualia! Consciousness needs to explain why it feels like something! Consciousness is about subjectivity--about why it feels like something to be you!


This message was attractive because academic consciousness researchers were instead saying lame things: consciousness might be merely heightened activity of certain neural pathways compared to others, "qualia" might be reducible to the way our perceptual systems represent objects, and a lot of the integration and richness might be a user illusion.


This wasn't just boring; it was kind of frustrating. Many people feared an explanation of consciousness wouldn't be satisfying--that is, that we wouldn't intuitively believe it. A theory of consciousness, many believe, should "feel" correct. It is unclear why anyone would expect this, especially after science introduced us to quantum mechanics, but the expectation persists. And academic scientists have a terrible habit of unraveling rainbows, leaving the natural world disenchanted, stripped of its wonder and mystery.


So if you want an enchanted nature and your rainbows left raveled, IIT is the only "scientific" theory of consciousness on the block. There is a conviction, for many, that the other theories are just dabbling at the margins--that they don't really explain why consciousness seems rich, integrated, deeply personal, and downright magical. This is inaccurate, of course. Michael Graziano's attention schema theory actually explains all of this, often in painstaking detail. He may be wrong, but he gets that we want consciousness to be cooler than just "brain states."


All the signatories of the letter criticizing IIT get the desire for consciousness to be wonderful. Totally get it. That's why they study consciousness. But their loyalties lie with science, and following the evidence leads to the ultimately disappointing conclusion that consciousness is just going to be some function of the brain (or maybe a byproduct of some set of functions). Consciousness science is sadly, and frustratingly, boring: your big questions about life, the universe, and everything will not be answered by a complete theory of consciousness. It will turn out to be rather dull and technical.


Does that mean IIT is a "pseudoscience"? That's not my word. But IIT certainly isn't a very attractive philosophical position, nor does it explain the accumulated scientific evidence better than the alternatives. And it leads to counterintuitive results, such as conscious logic gates and conscious plants. Counterintuitive doesn't mean wrong, of course; but some of these counterintuitive results are also untestable, because a conscious plant would behave no differently than an unconscious one. Still, none of this demands we use the "p-word."


Regardless, it is a bad theory. And while calling out a bad theory may make the whole field of consciousness research look bad at the moment, that's no excuse to let a bad theory go on causing havoc--especially since IIT defenders, like Anil Seth, keep publishing pop books. Seth's book is honest enough to admit that the empirical evidence gathered for IIT is inconsistent and doesn't support even the modest claims he is willing to make. His peer-reviewed papers defending IIT are mostly an ugly mix of damage control for a bad theory: arguing we can test it (if properly reinterpreted), render it less counterintuitive (by setting aside certain claims), and so on. These are signs of a painfully unhealthy theory. It is rather pathetic that 23 years of ample funding and breathless support have given us so little.


Might excising the tumor of IIT lead to another "consciousness winter"? No--the first one didn't happen. The so-called "winter" was just normal, slow, banal science. That is usually what good science looks like, and we shouldn't fear a return to it. There might be less billionaire money flooding in, but so much the better if that money is just being thrown behind bad theory. It isn't a boon to consciousness research; it is a boon to people who largely avoid peer review and aren't advancing us toward a better picture of consciousness. Private money should fund rigorous empirical research, as the Templeton funding for COGITATE did, not more pie-in-the-sky theorizing.


We should worry, moreover, that continuing this "spring" of consciousness research will make the whole scientific project harder. If a group of "outsiders" with private funding and minimal institutional scrutiny convinces everyone that consciousness is something it isn't--that it is magical, ubiquitous, private, ineffable, super detailed, and all-around weird compared to everything else in the universe--then it will be hard to change minds later. Naive intuitions, especially ones egged on by wild-eyed theorists, will make it harder for a scientific theory--one that will inevitably feel wrong to many people--to gain acceptance. It is necessary, for a mature science, to occasionally kill off a feeble theory--especially one doing more harm than good.


*Which is weird for Edelman. His earlier theory, discussed in The Remembered Present, seemed rather normal, circling ideas similar to global workspace and higher-order theories. Why he shifted from those ideas to IIT is unclear (to me, anyway).
