When I saw that Christof Koch, once a prospective graduate advisor of mine at Caltech and now the head of the Allen Institute for Brain Science, had written a new book about consciousness, I was pretty excited. Not only has Koch done a lot of great work on the visual system over the years, but he also struck me as more broadminded than your average research scientist. Indeed, since the days when he worked on very specific neural systems, he has gone on to become a leading proponent of the Integrated Information Theory of consciousness -- a big-picture mathematical theory that purports to calculate whether it feels like anything to be a given chunk of matter.
Unfortunately, the book was pretty disappointing. Koch does a good job of posing the question of consciousness by distinguishing it from intelligence, attention, linguistic ability, and information processing in the conventional sense. All he's concerned with is the most vexing problem of what it means to have an experience at all, for a particular state to feel like something. He also does a reasonable job of outlining what evidence there is that this particular theory (IIT) is the right one. This isn't something one can prove of course, especially when we're talking about a phenomenon as slippery as consciousness. But he at least does a good job of talking about what testable predictions have so far been made, and extrapolating what other surprising predictions the theory implies. However, what he does not do is give you a decent explanation of the damn theory itself.
Koch tries to pack his entire description of IIT into a single 12-page chapter. After reading it three times and trying to work through the simple example system he shows (but does not explain), I still have only the vaguest notion of how the theory works. As far as I can tell, the basic idea is that some parts of the universe are so densely and reciprocally connected by causal interactions that cutting them into pieces would produce some sort of qualitative change in how they behave. The states of these parts of the universe "matter to themselves," in the sense that they form a sort of self-causing feedback loop; the loops that can't be made any smaller without breaking them in this sense are the parts that are conscious. This is an appealing idea to me, very reminiscent of Spinoza's conatus, but as I say, Koch's description of even the basic notion is so poor that I'm not confident of my interpretation.
The actual theory is entirely mathematical, and meant to provide a precise calculus behind the basic intuition that consciousness is another name for the causal organization of matter. I wish I could explain that theory to you. The overall point is clearly to calculate one number Φ that measures consciousness. However, even though I don't see any math in it above my pay grade, the explanation Koch gives for this calculation is so crummy that I'd have to carefully read another source to be able to tell you about it. In an otherwise fairly readable book this seems like an abject failure to me. I mean, in an unforgivable move, Koch doesn't even spend a page or two working through the simple example system he presents. This part, "the heart of the book" as Koch himself calls it, is just a total and complete flop.
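Lacking that explanation, here is the sort of toy calculation I wish Koch had walked through. To be clear, this is not the actual IIT definition of Φ, which involves a much more elaborate cause-effect analysis; it's just a simplified "integration" measure of my own devising, run on a made-up three-node logic network. The question it asks has the right flavor, though: how much information does the whole system carry about its own next state, beyond what you recover when you cut it along its weakest seam and read off each piece separately?

```python
from collections import Counter
from itertools import product
import math

# A made-up 3-node boolean network (my example, not Koch's): each node
# computes a logic function of the others on every time step.
def step(state):
    a, b, c = state
    return (b ^ c, a & c, a | b)

def entropy(counts):
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def mutual_info(pairs):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), computed from an exhaustive sample
    xs = Counter(x for x, _ in pairs)
    ys = Counter(y for _, y in pairs)
    xys = Counter(pairs)
    return entropy(xs) + entropy(ys) - entropy(xys)

# Run every possible state through one update (uniform input distribution).
states = list(product([0, 1], repeat=3))
pairs = [(s, step(s)) for s in states]

def part_info(idx):
    # Information a subset of nodes carries about its own next state,
    # with the rest of the system ignored.
    proj = lambda s: tuple(s[i] for i in idx)
    return mutual_info([(proj(s), proj(t)) for s, t in pairs])

whole = mutual_info(pairs)  # what the whole system "knows" about itself
weakest_cut = min(
    part_info(p1) + part_info(p2)
    for p1, p2 in [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
)
# "Integration": what is lost when you cut the system at its weakest seam.
phi_toy = whole - weakest_cut
print(phi_toy)
```

Even this crude version captures the intuition: the network above scores 1.75 bits of integration, whereas a system whose nodes don't causally interact (each node's next state depending only on itself) would score exactly zero, because cutting it costs nothing.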
It's hard to overstate how disappointing this failed chapter is for the book as a whole. If we don't come away with at least some genuine understanding of the theory, how can we evaluate whether it responds to the problems Koch outlined at the beginning, or is useful in the applications (mostly thought experiments at this point) he mentions towards the end? This is a shame, because I think there is something really intriguing about the theory. For one, it has a flavor similar to the interpretation Manuel DeLanda gives of Deleuze's philosophical system -- the virtual is defined as the structure of the phase space of the actual. It also leads to several counterintuitive thought experiments. For example, one of the most surprising claims of IIT is that even a perfect computational simulation of a conscious system will not itself be conscious. This follows straight from the basic premise that consciousness is not about the functional, input-output relations between the world and a chunk of matter, but about the internal causal architecture of the chunk itself. As a result, Koch ends up claiming that a brain simulated on a von Neumann architecture cannot be conscious, but one simulated on neuromorphic hardware could be. In other words, some day Google may simulate me in such a way that it can predict all of my behavior and store all of my memories, without that simulation being at all conscious. Another intriguing example comes up at the end of the book in reference to something called "expander graphs," which are organized in a way similar to the topographic maps of visual, auditory, or somatic sensation that are so important to our brains (and phenomenology). These systems illustrate the opposite kind of surprise from the first example: while no one claims they are highly intelligent or do anything beyond a simple function, IIT predicts that they have a surprisingly large amount of consciousness.
I'd love to be able to think more about these debates, but unfortunately Koch has not equipped us to do so. Perhaps he's a zombie scientist?
Update: The physicist Scott Aaronson, a critic of IIT, manages to give an understandable technical definition of Φ in a blog post, which only makes Koch's failed attempt more mysterious to me.