Wednesday, November 25, 2020

Philosophy and Simulation: The Emergence of Synthetic Reason

Generally, I'm a fan of Manuel DeLanda, but this book was a disappointment.  After reading Difference and Repetition, I really wanted to pursue two ideas that were touched on fairly briefly there -- the idea of the simulacrum and its relation to simulation, and the idea that Kant takes for granted the conscious unity of a priori synthetic reason, precluding its emergence from the interaction of smaller, passive, unconscious syntheses.  You can see from the title why I picked this volume off the shelf.  Unfortunately, it is misnamed.  DeLanda does very little philosophy in this book.  The simulation bit only refers to computer simulations of systems like the weather, or fluid dynamics, or Kauffman's autocatalytic sets, or Axelrod's iterated Prisoner's Dilemma tournaments -- an idea of simulation that is related to what Deleuze had in mind, but only tangentially.  And finally, DeLanda thinks he addresses the problem of the subtitle by using modern assembly line automation software to discuss how the Egyptians learned to build pyramids.  So while he discusses interesting stuff, it wasn't at all what I was looking for.
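For anyone who hasn't run across them, Axelrod's tournaments pitted simple strategies for the iterated Prisoner's Dilemma against one another, and famously found that the forgiving tit-for-tat did remarkably well.  A minimal sketch of the setup (my own toy code using the standard payoff matrix -- nothing here comes from the book):

```python
# Toy iterated Prisoner's Dilemma in the spirit of Axelrod's tournaments.
# The payoff matrix is the standard one: (my payoff, your payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each strategy only sees the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # one sucker's payoff, then mutual defection: (9, 14)
```

The interesting emergent result, which DeLanda recounts, only appears when you run whole populations of such strategies against each other over many generations.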

DeLanda devotes most of the book to reviewing a number of computer simulations that try to model how various puzzling phenomena may perhaps have emerged in evolution.  So he summarizes, in a kind of pop-science style, toy models like those that attempt to explain how a population of replicators, or cell membranes, or memory, or altruism, or language, could have emerged spontaneously from the blind interaction of lower level components.  These summaries are fine as far as they go.  DeLanda seems perfectly competent to explain this stuff (judging from his explanations of the ones I was familiar with).  But he has none of the wit or literary skill to make these explanations very interesting.  It's dense, dry reading even if you already understand how the NK model or the perceptron works.  It kinda reminded me of my father's explanations of battery chemistry -- simultaneously way more than you ever wanted to know, but still not enough to feel confident you wouldn't blow yourself up if you tried it on your own.
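For what it's worth, the core of one of these toy models fits in a dozen lines.  Here is a perceptron learning the AND function with the classic update rule -- my own minimal sketch, not code from the book:

```python
# A single perceptron trained with the classic error-correction rule.
# My own toy illustration of the kind of model DeLanda summarizes.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge the weights toward each misclassified example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred   # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(preds)  # the learned boundary classifies AND correctly: [0, 0, 0, 1]
```

The point of such models isn't the code, of course, but the claim that something memory-like emerges from repeated dumb adjustments.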

Which makes you wonder who the target audience is here.  On the one hand, these mathematical simulations are pretty technical and far afield from my impression of what interests the average philosopher.  On the other hand, DeLanda's summaries are just scratching the surface of all the work that's been done on each of these, and I don't see how his recaps would contribute much to the understanding of someone who already knew the models well.  Is he trying to interest philosophers in computer science, or computer scientists in philosophy?  Or is he just trying to make the case that scientists should embrace simulations more?  Don't they already do quite a lot of this though?  Since it's not clear who the audience is, you start to get the impression that maybe you're just reading DeLanda's notes to himself.  He spent a lot of time understanding each of these simulations, and he just wrote down what he learned in compressed format.  The impression is amplified by the fact that the book is absolutely chockablock with typos and missing commas that make sentences needlessly difficult to parse.  After all, who edits their personal notes?

For me, the most interesting part of the book was actually the appendix, where DeLanda pulls back to summarize the summaries and link them to the larger project which he calls "assemblage theory".  In fact, the appendix really contains all the philosophy that the book has to offer.  If you've read other DeLanda, you already know that he's a popularizer of Deleuze and Guattari who takes their key insight to be that the virtual is the structure of a space of possibilities of some system.  I think this is a fine interpretation of the virtual (though hardly the only possible one).   DeLanda has done a lot of good historical and philosophical work with this concept in both A Thousand Years of Nonlinear History and Intensive Science and Virtual Philosophy.  The question is really what new insight it has to offer us in the present context.  

Unfortunately, this is where the book really falls down philosophically.  Here DeLanda does little more than reiterate the idea that assemblages (systems of interacting elements) have both a concrete mechanism and a mechanism-independent structure of their space of possibilities.  The latter may sound esoteric, but it's actually quite familiar -- we call it math.  For example, the mechanism-independent structure of the space of possibilities for a system composed of two coupled liquids of different temperatures is just ... equilibrium.  The hot side gets cold and the cold side gets hot (that's exactly what made the McDLT such a revolution!).  Of course the possibility space could be more complicated, and we could get interested in the time course by which the system approaches equilibrium, or what happens if it is held away from equilibrium, but basically in all these cases, we're just saying that the virtual is nothing more than the structure of the phase space of a dynamical system.  Since you can use the same math to describe, say, the way a liquid approaches thermal equilibrium and the way a market achieves a clearing price, then, sure, there is a "universal mechanism-independent structure" of what these types of systems can do.  That's a truly fascinating observation that amounts to being blown away by the fact that math works.  I too am blown away by this fact.  Some hairless chimps came up with a symbol system whose space of possible behaviors exhibits the same structure as waves and planetary motion.  That is to say, a bunch of grey matter in the human skull can simulate other matter.  How does this work?  Great question, but one DeLanda doesn't even really try to address.  He states that since they can be shared, these virtual structures must exist objectively, outside the systems that incarnate them.
On the other hand, the structures themselves need some concrete mechanism to be visible, so we should consider them immanent in matter itself.  Again, I'm not arguing with this conclusion at all.  I'm just asking whether we've posed the question well enough to advance.  It seems to me that DeLanda hasn't even begun to address the really deep philosophical problem this implies.  If some matter can run a simulation of other matter by repeating some (portion of a) universal structure immanent to it, then matter is nothing much like the bunch of marbles we normally imagine it to be.  So do the simulations go all the way down, or is there some sort of 'base matter' that would be like the hardware of the universe?  And how would we be able to identify this hardware using only the simulation software installed in our brains?
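To make the deflationary point concrete: the "same math" in question can be as humble as a single relaxation equation.  The sketch below (entirely my own illustration, with made-up parameter values -- DeLanda offers no such example) uses dx/dt = -k(x - x*) as a toy model for both a cooling liquid and a price adjusting toward its clearing level:

```python
# One relaxation equation, dx/dt = -k * (x - x_star), as a toy
# "mechanism-independent structure" shared by two different mechanisms.
# All parameter values here are invented for illustration.

def relax(x0, x_star, k=0.5, dt=0.1, steps=200):
    """Euler-integrate dx/dt = -k * (x - x_star) toward equilibrium."""
    x = x0
    for _ in range(steps):
        x += dt * (-k * (x - x_star))
    return x

# Same function, two incarnations:
coffee_temp = relax(x0=90.0, x_star=20.0)  # hot liquid cooling to room temperature
price = relax(x0=2.0, x_star=5.0)          # underpriced good rising toward clearing
print(round(coffee_temp, 2), round(price, 2))  # both settle at their attractors
```

Nothing about the function cares whether x is degrees Celsius or dollars -- which is exactly the observation DeLanda makes, and exactly where he stops.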
