Tuesday, April 9, 2019

DRAFT: Ubiquity

I recently read Mark Buchanan's Ubiquity.  It had some interesting ideas, though it was certainly one of those books that would be better confined to the dust jacket (aside: why do we complain about people's shortening attention spans and continual content hopping when in fact much of the problem is the over-production of filler content -- many books could easily be compressed into the space of a longish essay; they only get published as books because we have a poor system for monetizing essays; is it any wonder, then, that we read them very quickly, trying to extract the nugget from the inevitable cruft as efficiently as we can?  We read blogs this way too, again because it doesn't pay to edit them -- to wit, did you really just read this parenthesis?).

Anyhow, the book is mostly a pop-science restatement of the idea of self-organized criticality (SOC) as originally expounded by Per Bak, and if you are familiar with his work there's little new ground here, scientifically speaking.  Buchanan takes the basic sand pile metaphor at the heart of SOC and follows the trail of various researchers who have modeled systems like forest fires, earthquakes, extinctions, and the spread of scientific citations, all with similar algorithms.  He does an able job of summarizing the research, and if you're unfamiliar with the curious power laws that relate the size of these events to their frequency, this is a good place to begin.  The upshot is that certain systems appear to be scale invariant, meaning that there is no natural unit of size with which to describe them (just a restatement of the fact that the statistics of their variation obey a power law, that they have "long tails", that their standard deviation can be infinite, and so on).
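(To spell out what scale invariance means here, in my own notation rather than Buchanan's: if event sizes $s$ follow a power law,

$$ p(s) \propto s^{-\tau}, $$

then rescaling every size by a factor $\lambda$ only rescales the overall height,

$$ p(\lambda s) = \lambda^{-\tau}\, p(s), $$

so the distribution has the same shape at every magnification.  Compare an exponential $e^{-s/s_0}$, where the parameter $s_0$ fixes a characteristic size; a pure power law has no such parameter.)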

The first thing I find odd about his discussion is that this scale invariance simply isn't true, at least in the strict sense.  While it's empirically certain that variations in all of these systems obey a power law over several orders of magnitude, they do not go on scaling indefinitely.  Perhaps there is no smallest earthquake, but there is most certainly a largest one, constrained ultimately by the thermal energy of the planet.  And the same is true of all the systems he discusses.  Every chart he shows of a power law distribution of events has a nice straight line in the middle range, but inevitably tails off at some very large and very small magnitude.  In other words, real systems do have some characteristic scale, even if the range over which they appear not to is remarkably broad.  I'm not sure what to do with this observation; it doesn't invalidate any of the science, but I think it does pose some sort of problem for any attempt to reach general philosophical conclusions.
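(The literature has a standard way of writing down what those charts actually show -- a common parameterization, not something Buchanan dwells on -- namely a power law with a finite-size cutoff,

$$ p(s) \propto s^{-\tau}\, e^{-s/s_c}, $$

where the cutoff scale $s_c$ is set by the size of the system.  For $s \ll s_c$ you get the straight line on the log-log plot; beyond $s_c$ the probability collapses, which is exactly the tail-off at large magnitudes.)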

The second odd omission (post-Wolfram, at least) for what is really a philosophical work at heart is that he doesn't really emphasize the algorithmic nature of these models.  The original Per Bak et al. paper was simply meant to demonstrate that a simple cellular automaton can produce complex and unpredictable-seeming behavior that is nevertheless governed on a statistical level by a power law.  Doesn't this immediately lead you to want to know what other simple programs are out there that produce complex behavior?  Why focus on just this one algorithm?  Why not figure out what class of algorithms produces this type of behavior (short answer: those where local interactions can propagate through the whole system)?  Are there others that produce complex behavior whose statistics follow something other than a power law?
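To make the algorithmic point concrete, here's a minimal sketch of the Bak-Tang-Wiesenfeld sandpile in Python -- my own toy implementation with arbitrary parameters, not code from the book or the paper:

```python
import random
from collections import Counter

# Toy Bak-Tang-Wiesenfeld sandpile. Grid size, threshold, and grain
# count are arbitrary illustrative choices.
N = 50            # grid side length
THRESHOLD = 4     # a site topples once it holds this many grains
GRAINS = 200_000  # grains to drop

grid = [[0] * N for _ in range(N)]
avalanche_sizes = Counter()

for _ in range(GRAINS):
    # Drop a single grain on a random site.
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1

    # Relax: keep toppling unstable sites until the pile is quiet.
    # Grains pushed off the edge of the grid simply vanish.
    stack = [(x, y)]
    size = 0
    while stack:
        i, j = stack.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        size += 1  # avalanche size = number of topplings
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    stack.append((ni, nj))
    if size > 0:
        avalanche_sizes[size] += 1

# Frequency vs. size: on a log-log plot this comes out roughly as a
# straight line over a wide middle range, bending at the cutoff.
for s in sorted(avalanche_sizes)[:20]:
    print(s, avalanche_sizes[s])
```

A few dozen lines of purely local rules, and yet the avalanche-size statistics that come out are (over a wide middle range) power-law distributed.  That's the punchline of the original paper, and it's why asking which other simple programs behave this way seems like the natural next question.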

But never mind what was left out.  For me, the one new scientific idea in the book turns out to be the oldest part of the story, namely the discovery that critical phenomena fall into universality classes, a result tied to the renormalization group.  I had heard vaguely of this idea, perhaps because Laughlin mentioned it in passing in A Different Universe, but I hadn't understood that it applied to SOC models as well (obvious in retrospect, of course).  I still don't know the mathematics behind this stuff, but the basic idea is that you can actually prove that, near a phase transition, most everything you might say about the microscopic details of a system is irrelevant.  Since the whole idea of self-organized criticality is that many systems actually seem to spontaneously hold themselves near a critical point, this seems an important defense of the usefulness of these models.  If a system does naturally approach a phase transition, then you may legitimately expect to describe many aspects of it with a toy model that has just a few variables.
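(The textbook statement, for what it's worth -- this is standard critical-phenomena material, not something specific to Buchanan: near a continuous phase transition the correlation length diverges,

$$ \xi \sim |T - T_c|^{-\nu}, $$

and exponents like $\nu$ take identical values for every system in a given universality class, depending only on gross features like dimensionality and symmetry rather than on microscopic detail.  That is the precise sense in which the details become "irrelevant".)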

The real thrust of the book, however, is philosophical rather than scientific.  Buchanan spends just enough time explaining and justifying the research to make it plausible, and is really more interested in drawing out the consequences of seeing the world this way.  And it's here that I found his conclusions both appealing and strangely superficial.

I think the ideas are appealing for a couple of reasons.  First, they take us away from our typical, banally linear notion of cause and effect.

Step one in this is fairly simple, and you don't actually need any of these ideas to reach it, though they serve to reinforce the concept -- a lot of events we call causes are actually just triggers.  It's not useful to say that dropping the grain on the sand pile right there causes the avalanche, any more than it is to say that the assassination of Archduke Franz Ferdinand caused WWI.  If we have a spliff and some spare time, we can argue about whether it's "true" or not, but it's definitely not useful.

Step two is a bit more subtle, though I think it is actually entailed by step one, and this is to realize that big effects don't necessarily have big causes.  This is something he harps on repeatedly throughout the book, bludgeoning the point home.  Nothing about a big fire, a big avalanche, a big stock market crash, or a big extinction event is at all special in terms of its trigger.  Our habitual confusion of cause with trigger makes us imagine these events as special, and go looking for special causes for them.  In fact, these events aren't special in size, statistically speaking (the whole idea of a power law distribution being that there is no standard deviation that would mark an event as an outlier -- bigger events happen less frequently, but no size of event is "inconceivable"), and their causes aren't special either (the system is always poised at a critical state where any old grain of sand is capable of generating an avalanche of any size).
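(A quick way to see why no size counts as an outlier -- a standard calculation, not one from the book: take a Pareto density $p(x) = (\alpha - 1)\, x_{\min}^{\alpha-1}\, x^{-\alpha}$ for $x \ge x_{\min}$.  Its second moment is

$$ \mathbb{E}[x^2] \propto \int_{x_{\min}}^{\infty} x^{2-\alpha}\, dx, $$

which diverges whenever $\alpha \le 3$.  With no finite variance, there is no standard deviation against which a big event could be called anomalous.)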

So, basically, shit happens.

I find this conclusion appealing partly because it strips away our belief that we know why shit happens.  What we invent with all our laws and explanations and study of history are simply rules of thumb.  The underlying mechanism is a tremendous seething mass of complexity that only occasionally organizes itself enough to reveal its patterns, and then only if we squint just right.  Seeing the real world as the execution of a simple algorithm with surprising results does a lot to undo the Enlightenment humanism that has led us astray.

But of course, observing that shit happens is also rather superficial.  

