Tuesday, April 9, 2019

DRAFT: The Second Coming of Capitalism

So I read two interesting economics books recently: The Second Machine Age by Brynjolfsson and McAfee, and the now-so-famous-it's-infamous Capital in the 21st Century, by socialism's newest bad boy Thomas Piketty.  Unless you've been living under a rock for the last six months, I'll assume that you're as familiar with the latter as the rest of the Twitterati.  The former, while listed by Amazon as a best-seller, seems to have gotten less attention and perhaps requires some introduction.

Brynjolfsson and McAfee are a couple of (only modestly, from the looks of the dust jacket) bearded professor types from MIT.  There's probably some more biographical information at the beginning of this book talk they gave, if you're into that stuff and are the type who prefers the fast and loose shoddiness of the public performance of economics.  At any rate, they wrote an interesting book in the techno-optimistic genre, arguing that we are on the cusp of a new machine age, driven by artificial intelligence, that will be as consequential as the first and second industrial revolutions (#1 being basically the steam engine, and #2 being electricity, internal combustion, chemicals and everything else, according to Vaclav Smil).  While optimistic, however, the book is not utopian, and they also try to figure out what all this new technology will mean for employment and equality.  You might see the whole thing as an expansion of the could-be-apocryphal exchange between Henry Ford and the head of the UAW:

Ford -- "Look at this great new machine, it replaces 5 workers and never misses a shift.  How are you going to get that thing to pay union dues, eh?" 

Head of UAW -- "Very impressive Mr. Ford.  But how are you going to get it to buy your cars?"

My new write-a-review-of-everything-I-read project got a little derailed when I started reading Piketty before I had written something about B. and McA.  But sometimes these little setbacks are just what we need to take a giant step forward.  The books make a natural complement in my mind because they offer related meditations on what we're coming to see as the big question of 21st century capitalism -- what sort of relation will there be between growth and inequality?

So here goes a joint review of the next 100 years.

...

Shit happens.  

...





DRAFT: Ubiquity

I recently read Mark Buchanan's Ubiquity.  It had some interesting ideas, though it was certainly one of those books that would be better confined to the dust jacket (aside: why is it that we complain of people's shortening attention spans and continual content hopping when in fact much of the problem is the over-production of filler content -- many books could easily be compressed into the space of a longish essay; they only get published as books because we have a poor system for monetizing essays; is it any wonder then that we read them very quickly, trying to extract the nugget from the inevitable cruft in the most efficient way we can?  We read blogs this way too, again because it doesn't pay to edit them -- to wit, did you really just read this parenthesis?).

Anyhow, the book is mostly a pop science restatement of the idea of self-organized criticality as originally expounded by Per Bak,  and if you are familiar with his work there's little new ground here, scientifically speaking.  Buchanan takes the basic sand pile metaphor at the heart of SOC and follows the trail of various folks who have modeled systems like forest fires, earthquakes, extinctions, and the spread of scientific citation, all with similar algorithms.  He does an able job of summarizing the research, and if you're unfamiliar with the curious power laws that relate the size of these events to their frequency, this is a good place to begin.  The upshot is that certain systems appear to be scale invariant, meaning that there is no natural unit of size with which to describe them (just a restatement of the fact that the statistics of their variation obey a power law, that they have "long tails", that their standard deviation is infinite, etc ...). 
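A quick gloss on that "infinite standard deviation" remark (mine, not Buchanan's): if event sizes $s$ follow a pure power law $p(s) = C s^{-\alpha}$ above some minimum size $s_{\min}$, the second moment is

$$\langle s^2 \rangle = C \int_{s_{\min}}^{\infty} s^{2-\alpha} \, ds,$$

which diverges whenever $\alpha \le 3$ (and the mean itself diverges for $\alpha \le 2$).  The exponents typically quoted for these systems sit in that range, which is all "no natural unit of size" really means.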

The first thing that I find odd about his discussion is that scale invariance, taken literally, simply isn't true.  While it's empirically certain that variations in all of these systems obey a power law over several orders of magnitude, they do not go on to scale indefinitely.  Perhaps there is no smallest earthquake, but there is most certainly a largest one, ultimately constrained by the thermal energy of the planet.  And the same is true of all the systems he discusses.  Every chart he shows of a power law distribution of events has a nice straight line in the middle range, but inevitably tails off at some very large and very small magnitude/probability.  In other words, real systems do have some characteristic scale, even if the range over which they appear not to is remarkably broad.  I'm not sure what to do with this observation; it doesn't invalidate any of the science, but I think it does pose some sort of problem for any attempt to reach general philosophical conclusions.

The second odd omission (post-Wolfram, at least) for what is really a philosophical work at heart is that he doesn't really emphasize the algorithmic nature of these models.  The original Per Bak et al. paper was simply meant to demonstrate that a simple cellular automaton can produce complex and unpredictable-seeming behavior that is nevertheless governed on a statistical level by a power law.  Doesn't this immediately lead you to want to know what other simple programs are out there that produce complex behavior?  Why focus on just this one algorithm?  Why not figure out what class of algorithms produce this type of behavior (short answer: those where local interactions can propagate through the whole system)?  Are there others that produce complex behavior whose statistics follow something other than a power law?
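For the curious, here's a minimal sketch of the sandpile automaton at the heart of all this -- my own toy rendering of the Bak-Tang-Wiesenfeld model, not code from the book.  Drop grains on a grid, topple any site holding four or more, and count the topplings each drop triggers:

```python
import random
from collections import Counter

# Toy Bak-Tang-Wiesenfeld sandpile (my own sketch, not from the book).
N = 50                        # N x N grid; grains falling off the edge are lost
THRESHOLD = 4                 # a site topples when it holds 4 or more grains
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random site; return the avalanche size
    (the number of topplings it triggers)."""
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    topplings = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= THRESHOLD:
            grid[i][j] -= 4   # shed one grain to each neighbor
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= THRESHOLD:
                        unstable.append((ni, nj))
    return topplings

sizes = Counter()
for _ in range(200_000):
    sizes[drop_grain()] += 1

# Crude check of the power law: on log-log axes, size vs. count
# should come out roughly straight over a couple of decades.
for s in sorted(sizes):
    if s > 0:
        print(s, sizes[s])
```

Run long enough, the pile organizes itself to the critical state and the size/count plot straightens out on log-log axes -- with, notably, a cutoff at the system size, which is exactly the finite-size caveat above.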

But never mind what was left out.  For me, the one new scientific idea in the book turns out to be the oldest part of the story, namely the discovery that critical phenomena have certain universality classes related to the renormalization group.  I had heard vaguely of this idea, perhaps because Laughlin mentioned it in passing in A Different Universe, but I hadn't understood that it applied as well to SOC models (obvious in retrospect of course).  I still don't know the mathematics behind this stuff, but the basic idea is that you can actually prove that, near a phase transition, most everything you might say about the microscopic details of a system is irrelevant.  Since the whole idea of self-organized criticality is that many systems actually seem to spontaneously hold themselves near a critical point, this seems an important defense of the usefulness of these models.  If a system does naturally approach a phase transition, then you may legitimately expect to describe many aspects of it with a toy model that has just a few variables.

The real thrust of the book, however, is philosophical rather than scientific.  Buchanan spends only enough time explaining and justifying the research to make it plausible, and is really more interested in drawing out the consequences of seeing the world in this way.  And it's here that I found his conclusions both appealing and strangely superficial.

I think the ideas are appealing for a couple of reasons.  First, they take us away from our typical banally linear notion of cause and effect.  

Step one in this is fairly simple, and you don't actually need any of these ideas to reach it, though they serve to reinforce the concept -- a lot of events we call causes are actually just triggers.  It's not useful to say that dropping the grain on the sand pile right there causes the avalanche any more than it is to say that the assassination of Archduke Ferdinand caused WWI.  If we have a spliff and some spare time, we can argue about whether it's "true" or not, but it's definitely not useful.

Step two is a bit more subtle, though I think it is actually entailed in step one, and this is to realize that big effects don't necessarily have big causes.  This is something he harps on repeatedly throughout the book, bludgeoning the point home.  Nothing about a big fire, a big avalanche, a big stock market crash or a big extinction event is at all special in terms of its trigger.  Our typical confusion between cause and trigger makes us imagine these events as special, and look for special causes for them.  In fact, these events aren't special in size, statistically speaking (the whole idea of a power law distribution being that there is no standard deviation that would mark an event as an outlier -- bigger events happen less frequently, but no size of event is "inconceivable"), and their causes aren't special either (the system is always poised at a critical state where any old grain of sand is capable of generating an avalanche of any size).
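To put numbers on "no size of event is inconceivable" (my illustration, not his), compare how fast the tails of a familiar bell curve and a power law die off; here a Pareto distribution with alpha = 2 stands in for the power law:

```python
import random

# Tail comparison, Gaussian vs. power law (illustrative, not from the
# book).  paretovariate(2) samples P(X > x) = x**-2 for x >= 1.
random.seed(0)
n = 1_000_000
gauss = [random.gauss(0, 1) for _ in range(n)]
pareto = [random.paretovariate(2) for _ in range(n)]

for k in (2, 5, 10, 50):
    g = sum(x > k for x in gauss) / n
    p = sum(x > k for x in pareto) / n
    print(f"P(X > {k}):  gaussian ~ {g:.1e}   power law ~ {p:.1e}")
```

For the Gaussian, an event ten "typical sizes" out essentially never shows up in a million draws; for the power law it's routine.  That asymmetry is the whole point: nothing marks the monster events as a separate species.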

So, basically, shit happens.

I find this conclusion appealing partly because it strips away our belief that we know why shit happens.  What we invent with all our laws and explanations and study of history are simply rules of thumb.  The underlying mechanism, though, is a tremendous seething mass of complexity that only occasionally organizes itself enough to reveal its patterns if we squint just right. Seeing the real world as the execution of a simple algorithm with surprising results does a lot to undo the enlightenment humanism that has led us astray.   

But of course, observing that shit happens is also rather superficial.  


DRAFT: When did the Singularity happen?

The quote in that last post was from a book called Accelerando (Singularity) by Charlie Stross.  As you might imagine from the title, the book is about the moment when humans cease to be the most intelligent life form on the planet.  Or, that's the short version at least.  Because the more you dig into this idea (which I admit to being perhaps unhealthily obsessed with) the longer it gets.  Maybe the easiest way to see this is to ask yourself how exactly you will know when the singularity has happened.  It sounds like it would be so obvious, right?  I mean, it's a singularity after all; who is going to remain unaware that they've just fallen into a black hole?

Our culture, being deeply Christian at base, imagines everything in terms of apocalypse.  We have spent at least 2000 years now scared stiff that the world is going to end because we are sinners.  Pop culture's treatment of the singularity is no different; it's just the Rapture for nerds.  The apocalypse, you'll note, never happens gradually.  The end of times might sneak up on you, or at least on those sinners who will be swept away by it, but when it finally happens there will be fire and brimstone type disasters accompanied by trumpets -- you're gonna fuckin' know it. You're not going to have to ask your neighbor whether that's the end of the fireworks display, and do you think they'll do an encore?

Now, it doesn't take a whole lot of philosophical or scientific reflection to realize that the Hollywood version of the singularity makes about as much sense as their version of Catholicism (aka Scientology).  But realizing that the Singularity won't have the obvious climax of a Tom Cruise film is only the first step in a long chain that unravels the entire concept.  If we define the Singularity as the moment when some new "artificial" intelligence that operates with greater speed arises on earth, then the Singularity has already happened, and is happening, and will continue to happen.  Life, humans, corporations, robots, maybe bacteriophages, all qualify.  The Singularity is actually a whole set of nested singularities -- so many limits or phase transitions that divide one type of intelligence from another.

DRAFT: Olson

PREFACE: I'm cleaning out drafts of various posts that I have sitting in gmail because, let's be honest, these are never going to get finished now that I'm writing FPiPE.  Accordingly, your mileage may vary.



So I just finished reading my favorite economist Mancur Olson's very first book The Logic of Collective Action (1965). This completes my reverse chronological reading of his books that began with Power and Prosperity (2000) and The Rise and Decline of Nations (1982).  I would highly recommend any of them individually; together they constitute one long train of very general thought about how groups organize and pursue their collective advantage. 

The Logic of Collective Action: Public Goods and the Theory of Groups

People usually assume that a group of people with a common interest will naturally organize themselves to collectively pursue that interest.  However, if the action of the group needs to be explained by the rational action of the individuals who compose it, it turns out that even a group with a clear common goal and a clear consensus about how to achieve it will NOT, in fact, always spontaneously organize for their collective benefit.  Often, in fact, they won't manage to get organized at all, and even if they do, they will tend to achieve an outcome for the group as a whole that is much worse than could be achieved if they were able to act as one unit.

Olson demonstrates this failure to spontaneously organize simply by examining the cost-benefit calculation faced by an individual actor in a large group that has a collective interest in some common good.  By hypothesis we have a group of people with unanimous agreement on the value of a public, collective good.  In addition, the benefit of providing this group good exceeds the cost -- that is, the benefit to the group taken as a whole exceeds the cost taken as a whole.  The group has a clear common incentive to provide this good for themselves.  However, if the good is truly public and non-excludable, that is, if it must be provided either for everyone at the same time or for no one at all, then each individual has an incentive to free-ride and let his neighbor front the cost of providing the good while he sits back and enjoys the inevitable benefits.  In a large group, a single individual's share of the benefits of a collective good will be very small, and their contribution to the cost will not in itself be enough to pay for the good.  If each individual operates in this rational cost-benefit maximizing way, nobody will pay their share of the cost of the good, and the good will not be provided.
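A back-of-the-envelope version of that calculation, with numbers I've made up rather than taken from Olson:

```python
# Toy free-rider arithmetic (my numbers, not Olson's).
group_size = 1_000
benefit_each = 100                 # value of the public good to each member
total_cost = 50_000                # cost of providing it
share = total_cost / group_size    # a "fair share" of 50 per head

# The group ledger: clearly worth doing collectively.
print(group_size * benefit_each, ">", total_cost)   # 100000 > 50000

# The individual ledger, taking everyone else's behavior as fixed.
# If the good gets provided anyway, free riding nets you more...
print(benefit_each, "vs", benefit_each - share)     # 100 vs 50.0
# ...and if it doesn't, your 50 alone buys nothing, so keep it.
# Either way, not paying dominates -- for every single member.
```

The perverse part is that the logic holds for everyone simultaneously, so a good worth twice its cost to the group goes unprovided.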

You might think that Olson would link this analysis to the tragedy of the commons idea, or to the irrationality of voting, but in fact he begins the book by making a strict analogy between this situation and that of firms in a competitive industry trying to collude to collectively lower output and hence raise prices.  Theoretically, all the widget makers might have an interest in reducing volumes and forcing up the price of widgets.  However, if there are many widget makers, the incentive for each individual widget maker is to maintain their volumes but still reap almost the full benefit of the higher price created by the others' reductions.  After all, the volume of an individual widget maker is not large enough by itself to change the market price significantly, but the cost of a given maker's volume reduction is borne entirely by that individual.  It's rational for individual widget makers to run full tilt even if they would all be better off if they could collectively agree to reduce output, which is of course why price moves toward marginal cost in these types of markets.
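The same arithmetic in widget form, again with stylized numbers of my own and a crudely linear price response:

```python
# Stylized collusion arithmetic (my numbers, not Olson's); assumes price
# moves linearly with total industry output.
n_firms = 100
output = 1_000                  # units per firm at full tilt
price_compete = 10.0
price_collude = 12.0            # price if all 100 firms cut output by 10%

revenue_compete = output * price_compete             # 10,000
revenue_collude = 0.9 * output * price_collude       # 10,800

# A lone defector keeps full volume; with 99 firms still cutting,
# the price slips only ~1/100th of the way back down.
price_defect = price_collude - (price_collude - price_compete) / n_firms
revenue_defect = output * price_defect               # 11,980

print(revenue_compete, revenue_collude, revenue_defect)
# Defecting beats colluding beats competing, so the cartel unravels.
```

Each widget maker faces exactly the free-rider calculation above, which is why cartels need enforcement mechanisms rather than goodwill.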

I think it pays to spend a little time letting that idea soak in.  Economists think that incentives matter.  But incentives for whom?  If we take individual incentives as the unquestioned atoms, as it were, of our economic system, then the behavior of any group will need to be explained by some mechanism that harnesses those incentives.  The group has to be created and held together, and for that you need some sort of mechanism.  Olson shows us that our often implicit assumption that groups will act just like big individuals is based on "anthropomorphising" the incentives we think drive individuals.