To Stewart Brand
Nassim Nicholas Taleb

From: Nassim Nicholas Taleb, New York
To: Stewart Brand, Sausalito

3 July 2013

Dear Stewart,

I would like to reply to Brian Eno’s important letter by proposing a methodology to deal with risks to our planet, and I chose you because of your Long Now mission.

First, let us put forward the Principle of Fragility As Nonlinear (Concave) Response as a central idea that touches just about everything.

1. PRINCIPLE OF FRAGILITY AS NONLINEAR (CONCAVE) RESPONSE

If I fall from a height of 10 meters I am injured more than 10 times as badly as if I fell from a height of 1 meter, and more than 1,000 times as badly as if I fell from a height of 1 centimeter; hence I am fragile. Every additional meter, up to the point of my destruction, hurts me more than the previous one. This nonlinear response is central to everything on planet earth, from objects to ideas to companies to technologies.

Another example: if I am hit with a big stone I will be harmed a lot more than if I were pelted serially with pebbles of the same total weight.

If you plot this response with harm on the vertical axis and event size on the horizontal, you will notice the plot curving inward, hence the “concave” shape, which in the next figure I compare to a linear response. We can already see that the fragile is harmed disproportionately more by a large event (a Black Swan) than by a moderate one.

Figure 1 – The nonlinear response compared to the linear.

The general principle is as follows:

Everything that is fragile and still in existence (that is, unbroken) will be harmed more by a certain stressor of intensity X than by k times a stressor of intensity X/k, up to the point of breaking.
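
One way to make the principle concrete (a minimal sketch, under the hedged assumption that the harm h(x) from a stressor of intensity x is a convex, accelerating function with h(0) = 0, the mirror image of the concave response in Figure 1): convexity alone gives, for any k > 1,

\[
h\!\left(\frac{X}{k}\right) \;\le\; \frac{1}{k}\,h(X)
\quad\Longrightarrow\quad
k\,h\!\left(\frac{X}{k}\right) \;\le\; h(X),
\]

so one stressor of intensity X does at least as much damage as k stressors of intensity X/k combined. With the purely illustrative choice h(x) = x², a fall from 10 meters gives h(10) = 100 units of harm, a hundred times that of a 1-meter fall and ten times the combined harm of ten separate 1-meter falls; the exponent is an assumption, only the convexity carries the argument.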

Why is it a general rule? This has something to do with the statistical structure of stressors, with small deviations much, much more frequent than large ones. Look at the coffee cup on the table: there are millions of recorded earthquakes every year. Simply, if the coffee cup were linearly sensitive to earthquakes, it would not exist at all, as it would long since have been broken by the frequent small tremors.

Anything linear in harm is already gone, and what is left are things that are nonlinear.

Now that we have this principle, let us apply it to life on earth. This is the basis of a non-naive Precautionary Principle that the philosopher Rupert Read and I are in the process of elaborating, with precise policy implications on the part of states and individuals.

Everything flows, by theorems, from the principle of nonlinear response.

2. PRECAUTIONARY RULES

Rule 1 – Size Effects. Everything you do to planet earth is disproportionately more harmful in large quantities than in small ones. Hence we need to split sources of harm as much as we can (provided they don’t interact). If we dropped our carbon emissions by, say, 20%, we might reduce the harm by more than 50%. Conversely, we might double our risk with an increase of just 10%.
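
A minimal numerical sketch of that asymmetry, assuming (purely for illustration) that harm grows as a steep power of the quantity emitted; the exponent is hypothetical, not fitted to any climate data, and only the convexity carries the point:

# Illustrative only: harm assumed to scale as quantity**alpha with alpha > 1 (convex harm).
# The exponent is a hypothetical choice, not an estimate from climate data.
def harm(quantity, alpha=7.0):
    return quantity ** alpha

baseline = harm(1.0)          # harm at the current level, normalized to 1
after_cut = harm(0.8)         # emissions reduced by 20%
after_rise = harm(1.1)        # emissions increased by 10%

print(f"after a 20% cut:  harm = {after_cut / baseline:.2f} x baseline")   # ~0.21
print(f"after a 10% rise: harm = {after_rise / baseline:.2f} x baseline")  # ~1.95

# Under this (assumed) steep convexity, a 20% cut removes far more than half the harm,
# while a 10% increase roughly doubles it.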

It is wrong to discuss “good” or “bad” without assigning a certain quantity to it. Most things are harmless in some small quantity and harmful in larger ones.

Because of “globalization” and the uniformization of tastes we now concentrate our consumption on the same few items, say, tuna and wheat, whereas ancient populations were more opportunistic and engaged in “cycling”, picking up what was overabundant, so to speak.

Rule 2 – Errors. What is fragile dislikes the “disorder cluster” beyond a point, which includes volatility, variability, error, time, randomness, and stressors (The “Fragility” Theorem).

This rule means that we can —and should— treat errors as random variables. And we can treat what we don’t know —including potential threats— as random variables as well. We live in a world of higher unpredictability than we tend to believe. We have never been able to predict our own errors, and things will not change any time soon. But we can consider types of errors within the framework presented here.

Now, for mathematical reasons (a mechanism called the “Lindy Effect”), linked to the relationship between time and fragility, mother nature is vastly “wiser”, so to speak, than humans, as time has a lot of value in detecting what is breakable and what is not. Time is also a bullshit detector. Nothing humans have introduced in modern times has made us unconditionally better without unpredictable side effects, ones that are usually detected with considerable delay (trans fats, steroids, tobacco, Thalidomide, etc.).

Rule 3 – Decentralizing Variations (the 1/N rule). Mother nature produces small, isolated, generally independent variations (technically belonging to the thin-tailed category, or “Mediocristan”) and humans produce fewer but larger ones (technically, the fat-tailed category, or “Extremistan”). In other words, nature is a sum of micro variations (with, on occasion, larger ones); human systems tend to create macro shocks.

By a statistical argument, had nature not produced thin-tailed variations, we would not be here today. A single one of those trillions, perhaps trillions of trillions, of variations would have terminated life on the planet.

The next two figures show the difference between the two separate statistical properties.

Figure 2 – Tinkering Bottom-Up, Broad Design. Mother Nature: no single variation represents a large share of the sum of the total variations. Even occasional mass extinctions are a blip in the total variations.

Figure 3 – Top-Down, Concentrated Design. Human-made clustering of variations, where a single deviation will eventually dominate the sum.

Now apply the Principle of Fragility As Nonlinear (Concave) Response to Figures 2 and 3. As you can see a large deviation harms a lot more than the cumulative effect of small ones because of concavity.
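
A rough simulation of this contrast (a sketch only: a Gaussian stands in for the thin-tailed “Mediocristan” stream, a Pareto for the fat-tailed “Extremistan” one, both rescaled to the same total variation, and the squared-harm function is the same kind of illustrative convexity assumption as before):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Thin-tailed ("Mediocristan"): many small, independent variations.
thin = np.abs(rng.normal(0.0, 1.0, n))

# Fat-tailed ("Extremistan"): the same number of shocks, drawn from a Pareto
# (tail exponent 1.1) and rescaled to the same total, so only the distribution
# of shock sizes differs between the two streams.
fat = rng.pareto(1.1, n) + 1.0
fat *= thin.sum() / fat.sum()

def harm(x):
    # Hypothetical convex harm: damage grows as the square of the shock size.
    return x ** 2

for name, shocks in [("Mediocristan", thin), ("Extremistan", fat)]:
    print(f"{name:13s} largest shock = {shocks.max() / shocks.sum():.2%} of total variation, "
          f"convex harm = {harm(shocks).sum():,.0f}")

# Typically the largest thin-tailed shock is a negligible fraction of the total,
# while a single fat-tailed shock accounts for a large share of it, and the convex
# harm of the fat-tailed stream is orders of magnitude larger despite equal totals.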

This, in a nutshell, explains why a decentralized system is more effective than one that is command-and-control and bureaucratic in style: errors remain decentralized and do not spread. It also explains why large corporations are problematic, particularly when powerful enough to lobby their way into state support.

This method is called the 1/N rule of maximal diversification of the sources of problems, a general rule I apply when confronting decisions in fat-tailed domains.
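
In the same hedged notation as before (a convex harm function h with h(0) = 0), the 1/N rule is just the earlier inequality with k = N: splitting a total exposure X across N independent, non-interacting sources bounds the combined harm by

\[
N\,h\!\left(\frac{X}{N}\right) \;\le\; h(X),
\]

and the combined harm keeps falling as N grows (for the illustrative h(x) = x^a with a > 1 it falls like N^{1-a}), which is why the diversification should be pushed as far as practicable.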

Rule 4 – Nature and Evidence. Nature is a better statistician than humans, having produced more than trillions of “errors” or variations without blowing up; it is a much better risk manager (thanks to the Lindy effect). What people call the “naturalistic fallacy” applies to the moral domain, not to the statistical or risk domains. Nature is certainly not optimal, but it has trillions of times the sample evidence of humans, and it is still around. It is a matter of a long, multidimensional track record versus a short, low-dimensional one.

In a complex system it is impossible to see the consequences of a positive action (from the Bar-Yam theorem), so one needs, like nature, to keep errors isolated and thin-tailed.

Implication 1 (Burden of Evidence). The burden of evidence is not on nature but on humans disrupting anything top-down to prove their errors don’t spread and don’t carry consequences. Absence of evidence is vastly more nonlinear than evidence of absence. So if someone asks “do you have evidence that I am harming the planet?”, ignore him: he should be the one producing evidence, not you. It is shocking how people can put the burden of evidence the wrong way.

Implication 2 (Via Negativa). If we can’t predict the effects of a positive action (adding something new), we can predict the effect of removing a substance that has not been historically part of the system (removal of smoking, carbon pollution, carbs from diets).

3. POLICY IMPLICATIONS

This tool of analysis is more robust than current climate modeling, as it is anticipatory, not backward fitting. The policy implications are:

Genetically Modified Organisms, GMOs. Top-down modifications to the system (through GMOs) are categorically and statistically different from bottom-up ones (regular farming, progressive tinkering with crops, etc.). To borrow from Rupert Read, there is no comparison between the tinkering of selective breeding and the top-down engineering of taking a gene from a fish and putting it into a tomato. Saying that such a product is natural misses the statistical process by which things become “natural”.

What people miss is that the modification of crops impacts everyone and exports the error from the local to the global. I do not wish to pay, or have my descendants pay, for errors by executives of Monsanto. We should apply the precautionary principle there, our non-naive version, simply because we would discover errors only after considerable damage.

Nuclear. In large quantities we should worry about an unseen risk from nuclear energy. In small quantities it may be OK; how small, we should determine, making sure threats never cease to be local. Keep in mind that small mistakes with the storage of nuclear waste are compounded by the length of time the material stays around. The same with fossil fuels. The same with other sources of pollution.

But certainly not GMOs, because their risk is not local. Invoking the risk of “famine” is a poor strategy, no different from urging people to play Russian roulette in order to get out of poverty. And calling the GMO approach “scientific” betrays a very poor, indeed warped, understanding of probabilistic payoffs and risk management.

The general idea is that we should limit pollution to small, very small sources, and multiply them even if the “scientists” promoting them deem any of them safe.

**********

There is a class of irreversible systemic risks that show up too late, which I do not believe are worth bearing. Further, these tend to harm people other than those who profit from them. So here is my closing quandary.

The problem of execution: so far we’ve outlined a policy, not how to implement it. Now, as a localist fearful of the centralized top-down state, I wish to live in a society that functions with statistical properties similar to nature’s, with small, thin-tailed, non-spreading mistakes: an environment in which the so-called “wisdom of crowds” works well and state intervention is limited to the enforcement of laws (and of contracts).

Indeed, we should worry about the lobby-infested state, given the historical tendency of bureaucrats to produce macro harm (wars, disastrous farming policies, crop subsidies encouraging the spread of corn syrup, etc.). But there exists an environment that is not quite that of the “wisdom of crowds”, in which spontaneous corrections are not possible and legal liabilities are difficult to identify. I’ve discussed this in my book Antifragile: some people have an asymmetric payoff at the expense of society, keeping the profits and transferring the harm to others.

In general, the solution is to move from regulation to penalties, by imposing skin-in-the-game-style methods to penalize those who play with our collective safety, no different from our treatment of terrorist threats and dangers to our security. But in the presence of systemic, branching-out consequences, the solution may be to rely on the state to ban harm to citizens (via negativa style), in areas where legal liabilities may not be obvious and easy to track, particularly harm hundreds of years into the future. For the place of the state is not to get distracted trying to promote things and concentrate errors, but to protect our safety. It is hard to understand how we can live in a world where minor risks, say marijuana or other drugs, are banned by states, but systemic threats such as those posed by GMOs are encouraged by them. What is proposed here is a mechanism of subsidiarity: the only function of the state is to do the things that cannot be solved otherwise. But then, it should do them well.

**********

I thank Brian Eno for the letter and for making me aware of all these difficulties. I hope that the principle of fragility helps you, Stewart, in your noble mission to ensure longevity for the planet and the human race. We are not that many Extremistan-style mistakes away from extinction. I therefore sign this letter by adopting your style of adding a 0 to the calendar date:

Nassim Nicholas Taleb,
July 3, 02013

References:

Bar-Yam, Y., 1997, Dynamics of Complex Systems, Westview Press, p. 752.

Taleb, N. N., 2012, Antifragile: Things That Gain from Disorder, Penguin and Random House.

Taleb, N. N., and Douady, R., 2012, “Mathematical Definition, Mapping, and Detection of (Anti)Fragility”, in press, Quantitative Finance. Preprint: http://arxiv.org/abs/1208.1189

With thanks to William Goodlad.

© Nassim Nicholas Taleb, 2013

Nassim Nicholas Taleb is a modern philosopher and former trader, currently Distinguished Professor of Risk Engineering at New York University. With a polymathic command of subjects ranging from mathematics to ancient history, he also speaks many languages. He is the author of Fooled by Randomness, Antifragile and The Black Swan, an international bestseller that has become an intellectual and cultural touchstone and has been published in 33 languages.
www.fooledbyrandomness.com

Stewart Brand is co-founder and president of The Long Now Foundation and co-founder of Global Business Network. He created and edited the Whole Earth Catalog (National Book Award) and co-founded the Hackers Conference and THE WELL. His books include The Clock of the Long Now, How Buildings Learn and The Media Lab. His most recent book, Whole Earth Discipline, is published by Viking in the US and Atlantic in the UK. He graduated in Biology from Stanford and served as an Infantry officer. sb.longnow.org

Brian Eno is a composer, producer and visual artist. A founding member of Roxy Music in the 01970s, his solo albums and collaborative musical compositions with John Cale, Robert Fripp, David Byrne, Jon Hassell and David Bowie have been in circulation world-wide over the last 30 years. Eno has also been involved in the design and production of audio-visual gallery installations since 01978 and is a board member of the disarmament group BASIC (British American Security Information Council) and the environmental NGO ClientEarth. Recently he has produced (with Peter Chilvers) Bloom, a generative music piece – and one of the most successful musical apps for the iPhone – and Scape, a new form of album which offers users deep access to its musical elements. www.enoshop.co.uk


