Antifragile book notes Part 2

This is the final part of my notes and reflections on the book Antifragile, by Nassim Taleb. Part one covered such things as skin in the game, the principal-agent problem, the fragility of optimisation and convexity effects, and the power of options over deterministic thinking.

Now we dive into the other topics in the book that I found interesting and/or applicable to my work: via negativa, the curse of size, the average versus the dispersion, and the burden of proof of the novel.

Via negativa

The “do something” fallacy is the mistaken belief that, in the face of a problem, doing something (anything) is always superior to doing nothing. It is closely related to the agency problem, where the agent (e.g., a doctor) must often be seen to be doing something in order to justify their position as an agent, even if premeditated inaction is in the best interest of the principal (e.g., the patient). The refrain, “don’t just stand there; do something!”, is common. The complement, “don’t just do things; stop intervening!”, much less so.

One of the most prominent illustrations of this in my work is asset investment planning, where the no-investment case is always assessed as the most risky, and any intervention (whether refurbishment or replacement of assets) is assumed to reduce risk. In general this is true, but there are nuances. What about the infant mortality of assets damaged during installation or carrying manufacturing defects? How is replacing an asset that has exceeded its expected life, but is otherwise operating just fine, superior to leaving it alone? What about fingertip maintenance, or stress tests that through operator error actually cause failure? The observer effect is a phenomenon not readily acknowledged in asset maintenance. It is worth remembering that the Chernobyl nuclear power station meltdown of 1986 was triggered during a drill, while the reactor was being operated outside its limits.
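A minimal sketch of that infant-mortality intuition, using a Weibull hazard with made-up parameters (nothing below comes from the book or from real asset data): with shape beta < 1 the failure rate falls with age, so a freshly installed replacement can briefly be riskier than the healthy old asset it displaced.

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) for a Weibull life model
    with shape beta and characteristic life eta (years)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

ages = np.array([0.1, 1.0, 5.0, 20.0])  # asset age in years

# Illustrative parameters only:
# beta < 1 -> falling hazard (infant mortality: install damage, defects)
# beta > 1 -> rising hazard (wear-out of an ageing asset)
new_unit = weibull_hazard(ages, beta=0.5, eta=30.0)
old_unit = weibull_hazard(ages, beta=3.0, eta=30.0)

for age, h_new, h_old in zip(ages, new_unit, old_unit):
    print(f"age {age:5.1f} y  new-unit hazard {h_new:.4f}  wear-out hazard {h_old:.4f}")
```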

Via negativa is the antithesis of the do-something fallacy. The best example Taleb uses is medicine, where removing something (like exposure to harmful levels of stress) is superior to adding something (like sleeping pills or anti-anxiety medication). The authors of another book, Cradle to Cradle, encountered this phenomenon when they were asked to make paint more sustainable. Instead of just hunting for like-for-like natural alternatives, they also started removing chemicals from the paint. Once they began simplifying the ingredient list they unpicked a tangled web of dependencies, where many chemicals were present only to stabilise the cocktail of other chemicals in the paint.

The curse of size

No government ever bailed out a grocer’s stall. Yet Small to Medium Enterprises (SMEs) accounted for around 52% of UK private-sector turnover in 2021 and almost 54% of private-sector employment (Merchant Savvy). By comparison, the UK finance sector contributed under 9% of economic output and employed just over 3% of all jobs, public and private (Commons Library).

The reader and Taleb intuitively know why banks are bailed out and SMEs are not: the banks are “too big to fail”. Something strange happens with size and risk: the failure of a large bank can cause contagion in the wider economy. Taleb opposed the building of a major supermarket in his neighbourhood for a similar reason: he objected to the community being subjected to the curse of size, vulnerable to the vicissitudes of one monopoly over local employment opportunities. From part one of this blog post we have already seen that the transfer of fragility means the curse of size is ultimately borne by society, not the corporation or the executive.

Planner’s fallacy in a connected world

So what does this mean for a consultancy? In project management, the Danish economic geographer Bent Flyvbjerg has shown that as projects grow, outcomes worsen and the cost of delays rises as a proportion of the total budget. Small projects have small errors that come out in the wash, but large projects are potential company killers. I have seen this for myself at work on projects across the whole scale range: from £20k training and workshop projects that take one person a month to complete, through to £2m projects involving teams of teams for over a year. Both inevitably run into difficulties, but on the small projects you somehow muddle through. On the large projects the issues grow to a size that has material knock-on effects on budgets and deadlines, with consequences for other projects too.

Some of this could be explained by the planner’s fallacy, a catch-all term for optimism and blindness to any future other than the happy path, where everything is done right first time with no delay. However, despite the sophistication of the modern world, we actually seem to be getting progressively worse at delivering large projects, not better. Examples in the book of historic mega projects delivered on time or ahead of schedule include the Empire State Building in New York and the Crystal Palace in London. The contrast is that those projects relied on a small pool of contractors using local labour and materials, whereas most modern projects depend on globally interconnected and “efficient” (read: lean, optimised, lacking in resilience) supply chains. For IT-related projects the outlook is even worse.

My preference is for many small projects rather than mega projects, for the reasons above and more besides. However, the book explains a clever nuance that I will be trying to apply going forward: a large project can often be broken down into smaller, discrete projects. Where this is possible I will push for small delivery teams (based in one location), individual budgets and isolated project plans, aiming to remove interdependency and reduce the risk of issues snowballing through an otherwise monolithic project.
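A toy Monte Carlo sketch of why splitting might help. The lognormal overrun model and every number below are assumptions for illustration, not Flyvbjerg’s data: if overruns are heavy-tailed but the sub-projects are genuinely independent, splitting thins the tail where the company killers live.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated programmes

# Assumed model: each independent unit of work has a heavy-tailed
# (lognormal) cost overrun factor. A monolith is one unit; a split
# programme is the budget-weighted mean of several isolated units.
def overrun_factor(parts: int, sigma: float = 1.0) -> np.ndarray:
    return rng.lognormal(mean=0.0, sigma=sigma, size=(N, parts)).mean(axis=1)

monolith = overrun_factor(parts=1)
split = overrun_factor(parts=10)

KILLER = 3.0  # call a 3x cost overrun a "company killer"
for name, x in [("monolith", monolith), ("10 sub-projects", split)]:
    print(f"{name:15s}  P95 overrun {np.quantile(x, 0.95):.2f}x"
          f"  P(blow-up > {KILLER:.0f}x) {np.mean(x > KILLER):.1%}")
```

The reduction only holds because the draws are independent: ten sub-projects that still share a critical path behave like one monolith.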

The average versus the dispersion

Antifragile makes this point with a humorous illustration involving a grandmother’s temperature. You are informed that your grandmother will spend the next two hours at an average temperature of 21 degrees Celsius. Sounds fantastic. However, the first hour will be spent at -18C and the second at 60C. The prognosis is almost certainly no grandmother, a funeral, and possibly an inheritance.

Individual people and things are inherently fragile: departures from the average expose them to harm. In other words,

never cross a river that is on average 4 feet deep.

This is an example of a convexity effect. An average value is not enough: one also needs to know the dispersion (the variance around that value) in order to judge the effect. I wonder why there is such fascination with reporting averages when, in so many cases, what matters more is the variance.
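A minimal sketch of that convexity effect, using the grandmother’s two hours and an assumed quadratic harm function (purely illustrative):

```python
import numpy as np

def harm(temp_c):
    # Assumed, illustrative harm function: damage grows with the
    # square of the deviation from a comfortable 21C (i.e. convex).
    return (temp_c - 21.0) ** 2

hourly_temps = np.array([-18.0, 60.0])  # one hour at each temperature

print(harm(hourly_temps.mean()))   # harm at the average temperature: 0.0
print(harm(hourly_temps).mean())   # average of the harms: 1521.0
# Jensen's inequality for a convex harm: mean(harm) >= harm(mean),
# so the average temperature alone tells you nothing about survival.
```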

I discovered an excellent example of this in the electricity sector, provided to me by someone working for a Danish electricity network operator. In many substations you will find 50 kV transformers used to step grid-level voltage down to the lower voltage used in the local network. How many customers the substation serves, and the electricity demanded at different times of day, govern how hot the transformers run. The key to understanding how long each transformer lasts is not the average load or the average temperature: it is the time spent above the rated load. The time spent operating outside the rated limit, and the amount of load in excess of that limit, are directly related to the thermal degradation and harm experienced by the transformer.
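A minimal sketch of that metric. The half-hourly load profile and the per-unit rating below are fabricated for illustration, not taken from the Danish operator:

```python
import numpy as np

RATED_LOAD = 1.0  # per-unit rating of the transformer (assumed)

# Fabricated half-hourly load profile for one day, in per-unit terms
rng = np.random.default_rng(7)
load = 0.75 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, 48)) + rng.normal(0, 0.05, 48)

excess = np.clip(load - RATED_LOAD, 0.0, None)  # load above rating, else 0

hours_over = 0.5 * np.count_nonzero(excess)  # half-hour samples -> hours
exposure = 0.5 * excess.sum()                # per-unit-hours above rating

print(f"mean load {load.mean():.2f} pu (comfortably under the rating)")
print(f"time over rating {hours_over:.1f} h, excess exposure {exposure:.3f} pu-h")
```

The average sits below the rating, yet the transformer still accumulates time and load above it, and that is what the degradation actually tracks.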

Burden of proof of the novel

Modern society always seems to be chasing the new thing. Maybe it’s a continuation of the do-something fallacy: to change or update is seen as improvement. However, in the same way that Taleb cautions against medical intervention unless the stakes are high (like a car crash or cancer), he also cautions against the introduction of the novel and/or unnatural unless the stakes are similarly high. The author uses the example of Thalidomide, first sold over the counter in West Germany in 1957 to treat morning sickness in expecting mothers, and later discovered to cause birth defects in the children born to those mothers. Given the benefits were relatively low, the burden of proof for introducing a new chemical to the body should have been high. Today its use is reserved for conditions such as certain cancers, where its benefit far outweighs its risks.

The same can be said for tobacco. For years the argument ran that the link between smoking and cancer was unproven. A similar argument was made about the link between industrial emissions and climate change (especially given the short time frame of global temperature observation). In both cases the burden of proof was placed on the wrong side: it was not incumbent on the established system (the human body or the planet) to prove unequivocally that it was being harmed; it was the responsibility of the novel addition to that stable system to prove unequivocally that it would do no harm.

Time filters wisdom

In the end, time filters all wisdom, winnowing the ideas with proven results from the chaff of fine-sounding arguments. Ideas that survive one additional year have an increased likelihood of surviving into the future (the Lindy effect; a small sketch of this follows the quote below). To summarise the book,

suckers try to win arguments, non-suckers try to win.
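On the Lindy effect mentioned above, here is a minimal sketch under one common modelling assumption: a Pareto (power-law) survival curve, with a purely illustrative exponent.

```python
# Assumed Pareto (power-law) survival for ideas: S(t) = (t0 / t) ** alpha.
# Under this assumption the expected remaining life of an idea grows in
# proportion to the age already reached: E[T - a | T > a] = a / (alpha - 1).

def expected_remaining_life(age: float, alpha: float = 2.0) -> float:
    # valid for alpha > 1; alpha = 2.0 is an illustrative choice
    return age / (alpha - 1)

for age in (1, 10, 100):  # years an idea has already survived
    print(f"survived {age:>3} years -> expect roughly {expected_remaining_life(age):.0f} more")
```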

So in the spirit of the book I will be aiming to put the ideas into practice, increasing my reliance on the ones that work. Hopefully they will improve the way I do things, maybe convincing a few people along the way through their success rather than their theory.

Richard Davey
Group Leader, Data & Decision Science

My interests include earth science, numerical modelling and problem solving through optimisation.
