This is the first in a series of three blog posts about uncertainty by Andy Stirling. The second post is here and the third post is here.
Uncertainty is not a condition out there in the world. It is a state of knowledge – deeply embedded and shaped in society. The difference may seem abstract. But it could hardly be of more profound or practical importance. And this has arguably never been more true than in today’s turbulent world.
One key upshot lies in what may confidently be aimed at (let alone claimed) in the understanding of uncertainty. All that can reliably be known in any situation is the uncertainty under a particular view – not the actual uncertainty in the situation itself. Fudging this contrast is dangerously misleading.
For instance in the nuclear field, experts often express confidence that uncertainties about safety can be ‘objectively’ quantified, using probabilities to express them as ‘risk’.
Elaborate models, used for many years in nuclear regulation around the world, thereby suggested that the risk of a given reactor suffering a large release of radioactivity is on average lower than once every 1,000,000 reactor-years. Since then, experience has taught many lessons about flaws and gaps in these calculations. Large-scale releases of radioactivity occurred in a nuclear meltdown at Chernobyl in 1986 and from three more at Fukushima in 2011. This real-world empirical record so far shows the actual global average risk of a meltdown with radioactive release to be worse than once every 4,000 reactor-years.
There is a big difference between 1/1,000,000 and 1/4,000. So there clearly seems to be something seriously adrift with risk modelling in this case. With roughly 400 reactors in the world, this empirical evidence suggests a crude expectation that a serious nuclear accident will cause a major release of radioactivity somewhere in the world, on average about once every ten years.
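The back-of-envelope arithmetic behind these figures can be checked in a few lines. This is only an illustrative sketch using the round numbers quoted above (roughly 400 reactors worldwide, an observed rate of about one meltdown-with-release per 4,000 reactor-years, and a modelled rate of one per 1,000,000 reactor-years); none of these inputs is a precise estimate.

```python
# Illustrative check of the figures quoted in the post (assumed round numbers).
reactors = 400                  # approximate world reactor fleet
empirical_rate = 1 / 4_000      # observed accidents per reactor-year
modelled_rate = 1 / 1_000_000   # modelled accidents per reactor-year

# Fleet-wide expected frequency under the empirical rate.
expected_per_year = reactors * empirical_rate   # accidents per year, fleet-wide
mean_interval = 1 / expected_per_year           # mean years between accidents

print(f"Model vs experience discrepancy: {empirical_rate / modelled_rate:.0f}x")
print(f"Expected interval between major releases: {mean_interval:.0f} years")
```

Running this gives a discrepancy factor of 250 between the modelled and observed rates, and a mean interval of about ten years between major releases – matching the crude expectation stated above.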
So we come to the crucial distinction between ‘uncertainty’ and ‘risk’. A risk is what results from a structured calculation that must necessarily reflect a particular view. An ‘uncertainty’ is what these risk calculations might leave out. And when the calculations are influenced by specialist interests (as is often the case), uncertainties and risks can be expected to differ especially strongly. For nuclear accidents, then (as in other cases), the actual uncertainties remain quite literally incalculable.
Of course, neither risk nor uncertainty is necessarily good or bad. Either can be about possible benefits as much as harms. The real difference lies in the apparent tractability. For even in the technical terminologies that underpin expert modelling, it has long been well recognised that ‘risk’ numbers are only valid where there is confidence that probabilities can be fully determined. Accordingly, the technical term ‘uncertainty’ has also been used for a century or so, to refer to a situation where probabilities are not fully known.
The trouble is (as in the nuclear example), that there are many temptations to reduce ‘uncertainty’ to ‘risk’. In academic and policy debates worldwide, so-called ‘uncertainty analysis’ defies both everyday and technical meanings of uncertainty, by insisting (as ‘necessary simplification’) on confident assignments or aggregations of probabilities. Whoever is thereby most successful in asserting their own ‘risk’ numbers gets to define an apparently ‘objective’ view of uncertainty. But the basis on which the numbers are calculated will always contain a hidden subjective element.
So the aim in emphasising this distinction between risk and uncertainty is not to insist on words. The point is rather that – whatever it is called – there are obvious practical differences between situations where probabilities are notionally known, and those where they are not. Where the word ‘uncertainty’ is used for something that is expressed as a simple numerical ‘risk’, the question arises as to whether there is even a word any more to express a lack of confidence in these numbers.
Again and again, policy debates across different areas show the most crucial dilemmas are broader and deeper than risk alone. Yet what gets asserted most loudly are simple risk numbers. Political crises, financial crashes, climate change, GM foods, new diseases and many natural disasters all – like nuclear accidents – defy neatly simple probabilities. Narrowing these uncertainties down to ‘risk’ seriously overstates the confidence justified in any resulting asserted ‘way forward’.
But isn’t risk quantification simply trying to be practical – making the best of a tricky situation? Unfortunately, this is not necessarily always the case. The difference between risk and uncertainty falls on a tectonic fault-line in contemporary politics. And efforts to obscure the divide with new terminologies can cause a major part of the resulting friction.
Across technology, health, environment and national and global economies, loud voices on all sides vie to express messily unknown subjective uncertainties as if these were neatly quantified objective risks. However they are seen, the stakes are very high. Huge forces are pressing towards a state of uncertainty-denial.
What all this means, is that the drive for ostensibly objective probabilities is not innocent. Even if inadvertently, it helps shape reassuring policy storylines. And to those interested in ‘business as usual’, the apparent authority and clarity of simple numbers can offer a precious sense of stability.
But – depending on how these risk calculations are performed – this aim to ‘keep it simple’ with apparently straightforward probabilities, conceals many crucial complexities. It can yield very different answers under contrasting assumptions. So there are strong incentives for interested parties to select assumptions that yield whatever is their most favoured ‘simple’ number.
* * *
Daunting though these problems are, the difficulties do not end here. For the complexities that are side-lined in the expedient simplifications of risk, go far beyond the deeper and broader dilemmas of uncertainty. Even less tractable challenges emerge of ambiguity and ignorance. And here, the scope for misleading reductions in risk assessment is even more serious.
As with the difference between risk and uncertainty, challenges of ambiguity and ignorance are not about external criticism. They are intrinsic to the logic of risk assessment itself. For risk calculations by definition require knowledge both of probabilities and possibilities. Uncertainty arises where there is a problem with probabilities. But knowledge can be problematic about possibilities too.
Ambiguity, then, concerns the obvious point that the possibilities for which probabilities are computed are also themselves far from self-evident. Contrasting meanings and values can lead to radically different categorisations and prioritisations of benefit or harm. Likewise issues of fairness are entirely excluded in the formal calculus of risk. And there are often crucial questions over what alternative options for action there might be, beyond those focused on in any given risk assessment.
In the same terms, ignorance arises where difficulties go beyond just the distinguishing, interpreting or prioritising of different possibilities and probabilities. Knowledge may also be seen as problematic in the ways possibilities are conceived in the first place. Crucial issues may be excluded or entirely unthought of. There may be ‘wilful blinkers’, which systematically exclude particular implications. Here there lurk the notorious ever-present prospects of ‘unknown unknowns’ and surprise.
Across all these dimensions, the problem grows around fixations merely with risk. Just as someone equipped only with a hammer is doomed to treat every problem as a nail, so the expediency of this tool can obscure a host of more nuanced methods and institutional cultures. These ‘Cinderella approaches’ might address uncertainty, ambiguity and ignorance in better ways. But the language of risk rules them out.
So in practice too, words can matter. How, then, can all this confusion be resisted? One starting point may be with the language itself. With the word ‘uncertainty’ routinely conflated with risk – and itself understating other conditions like ambiguity and ignorance – a use arises for a less loaded encompassing term: one that will resist more strongly these many spurious kinds of reduction and simplification.
Here, there has long been another word in the dictionary that offers usefully to help challenge these politically dangerous forces. The term ‘incertitude’ equally encompasses risk, uncertainty, ambiguity and ignorance – as well as all their many real-world permutations and entanglements. Terminology alone can achieve little. But, in a world where methods and institutions are under such powerful pressures, it can at least help guard against closures of imaginations themselves.
Either way, the bottom line across many different sectors around the world, is that prevailing risk-based methods and institutions address only a small part of the political realities of incertitude. There is always room for doubt over scope and meanings. So analysis and expertise alone can never solve the problem. And they are often vulnerable to inadvertent bias or manipulation.
In the end, the routine practice of reducing incertitude to risk is both hubristic and deceptive. If the full scope and depth of incertitude are to be properly tackled, there is a need for greater humility and vigilance.
Uncertainties can make it hard to plan ahead. But recognising them can help to reveal new questions and choices. What kinds of uncertainty are there, why do they matter for sustainability, and what ideas, approaches and methods can help us to respond to them?
Find out more about our theme for 2019 on our Uncertainty theme page.