Our civilization has the potential to be a cornucopia of health, knowledge and freedom. Scientific and technological advances have (to steal a phrase) significantly ‘levelled up’ our wealth, our safety from illness and our ability to appreciate the world around us.
Unfortunately, the same creative technological abilities include the potential to destroy. The creation of nuclear weapons and advances in artificial intelligence carry with them the threat of wiping out the human race altogether. Such developments are only part of a series of ‘existential risks’ that our successful scientific inquiry has made more likely, posing a serious threat to the very future of our species.
That is the major worry of philosopher Toby Ord. Working at the aptly named Future of Humanity Institute, Ord and a group of philosophers and scientists are researching how to ensure that the human race survives thousands of years into the future, so that it can maximise its potential.
Some may ask: why should we prioritise the long term? The world today is riddled with problems that need urgent solutions. ‘Longtermism’ doesn’t deny such issues; it simply holds that, compared with the potential destruction of our species, they carry relatively little weight.
The argument for longtermism runs as follows. Our species could survive for thousands of generations into the future. The Earth (provided some anthropogenic disaster doesn’t destroy it) should remain habitable for tens of millions of years to come. And if we continue improving our standard of living at its current rate, future people could enjoy lives far more prosperous than those of the 21st century. There is, in other words, an enormous amount of potential value in the future waiting to be realised.
The value available to today’s generation and the next is dwarfed by the sum of utility that could be achieved once you consider generations thousands of years into the future. Presuming that we want to maximise this good, it makes sense to focus our efforts on securing the long-term future of humanity.
Even if you don’t subscribe to this rather consequentialist approach, you can still hold a longtermist view. A virtue ethics approach might claim that prudence as a species is vital, and that looking out for future generations is a kind and thoughtful thing to do. Alternatively, human beings may well be among the only intelligent life in our universe, if not the only intelligent life there is. That cosmic significance is very difficult to overlook.
So, what are these threats to the continuity of our civilization? Toby Ord lays out the central scenarios to consider in his latest book, The Precipice. Although we shouldn’t discount natural risks, such as solar flares and comet collisions, the most pressing problems lie with anthropogenic and biological risks.
Nuclear warfare is a clear example: Ord recounts the many times nuclear weapons have come perilously close to being launched or detonated by accident. AI takeover and dystopian totalitarianism are also explored in detail. One risk particularly salient today is that of global pandemics, with biologically engineered diseases emerging as a potential weapon of choice in the modern military armoury.
Mapping out the probability of each scenario occurring, Ord puts the chance of an existential catastrophe in the next one hundred years from climate change or nuclear war at around 1 in 1,000 each, from engineered pandemics at 1 in 30, and from unaligned AI at 1 in 10. Taken together with the other risks he surveys, his overall estimate comes to roughly 1 in 6. “Given everything I know, I put the existential risk at around one in six: Russian roulette”.
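As a rough back-of-envelope illustration (my own sketch, not a calculation from The Precipice), the figures named above can be combined under a crude independence assumption, which shows that on their own they fall short of the headline 1 in 6:

```python
# Back-of-envelope check of the per-century figures quoted above (illustrative only).
risks = {
    "climate change": 1 / 1000,
    "nuclear war": 1 / 1000,
    "engineered pandemics": 1 / 30,
    "unaligned AI": 1 / 10,
}

# Chance that at least one occurs, crudely assuming the risks are independent.
p_none = 1.0
for p in risks.values():
    p_none *= 1 - p
combined = 1 - p_none

print(f"Combined under independence: {combined:.3f}")        # ≈ 0.132
print(f"Simple sum of the figures:  {sum(risks.values()):.3f}")  # ≈ 0.135
# Both come in below the headline 1 in 6 (≈ 0.167), which also covers
# the other risks Ord surveys.
```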
Longtermists often part ways with modern activists over the prioritisation of climate change. Many activists see it as the greatest threat to our civilization; longtermists tend to disagree.
Because irreversible climate change wouldn’t entirely destroy the human race, it isn’t seen as being as problematic as risks that could end humanity’s future altogether, although it would heavily reduce our capacity to solve other serious problems.
Philosopher Nick Bostrom has previously suggested that reducing the human population from 1% to 0% would be far worse than reducing it from 100% to 1%. The former one-percentage-point fall would deny any future generations the chance to realise their value, whereas the latter 99-percentage-point fall, despite its enormous direct losses, would still leave survivors from whom thousands of future generations could descend.
Ord, Bostrom and the Future of Humanity Institute are part of a constellation of actors who are taking longtermism very seriously. The Effective Altruism (EA) movement aims to maximise social impact, and longtermism is rising sharply up its agenda for doing good. Organisations such as DeepMind, the Future of Life Institute, 80,000 Hours and the Bill and Melinda Gates Foundation are leading the way in reshaping this approach to morality and constructing social institutions.
What are the world’s most pressing problems? For the EA community, global poverty and health need solving right away, but longtermism is now high up on their list. Maybe it should be for you too.