‘Longtermism’ helps the tech elite justify ruining the world


“It’s as if they wanted to build a car that goes fast enough to escape from its own exhaust.”

Those words were the takeaway from a meeting that Douglas Rushkoff, who describes himself as a “Marxist media theorist,” had with five hugely powerful people in tech who were looking to survive the impending climate catastrophe. Rushkoff’s account of the meeting, in his essay “Survival of the Richest,” reveals something sinister: the highest ranks of the tech elite know exactly what’s coming – but rather than stop it, they plan to save themselves from it with luxury underground bunkers and armed guards recruited from the Navy SEALs.

The people who are almost directly responsible for the biggest problems in the world today – the climate crisis, the erosion of democratic institutions, and the selling of people’s attention for profit – do not consider themselves responsible. More than that, accountability, or even solving today’s problems, is not a desirable goal for them. At least not according to “longtermism” – the philosophy that underlies much of technology’s trajectory and, if we are not careful, our own destruction as a species.

It is an idea which, due to its forward-looking scope, seems ambitious and futuristic at first glance. But it is one that envisions a future for only a few people – self-proclaimed representatives of humanity – at the expense of everyone else. And when billionaires start moving underground or soaring into space with the goal of colonizing other planets, they are not doing it for humanity as a whole; they are doing it for a humanity composed exclusively of their own ilk.

Stephen Hawking said just a few years ago that “we are at the most dangerous moment in the development of humanity.” Even though we do next to nothing about it, most agree that the situation looks grim, and we are already seeing the effects of climate change in countries that have done the least to cause it. But to longtermism, all of this is nothing more than a dent in humanity’s balance sheet. What makes the philosophy so dangerous is its ethical foundation, summed up by one of its early theorists, Nick Bostrom: “A non-existential catastrophe causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback.”


Bostrom’s work is warmly endorsed by tech giants who have the resources and ability not only to outrun all current global crises, but to irreversibly influence the direction of our species as a whole. There are two key concepts in Bostrom’s argument: potential and existential risk. Potential is what longtermists understand to be humanity’s capacity on a cosmic scale, a trillion years into the future. Our potential is as vast as the universe itself. An existential risk, according to longtermist ethics, is one that threatens to annihilate humanity and, with it, humanity’s potential. This is the most tragic outcome and the one that must be avoided at all costs. Now, it’s possible that a few people – say, 15% of the world’s population – will survive climate change. That doesn’t annihilate our potential, even if it annihilates an unfathomable number of people – and so, to longtermists, climate change is not an existential risk.

“The case for longtermism rests on the simple idea that people in the future matter… Just as we should care about the lives of people who are distant from us in space, we should care about the people who are distant from us in time,” wrote William MacAskill, the public face of longtermism. His book was endorsed by Elon Musk, who called MacAskill’s philosophy a “close match” for his own. Musk also happens to be one of the biggest players in the privatized space race, and his vision of colonizing Mars is less and less of a semi-ironic joke.

The roots of longtermism lie in a philosophy called effective altruism. “Effective altruism, which was once a loose, internet-based affiliation of like-minded people, is now a broadly influential faction, especially in Silicon Valley, and controls philanthropic resources on the order of thirty billion dollars,” notes a profile of MacAskill in The New Yorker.

There is a network of influencers writing the script for longtermism from various think tanks – together, they command more than $40 billion. Some, among other things, have advocated for the “redistribution” of sex; others have argued that saving lives in rich countries is more important than saving lives in poor countries, as philosopher Émile P. Torres reported in Salon. The utopia of longtermism is a future where human beings are engineered to perfection – leading to the creation of posthumans who possess only the best and most superior traits, without any flaws. It’s an idea rooted in eugenics, and it feeds civilization’s most cynical ideas about who can be considered superior and who qualifies as inferior enough to be eliminated from our collective genetic makeup. Importantly, what unites all of these ideas is the benign-sounding notion of the long term – and it is even creeping into the United Nations. “The foreign policy community in general and the United Nations in particular are beginning to take the long view,” noted a report in UN Dispatch.

But if it wasn’t already clear why the ideas themselves are dangerous, the people who formulate them make it clear whose interests are at stake and whose are not. “…contributors to fast-growing fields like the study of ‘existential risk’ or ‘global catastrophic risk’ are overwhelmingly white… Bostrom idealizes a future in which the continued evolution of ‘(post)humanity’ culminates in a form of ‘technological maturity’ which adheres to dominant norms of white masculinity: deeply disembodied, unconnected to place, and dominant over or independent of ‘nature,’” note scholars Audra Mitchell and Aadita Chaudhury, who work in the fields of ethics, ecology, and science and technology studies.


That the tech overlords are finding ways to survive what they know is coming – which they euphemistically refer to as “the event” – isn’t just a short-sighted escape from a mess they themselves are complicit in. It’s all part of the long game – perhaps the longest we’ve ever imagined.

Nick Bostrom carries considerable ideological weight. As director of the Future of Humanity Institute (FHI) at Oxford, he is part of a growing group of philosophers who have the far future in mind – thinking about what we can think, do, build, and discover on a scale previously considered unthinkable. “Anders Sandberg, a researcher at the Future of Humanity Institute, told me that humans may be able to colonize a third of the now-visible universe before dark energy pushes the rest out of reach. That would give us access to 100 billion galaxies, a mind-blowing amount of matter and energy to play with,” wrote Ross Andersen, who has investigated the philosophies that are actively shaping the future of our civilization.

Technology is the key to reaching this kind of potential, which is why tech folks are so heavily invested (and investing) in the idea. At the heart of the ethical deliberations is a cost-benefit analysis: how much is it acceptable to lose in order to secure the potential of the people of tomorrow – maybe even the post-people of tomorrow? This is the “greater good” dilemma that has already been used to justify devastating wars and political decisions: “Now imagine what could be ‘justified’ if the ‘greater good’ is not national security but the cosmic potential of intelligent life of terrestrial origin over the next billion years?” Torres asks.

“…the crucial fact longtermists miss is that technology is far more likely to cause our extinction before that far-future event than to save us from it,” they add.

But the lack of transparency, the outsized resources, and the power concentrated among predominantly cis, white, techno-optimistic men – men who hold the future of the world in their hands – all stand in the way of recognizing this crucial fact.
