What are Generative Futures?
Escaping techno-determinism
'The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!'
— Joseph Weizenbaum
Technologies are malleable. How they are used, and the impact they have, is not inevitable but the product of a negotiation between the people who use them, the institutions that govern them, and the affordances of the technology itself.
We have chosen the name ‘Generative Futures’ because we are currently trapped in a debate between technological Inevitability (the belief that the effects of AI and emerging technologies are inevitable) and Nostalgia (the desire to simply reject them).
Analysts have argued we are living through a ‘polycrisis’, a condition Antonio Gramsci captured a century ago: "The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear." We need new ideas for how we can make sure AI and emerging technologies are used to steer us towards a progressive future, rather than prolong crisis.
Denaturalising technology
It seems we are returning to a naive techno-determinism. The history of AI so far has been highly contingent and its future remains malleable, yet we still talk about new technologies as if they were forces we cannot avoid, only adapt to. Dario Amodei recently predicted that AI will wipe out half of entry-level white-collar jobs, as if this were the inevitable result of the development of powerful AI.
As Nobel Prize-winning economist Daron Acemoglu has laid out in detail, whether technologies are enabling or replacing is a political choice. David Autor, an economist at MIT who popularised the term ‘China Shock’, has argued that AI could rebuild a working class in the US and Europe that was disadvantaged by skill-biased technical change and the rising graduate wage premium: "Rather than catalyzing a new era of mass expertise as did the Industrial Revolution, computerization fed a four-decade-long trend of rising inequality … The unique opportunity that AI offers humanity is to push back against the process started by computerization - to extend the relevance, reach and value of human expertise to a larger set of workers … By providing decision support in the form of real-time guidance and guardrails, AI could enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks currently arrogated to elite experts like doctors, lawyers, coders and educators. This would improve the quality of jobs for workers without college degrees, moderate earnings inequality, and — akin to what the Industrial Revolution did for consumer goods — lower the cost of key services such as healthcare, education and legal expertise."1
However, this is not inevitable; it requires political work to achieve. As Autor argues, ‘AI will not decide how AI is used, and its constructive and destructive applications are boundless.’ He adds: ‘The erroneous assumption that the future is determined by technological inevitabilities … deprives citizens of agency in making, or even recognizing, the collective decisions that will shape the future.’
Almost all technologies of the past involved a careful and uncertain negotiation, one that could have turned out differently, and in many cases still could. Radio was used to spread democracy and Nazism. Fission gave us nuclear power and nuclear weapons. Writing was used to educate and to deceive.
The development of AI has already been highly contingent and specific. Without GPUs, cheap money from quantitative easing, cheap energy from fracking, and growing scepticism among the US public about social media, AI could have taken a different path.2 Many researchers believe that the dominant approach to AI (transformer-based LLMs) is not necessarily optimal, but that it plays well with GPUs: it won the hardware lottery. Diving into the details reveals how contingent our technological present is, and how malleable the future could be.
There are particular characteristics that make AI seem techno-determined. ‘Scaling laws’ sound like unstoppable forces; of course, they aren’t. The idea that the first organisation to reach transformational AGI will gain a decisive advantage, because AI can improve itself, adds a unique time pressure to its development. On this view, there is no time to reflect on what we want the technology to look like.
But there is nothing inevitable about the impact of these technologies on society. Neither are technologies inherently ‘good’ or ‘bad’. They are always double-sided, both a cure and a poison. As Bernard Stiegler argued, positive or negative outcomes are not inherent in technology, but in our failure to think critically about it. Deeply thinking about and understanding technology is the first step.
To adapt Will Davies’s phrase, we are seeing the disenchantment of politics by technology. But when you ignore politics it has a tendency to reassert itself. One short decade before the election of Donald Trump and the Brexit vote, Tony Blair said that we can’t question globalisation any more than we could question autumn following summer. In 1989 Francis Fukuyama declared ‘The End of History’.
Understanding AI to shape it
To borrow a phrase from Anthropic, we need to think deeply to move quickly. AI is changing fast and we need to grapple with the technical aspects of the technology continuously. This is why we speak to people at the frontier of building AI and AI safety mechanisms.
The barrier to entry to the AI debate is currently high. We believe there is a useful analogy with economics here. Between roughly 1980 and 2008, economics became more ostentatiously technical and the ‘political’ in political economy was largely relegated. It was difficult to engage in serious discussions about the economy unless you were versed in econometrics or understood complex instruments such as collateralised debt obligations, credit default swaps and securitisation. It took the 2008 crisis to show that these concepts should have been legible all along, and that the debate should have been opened to a wider set of actors.
We think there is a risk the technical AI conversation is also too narrow and the barrier to entry is too high. AI is political, and keeping political discussions of AI’s technicalities confined to a small group in the Bay Area is not good for anyone. We want this project to contribute to raising the level of understanding so more people can engage, and to help make it possible for people without a technical background to think through possible futures.
Generative Futures
‘Plans are useless but planning is indispensable’ — Dwight Eisenhower
We agree with Will MacAskill that ‘almost no one has articulated a positive vision for what comes after superintelligence … Without a positive vision, we risk defaulting to whatever emerges from market and geopolitical dynamics, with little reason to think that the result will be anywhere close to as good as it could be. We need a north star, but we have none.’ As Milton Friedman said, ‘Only a crisis - actual or perceived - produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around.’
Given that we don’t know when an opportunity or a crisis will occur, we think we should start thinking of new ideas now.
Generative Futures is a project dedicated to steering AI toward a progressive future.
Get involved:
Subscribe to receive new posts
Comment if you want to join our reading group at Newspeak House in London
Write with us — we’re looking for contributors
Collaborate — suggest a podcast guest or co-host an episode
Where to start
Why we don’t understand AI models yet — a conversation with Lee Sharkey on mechanistic interpretability
Why AI can’t learn on the job — on the continual learning problem and why it matters
Why poetry breaks AI — on the literature blindspot in AI research
Is China really racing for AGI? — a conversation with Seán Ó hÉigeartaigh
There is some evidence that AI is currently compressing productivity differences between workers of different skill levels.
On quantitative easing: As Mercer argue, the Fed had a target interest rate of zero from 2008 to 2015 and again from 2020 to 2022, something never seen before 2008. The Fed also launched a bond-buying spree (quantitative easing), which pushed rates even lower across the economy. Essentially, there was a lot of money sloshing around looking for a home. Mercer note that in 2009 Microsoft had $3.75 billion in long-term debt; by 2019, it had $63 billion. In 2009 Apple had no long-term debt; by 2019, it had $92 billion. This meant there were huge amounts of capital available to be put into AI.
On fracking: With a share of over 40%, natural gas is currently the biggest source of electricity for data centres in the United States, followed by renewables (mostly solar PV and wind) at 24%, with nuclear and coal at around 20% and 15% respectively. Globally the mix is different (coal is the biggest source and gas is third). The NYMEX natural gas futures closing price fell from $8.39 on August 27, 2008 to $3.47 by October 29, 2012 - a decrease of 58.6 percent. The US became a net exporter of energy, which significantly shifted world geopolitics. The AI infrastructure buildout is based on gas.