Michael Roberts
The Magnificent 7 stocks — Nvidia, Microsoft, Alphabet (Google), Apple, Meta, Tesla and Amazon — now make up around 35% of the value of the US stock market, and Nvidia’s market value makes up about 19% of the Magnificent 7. The S&P 500 has never been more concentrated in a single stock than it is today, with Nvidia representing close to 8% of the index.
This is a hugely top-heavy stock market, now at record levels, driven by just seven stocks and, in particular, by Nvidia, the company making the processors that AI companies need to develop their models. If Nvidia’s revenue growth should weaken, that will put huge downward pressure on this highly overvalued stock market. As Torsten Slok, chief economist at Apollo Global Management, one of the largest investment institutions, put it: “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s.”
So is the great AI sector a huge bubble, funded by fictitious capital that will not be realised in revenues and, more important, profits for the AI leaders? By the end of this year, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditure on AI over the last two years, but they will have accrued revenues of only about $35 billion. Amazon plans to spend $105 billion in capital expenditure this year but will get revenues of just $5bn. And revenues are not profits, as revenues are measured before the costs of delivering AI services. Investment in AI is now running at $332 billion of capital expenditure in 2025 for just $28.7 billion of revenue. Investment in the huge data centres needed to train and run AI models is planned to reach $1trn by the end of the decade.
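Taking the figures quoted above at face value, a rough back-of-the-envelope sketch (using only the numbers cited in this post, not audited company accounts) shows just how lopsided the spending-to-revenue ratios are:

```python
# Rough ratios from the figures quoted above (illustrative only,
# not audited company data).
capex_vs_revenue = {
    "Big Tech AI capex vs AI revenue (two years)": (560e9, 35e9),
    "Amazon capex vs AI revenue (this year)": (105e9, 5e9),
    "AI capex vs AI revenue (2025)": (332e9, 28.7e9),
}

for label, (capex, revenue) in capex_vs_revenue.items():
    print(f"{label}: ${capex / 1e9:.0f}bn spent for ${revenue / 1e9:.1f}bn of revenue "
          f"= {capex / revenue:.0f}x, i.e. revenue covers {revenue / capex:.1%} of capex")
```

On these figures, revenue covers somewhere between roughly 5% and 9% of the capital being laid out, before any of the costs of actually delivering AI services are counted.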
But if any of the Magnificent Seven start getting cold feet about what they are spending relative to revenues and profits, and so cut back their purchases of chips, Nvidia’s stock price could head downwards fast, taking others with it.
Are the expected returns on this massive capital investment likely to materialise? Goldman Sachs’ head of equity research, Jim Covello, questioned whether the companies planning to pour $1tn into building generative AI would ever see a return on the money. A partner at venture capital firm Sequoia, meanwhile, estimated that tech companies needed to generate $600bn in extra revenue to justify their extra capital spending in 2024 alone — around six times more than they were likely to produce.
Take the well-known ChatGPT. It has, allegedly, 500 million weekly active users — but at the last count, only 15.5 million paying subscribers, a conversion rate of just 3%. While increasing numbers of people now use AI chatbots, only a tiny number are paying for the AI service they use, producing annual revenue of about $12bn, according to a survey of 5,000 American adults by Menlo Ventures.
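A quick check of that conversion arithmetic, using only the user and subscriber figures quoted above (the variable names are just for illustration):

```python
# ChatGPT conversion rate implied by the figures quoted above (illustrative).
weekly_users = 500e6     # claimed weekly active users
paying_users = 15.5e6    # paying subscribers at the last count

conversion_rate = paying_users / weekly_users
print(f"Paying share of weekly users: {conversion_rate:.1%}")      # ~3.1%
print(f"Non-paying share:             {1 - conversion_rate:.1%}")  # ~96.9%
```

In other words, for every paying subscriber there are roughly thirty users who pay nothing at all.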
When it comes to profits from AI, the situation is even worse. Big Tech’s earnings growth has been flat or slowing for the past few quarters and is expected to slow further in 2025 and 2026.
So: huge investment of money and resources, astronomic payments to AI trainers, and massive data centres under construction – with the AI hype driving the stock market to ever new heights – but so far no significant revenues and virtually no profits. This is a repeat of the dotcom bubble on steroids.
A bubble there may be, but that does not mean a new ‘disruptive’ technology will not eventually emerge that radically changes the productivity frontier for the major economies and so delivers a new period of growth. The dotcom bubble burst in 2000 with a massive drop in the stock market, but the internet went on to spread into every sector of business and into every household – and the Magnificent Seven emerged.
Take another example, from the 19th century. During the 1840s came Railway Mania, as huge numbers of companies raised funds to build rail lines across Britain. Railway shares rocketed, with prices doubling in the 18 months from early 1843. Then came the bust in 1845: many companies went bankrupt and share prices fell by half. This triggered a widespread financial crisis and a slump in production. Nevertheless, the railways were built, transport costs dropped sharply and consumer demand for travel expanded mightily. Britain entered an economic boom in the 1850s.
Will the AI bubble follow the same path, producing a financial collapse and crisis, but eventually providing the basis for new growth in productivity? In previous posts on AI, I have recounted the scepticism about the productivity benefits of AI offered by such experts as Nobel prize winner Daron Acemoglu, among others. And a recent in-depth OECD report on productivity growth in the major economies poured cold water on the impact of the internet in raising productivity growth over the last 25 years.
As the OECD report put it: “Over the past half-century we have filled offices and pockets with ever-faster computers, yet labour-productivity growth in advanced economies has slowed from roughly 2 per cent a year in the 1990s to about 0.8 per cent in the past decade. Even China’s once-soaring output per worker has stalled”. Research productivity has sagged. The average scientist now produces fewer breakthrough ideas per dollar than their 1960s counterpart.
Labour productivity growth has been on a declining trend since the 1970s across the OECD and has weakened further since the turn of the century. In the US, productivity picked up from the mid-1990s to the mid-2000s on the back of rising efficiency in the production of ICT equipment and the diffusion of internet-related innovations adopted in ICT-using sectors, notably retail. “However, this rebound was relatively short-lived and productivity growth has since then been lacklustre.”
The key factor in raising the productivity of labour is investment in new labour-saving technology. But business investment has slowed markedly in all countries. And the OECD makes clear why. The “investment slowdown despite readily available and cheap credit for firms with access to capital markets is in line with historical patterns showing that uncertainty and expected profits tend to play a greater role than financial conditions in investment decisions.” In other words, the profitability of capital declined, reducing the incentive to invest in new technologies.
And so-called ‘intangibles’, like software investment, did not compensate for the decline in investment in plant, equipment etc. “Notwithstanding the rise of intangibles, total investment since the GFC has been weak overall, which worsened the labour productivity slowdown directly.”
Will AI be different? Can it deliver higher productivity through companies replacing millions of workers across the economy with AI tools? The problem here is that economic miracles usually stem from discovery, not from repeating tasks at greater speed. So far, AI primarily boosts efficiency rather than creativity. A survey of over 7,000 knowledge workers found that heavy users of generative AI reduced weekly email tasks by 3.6 hours (31 per cent), while collaborative work remained unchanged. But once everyone delegated email responses to ChatGPT, inbox volume expanded, nullifying the initial efficiency gains. “America’s brief productivity resurgence of the 1990s teaches us that gains from new tools, be they spreadsheets or AI agents, fade unless accompanied by breakthrough innovations.” (OECD).
Large language models gravitate towards the statistical consensus. A model trained before Galileo would have parroted a geocentric universe; fed 19th-century texts, it would have ‘proved’ human flight impossible before the Wright brothers succeeded. A recent Nature review found that, while LLMs lightened routine scientific chores, the decisive leaps of insight still belonged to humans. Human cognition is better conceptualised as a form of theory-based causal reasoning, in contrast to AI’s emphasis on information processing and data-based prediction. AI takes a probability-based approach to knowledge and is largely backward-looking and imitative, whereas human cognition is forward-looking and capable of generating genuine novelty.
The great Holy Grail of OpenAI and other AI companies is a super-intelligent generative AI that can take over innovation from humans. So far, that remains as mythical as the Holy Grail was in literature. Current GenAI can make only incremental discoveries, but cannot achieve fundamental discoveries from scratch as humans can.
But OpenAI guru Sam Altman promises that its AI won’t just be able to do a single worker’s job; it will be able to do all of their jobs: “AI can do the work of an organization.” This would be the ultimate in maximising profitability: doing away with workers in companies (even AI companies?) as AI machines take over operating, developing and marketing everything. That’s why Altman and the other AI moguls will not stop expanding their data centres and developing yet more advanced chips just because Chinese AI models like DeepSeek have undercut their current models. Nothing must stop the objective of super-intelligent AI.
Unfortunately, as MIT Technology Review explains, many AI models are notorious black boxes: while an algorithm might produce a useful output, it is unclear to researchers how it actually got there. This has been the case for years, with AI systems often defying statistics-based theoretical models. In other words, AI trainers don’t really know how their AI models work. That is a major obstacle to achieving the Holy Grail.
So the AI boom is still just a financial bubble. As one commentator put it: “Generative AI does not do the things that it’s being sold as doing, and the things it can actually do aren’t the kind of things that create business returns, automate labor, or really do much more than one extension of a cloud software platform. The money isn’t there, the users aren’t there, every company seems to lose money and some companies lose so much money that it’s impossible to tell how they’ll survive.”
Meanwhile, the massive construction of data centres is consuming unprecedented amounts of energy. The International Energy Agency predicts that data centre electricity consumption will double to 945 terawatt-hours by 2030 — more than the current electricity consumption of an entire country such as Japan. Ireland and the Netherlands have already restricted the development of new data centres because of concerns about their impact on the electricity network. Training AI models produces huge surges in power demand at data centres, which, combined with a bumpy renewable energy supply, threatens the resilience and capacity of current energy systems.
As for the productivity and growth outcomes, the OECD hedges its bets. If AI technologies spread and are successfully implemented, the OECD reckons global labour productivity will rise by 2.4 percentage points over the next ten years, adding 4% to world GDP compared with where it would have been on current trends. However, if AI is not so successful in reducing the need for human labour and does not spread to all sectors, then labour productivity may rise only 0.8 percentage points above the current trend level in ten years (from the current 0.8% a year) and world economic growth will be unchanged. The jury is out.
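The scale of those scenarios is easier to judge when compounded. A minimal sketch, assuming the OECD figures are cumulative additions to the productivity level after ten years on top of the current 0.8%-a-year trend (that reading of the figures is my assumption, not the OECD’s wording):

```python
# Compound the current productivity trend and the two OECD scenarios over ten years
# (assumes the quoted figures are cumulative ten-year additions to the productivity level).
years = 10
trend_growth = 0.008  # the current ~0.8% a year

baseline = (1 + trend_growth) ** years - 1   # level gain on current trend alone
optimistic = baseline + 0.024                # AI spreads and is successfully implemented
pessimistic = baseline + 0.008               # AI stays confined to a few sectors

print(f"Current trend after {years} years: +{baseline:.1%}")
print(f"Optimistic AI scenario:            +{optimistic:.1%}")
print(f"Pessimistic AI scenario:           +{pessimistic:.1%}")
```

On this reading, even the optimistic scenario adds only a few percentage points to the level of output after a decade, which is presumably why the OECD hedges its bets.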