Silicon Valley's AI Gamble: Big Tech's Trillion-Dollar Bet vs. Wall Street's Skepticism
- Justin Jungwoo Lee
- Aug 3, 2024
- 4 min read
Recently, Wall Street's view of Big Tech has been turning skeptical, yet Big Tech companies are maintaining their plans for enormous CAPEX investments. It's a showdown between West Coast tech companies and East Coast financiers. Who will ultimately be the winner?
Let's first introduce Wall Street's skeptical perspective on Big Tech.
Jim Covello (Goldman Sachs head of global equity research and longtime tech analyst)
Building and operating AI capabilities is very expensive, much more so than the technology and business processes it aims to replace. This contrasts with previous technological innovations like the internet, which were cheaper than what they were replacing.
Unless the hardware market underlying AI becomes highly competitive, AI prices won't become cheaper just because more people use it. NVIDIA's position in the AI chip market is similar to ASML's position in advanced chip lithography, which has proven nearly unassailable.
People overestimate AI's capabilities today. For example, AI is not very good at providing basic but accurate summaries of complex information. The great use cases for AI have not yet materialized. With previous technologies like smartphones, most ultimate use cases were accurately presented even when the technology was in its early stages. While AI can create process efficiencies in areas like software coding, revolutionary applications have not yet been discovered or even proposed.
Indeed, the costs required to train Generative AI models, including GPUs, power, and data center construction, are enormous. In particular, due to the shortage of GPUs, many Big Tech companies and AI start-ups are running into obstacles in developing products and services. More competitors to NVIDIA need to emerge. The recent initiation of an antitrust investigation into NVIDIA by U.S. authorities is therefore significant. It is worth watching whether hardware prices actually decline.
Elliott Management (activist hedge fund): "NVIDIA has entered a bubble, and AI is overhyped."
Many of the so-called 'use cases' for AI will never be cost-effective, will not actually work properly, will consume too much energy, or will prove to be unreliable.

The Economist (July 2, 2024): "So far the technology has had almost no economic impact"
The proportion of companies adopting AI technologies is still minimal, and data security, biased algorithms, and hallucinations are barriers to the adoption of these technologies. It will take time for the AI technology revolution to impact the overall economy, and assuming Big Tech companies' AI revenue grows at an average of 20% annually, most of the profits from AI will be realized after 2032.
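To put The Economist's assumption in perspective, a quick back-of-the-envelope calculation (the 20% growth rate is from the article; the code itself is only an illustrative sketch) shows how much AI revenue would compound between 2024 and 2032 under that assumption:

```python
# Illustrative compound-growth calculation for the 20%/year
# AI revenue growth rate assumed in The Economist's analysis.
growth = 0.20           # 20% average annual growth (from the article)
years = 2032 - 2024     # horizon to 2032

multiple = (1 + growth) ** years
print(f"Revenue multiple by 2032: {multiple:.1f}x")  # ~4.3x
```

Even at that steady 20% pace, revenue only roughly quadruples over eight years, which is why the analysis concludes that most AI profits would arrive after 2032.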

Nevertheless, U.S. Big Tech companies are promising even more AI investment.
Big Tech companies say the $100 billion AI investment boom is just the beginning
Despite Wall Street raising doubts about the profitability of this unprecedented AI infrastructure investment, U.S. Big Tech companies are jumping into the race to build AI-supporting infrastructure, increasing their capital expenditure by 50% this year to over $100 billion.
Mark Zuckerberg of Meta, Facebook's parent company, says, "At this point, I think it's better to take the risk of building capacity early rather than the risk of getting capacity later than needed." Zuckerberg predicted that Meta's capital expenditure could reach $40 billion this year.
Zuckerberg estimated that the computing power needed to train the next version of their LLM (Large Language Model) would be "nearly 10 times more than the previous version." However, he acknowledged that it would take "years" for some AI features like Meta AI chatbots to generate revenue "independently."
Google CEO Sundar Pichai says, "When we go through these transitions in technology... the risk of under-investing [in AI] is dramatically bigger than the risk of over-investing."
Most of the investment by U.S. Big Tech companies goes toward acquiring land for cloud computing businesses and building new data centers. Huge sums are also being spent on hardware, including the GPU clusters needed to train and operate the LLMs that underpin chatbots.

In fact, AI technology has experienced two "AI winters" in the past:
1. 1974-1980: Machine translation proved limited and inefficient, the computers of the era could not handle complex problems, early AI systems failed to scale, and the backlash against excessive optimism led to sharp cuts in funding for machine translation research.
2. 1987-1993: Expert systems, which had raised high expectations in the early 1980s, exposed their limitations when applied to complex real-world situations; performance improvements in general-purpose computers reduced the need for dedicated AI hardware, and government support for AI research declined.
While it's too early to discuss a third AI winter, if the gap between excessive expectations for generative AI and actual technology widens and people's disappointment accumulates, the current praise for AI could turn into great disappointment at any time.
The immediate fact is that it isn't making money. While Generative AI can perform impressive tasks such as document summarization, translation, coding assistance, and image/music generation, it still falls short of what companies need to deploy it in real workflows, and data security remains an issue. The enormous sums required to build the supporting infrastructure and data centers raise questions about the foundations of the business.
Also, riding the popularity of LLM-based chatbots, many Big Tech companies and AI startups are boldly claiming they will create AGI (Artificial General Intelligence). But the Transformer architecture underlying LLMs operates on principles quite different from how human intelligence and reasoning work, so it is questionable how much credence such claims deserve.