Recently, long-time investor and market sceptic Michael Burry deregistered his hedge fund, Scion Asset Management, effectively stepping away from managing outside capital while remaining active in markets. Almost simultaneously, he re-emerged publicly with a series of posts and a Substack focused on what he characterizes as an AI-driven bubble, with particular emphasis on how mega-cap “AI winners” are using accounting choices to exaggerate economic profitability.
In these posts, Burry makes three linked claims:
First, major AI-exposed firms are inflating earnings by extending the useful life of servers and data-center hardware, suppressing depreciation, and disconnecting reported profitability from true economic return.
Second, stock-based compensation is understating labor costs and transferring massive value from shareholders to employees, with buybacks masking the dilution rather than offsetting it.
Third, these accounting choices sit inside a broader pattern of earnings management, aggressive revenue recognition and heavy reliance on non-GAAP metrics that exaggerate the underlying economics of the AI trade.
At the forefront of these allegations stands NVIDIA [NASDAQ: NVDA], which some argue is powering the next industrial revolution. Co-founded in 1993 by Jen-Hsun Huang, NVIDIA develops a platform for scientific computing, AI, data science, autonomous vehicles, robotics, metaverse and 3D internet applications, and PC graphics. Its strategy is to (re)shape its overall business area, catapulting the world into a new age of technology. Still, around 90% of its revenue currently stems from the “Compute & Networking” segment, which serves its data centre and AI operations but also includes robotics, illustrating the company’s dominance in the artificial intelligence industry. Its chips are the backbone of the current market momentum.
This is underscored by its financial performance. The company’s revenue has risen sharply over the last five years, increasing by 683% over the period and reaching $130.5bn in 2024 (+114% YoY), primarily supported by strong demand for accelerated computing and AI solutions. Net income in 2024 was $72.9bn, a 145% increase from the previous year’s $29.8bn. NVIDIA’s growth is nearly unprecedented and has been a dominant factor in the recent gains across the wider stock market. But is this growth sustainable, or is true profitability being hidden from the world?
Useful Life Extensions in AI Infrastructure
Under both GAAP and IFRS, tangible and intangible assets are typically expensed over their estimated useful life, appearing in the financial statements as depreciation and amortisation (D&A). Companies reassess the useful life of their assets annually and, if it has changed, alter their D&A schedule accordingly. Such a change could be deemed necessary due to, for example, technological improvements in hardware durability, shifts in usage patterns, alterations in the operating environment, or simply updated maintenance practices. If the useful life of an asset is extended, its cost is spread over a longer period, lowering each year’s D&A expense and artificially increasing net income for the current period. Even though this change does not affect a company’s free cash flow (D&A is a non-cash expense), it can still have a powerful impact on profitability metrics such as EPS and, by extension, valuation multiples such as P/E. As a result, firms may appear more profitable than they intrinsically are.
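To make the mechanics concrete, here is a minimal sketch of the arithmetic, assuming a purely illustrative $10bn server fleet, zero salvage value, a 21% tax rate, and a useful-life extension from four to five years; none of these figures come from any actual filing.

```python
# Stylised illustration of how extending useful life lowers annual depreciation
# and lifts reported net income. All inputs are illustrative assumptions.

def straight_line_depreciation(cost: float, salvage: float, life_years: float) -> float:
    """Annual straight-line depreciation expense."""
    return (cost - salvage) / life_years

fleet_cost = 10_000_000_000   # assumed $10bn of servers and networking gear
salvage = 0.0                 # assumed no residual value
tax_rate = 0.21               # assumed statutory tax rate

dep_old = straight_line_depreciation(fleet_cost, salvage, 4)  # original 4-year life
dep_new = straight_line_depreciation(fleet_cost, salvage, 5)  # extended to 5 years

saved = dep_old - dep_new              # lower D&A this year
ni_uplift = saved * (1 - tax_rate)     # pre-tax saving flows to net income after tax

print(f"D&A at 4-year life:  ${dep_old/1e9:.2f}bn")
print(f"D&A at 5-year life:  ${dep_new/1e9:.2f}bn")
print(f"D&A reduction:       ${saved/1e9:.2f}bn ({saved/dep_old:.0%})")
print(f"Net income uplift:   ${ni_uplift/1e9:.2f}bn")
```

On these assumptions, a one-year extension cuts the annual depreciation charge by 20%, in line with the 20-30% reductions discussed below, without a single dollar of cash changing hands.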
As Michael Burry pointed out in his recent posts on X, this behaviour has been prominent among hyperscalers and other AI-focused companies. Since 2020, Meta [NASDAQ: META], Google [NASDAQ: GOOGL], Oracle [NASDAQ: ORCL], Microsoft [NASDAQ: MSFT], and Amazon [NASDAQ: AMZN] have collectively extended the useful life of their servers and networking equipment 10 times, saving them billions in depreciation expense, as seen in the table below.
In 2020, the average useful life of these systems was 3.6 years, whereas by 2025 it had been revised to 5.7 years. The firms did not frame these revisions as cost-cutting measures, but attributed them to improvements in hardware durability, better maintenance, and enhanced performance analytics. According to them, the technology can be fully utilized for longer, which justifies the longer useful life.
Still, it is important to differentiate. The predominant difference is how the systems are used: for AI workloads or not. If the hardware is used to support the boom in AI processing power, it is debatable whether 5.7 years is a sustainable useful life. This is supported by Amazon’s decision, effective Jan. 1, 2025, to change its “estimate of the useful lives of a subset of [their] servers and networking equipment from six years to five years”. The reduction primarily impacted AWS, Amazon’s cloud-computing segment and the business that makes Amazon NVIDIA’s 5th-largest customer at 7.52% of NVIDIA’s total revenue.
It is fully plausible that there is a technical reason for the adjustment of useful life, but the timing and the collective nature of the changes are still worth exploring. During a period of massive capital expenditure on AI infrastructure, which strains each company’s margins in the short run, a revision of accounting estimates can mitigate these impacts. The concrete financial implications are shown in the table below. An increase in useful life of as little as one year resulted in a reduction in D&A of 20-30% for the fiscal year, improving net income by 3-5% and lifting EPS. This could suggest an industry-wide strategy to smooth profitability amid soaring investment and thereby further sustain the AI boom, which has been carrying the entire stock market in recent times.
In 2024, Baidu [NASDAQ: BIDU], a Chinese technology company specializing in internet services and AI, extended the useful life of its servers from 5 years to 6 years to “reflect actual usage”. This saved it $198mn in depreciation (20% of total D&A) and raised net income by $171mn (5% of total net income) that year. Subsequently, in Q3 2025 it recorded a $2.3bn impairment loss attributable to its “core asset group”, likely referring to server infrastructure that has aged prematurely. Baidu’s situation raises questions about whether long-lived server assumptions, now heavily stressed by AI workloads, hold up and are sustainable. It illustrates the potential downside of overestimating the durability of assets, which could have a disastrous effect on the entire economy.
Lastly, it is important to note that useful-life extensions remain largely confined to hyperscalers. Of 12 smaller-cap AI infrastructure and quantum computing firms (evaluated further below), only 3 have reported upward revisions in the last 5 years.
Small-Cap & Russell Trends
Previously, IonQ [NYSE: IONQ], an American quantum computing hardware and software company, extended its quantum hardware life from 2 to 3 years in 2022. The company reported that the change was not material to its financials. The stated reasons for the revision were “actual operating performance and future expected usage”, with no further clarification given. Similarly, Knightscope [NASDAQ: KSCP], a robotics security firm, raised the upper bound of the useful-life assumption on its Autonomous Security Robots from 4.5 to 5 years, again without explicitly stating the reasons.
Hyperscalers extended server lives and cut D&A by hundreds of millions, but that behaviour has not broadly trickled down to Russell 2000 AI and quantum names. The recent exception is Serve Robotics [NASDAQ: SERV], which lengthened robot useful life from roughly two to four years in 2025, lowering depreciation per unit and improving reported margins as its fleet matures. The remaining names (Arbe [NASDAQ: ARBE], Knightscope [NASDAQ: KSCP], BigBear.ai [NYSE: BBAI], C3.ai [NYSE: AI], Veritone [NASDAQ: VERI], Absci [NASDAQ: ABSI], SoundHound [NASDAQ: SOUN], IonQ [NYSE: IONQ], Rigetti [NASDAQ: RGTI], D-Wave [NYSE: QBTS], QCI [NASDAQ: QUBT], and Richtech [NASDAQ: RR]) did not disclose new life extensions in FY2024-2025 filings, and most continue to use their predefined standard 3-5 year ranges. Two structural reasons explain the gap. First, these companies are asset-light: PP&E and owned compute are small relative to people, R&D, and cloud opex, so stretching lives barely moves the P&L. Second, with constrained revenue and frequent capital raises, the auditor and optics risk from overt estimate changes is high. In conclusion, Burry’s useful-life extension mechanism remains for the most part a large-cap accounting strategy; among small caps, Serve is an exception, and no broader trend is found in the Russell indices.
Where small caps do inflate their reported position, it is through a dominant three-step playbook. Step one is non-GAAP dependency: nearly every firm reports results via adjusted EBITDA or “non-GAAP net” figures that add back stock-based compensation, fair-value swings on warrants and convertibles, impairments, and restructuring. That composition claims “core improvement” while GAAP losses persist. Step two is using SBC as a propellant: equity awards substitute for cash, shifting real labor cost from today’s income statement to tomorrow’s share count. In multiple cases, SBC is a double-digit percentage of revenue, or a multiple of it (refer to Table 3). This practice dilutes holders and hides per-share economics. Step three is speculative, low-quality revenue and survival via the market: top lines are built on pilots, related-party or government R&D milestones, consumption spikes, or dealer-funded arrangements. These firms do not scale the way sticky, high-margin ARR models do and hence cannot cover their cash burn. Finally, the runway is extended with ATMs, PIPEs, converts, and warrant packages, relying in the end on selling hope to investors.
If we test Burry’s thesis that useful-life extensions lead to overstated earnings, the small-cap evidence is weak: other than one notable adopter (Serve), we see no conclusive evidence across the FY24/25 period. But if we test whether reported profitability exaggerates the underlying economics, the evidence is overwhelming. GAAP losses remain large, and perceived improvement is primarily a function of what gets adjusted away (SBC, derivative swings, impairments) rather than durable unit economics. “Earnings quality” is thus inflated not by shaving depreciation lives, but by relabelling losses and outsourcing costs to dilution. When market optimism retreats, the props fail in sequence: adjusted metrics lose credibility, equity windows shut, and funding concerns reappear.
In summary, the trend of useful-life extensions in big tech has not yet spread industry-wide. The increases are technically defensible and legal under modern accounting standards, yet the timing implies a financial purpose as well. Investors and analysts should critically evaluate the true profitability of these companies to avoid collective overvaluation, which could result in an AI bubble.
What is the probability that something goes wrong?
Assessing the probability that “something goes wrong” in the AI trade is less about producing a precise numerical estimate and more about understanding how today’s extreme supplier concentration, most notably NVIDIA, interacts with the investment cycle and the competitive dynamics of AI infrastructure. The fragility of the current setup becomes clearer when contrasted through two stylised scenarios.
In an “Overbuilt, Overcrowded” scenario, supply expands aggressively, still anchored around NVIDIA but increasingly supported by emerging alternative accelerators (such as Google’s TPUs). If demand ultimately fails to reach the most optimistic expectations, the system becomes vulnerable to a classic boom-bust pattern: excess capacity, underutilised assets, write-downs, and sharp reductions in capex. The disappointment would not remain isolated. Cloud providers, chipmakers, and AI-native startups, many with business models implicitly assuming sustained GPU investment, would face correlated downside pressure. The risk here is systemic not because of a single point of failure, but because expectations across the ecosystem have become synchronised and self-reinforcing.
By contrast, a “Managed Transition” scenario features a slower, more incremental broadening of supply and a gradual normalisation of AI demand as speculative enthusiasm gives way to monetisable, durable use cases. Shocks in this environment tend to be idiosyncratic rather than systemic, and the overall distribution of outcomes becomes more skewed: smoother if diversification and monetisation succeed, yet still prone to sharp repricing if expectations collectively run ahead of reality.
Monetary policy adds another layer to this dynamic. The Federal Reserve currently appears more willing to ease conditions, with Polymarket assigning roughly an 87% probability to a rate cut in December. This shift is driven more by concerns over a cooling labour market than by any conviction that inflation has fully normalised. Asset valuations are already elevated, and a cut is nearly fully priced in. Historically, the Fed has at times positioned itself as a stabilising force, engineering a controlled slowdown to guide the economy toward a “Soft Landing” and avoid a deeper recession. That doesn’t seem to be the case now, as the current stance appears more reactive to labour-market risks than oriented toward managing overheating or speculative excess.
Is this the right moment to develop AI?
Right now, the real question is not whether we should build AI, but what it means to build it responsibly in a world where there is effectively no way back. Governments, big tech, and investors are already committed to AI as a core pillar of future growth: global private investment in AI runs to tens of billions per year, and the market is expected to reach almost €1.9 trillion by 2030. At the same time, global institutions and investors (including people like Burry) are warning about serious risks: a potential bubble, labour displacement, and wider social disruption. Looking at NVIDIA, around 90% of its Q3 revenue came from AI data centre chip purchases. Data centre operators such as Microsoft, Amazon, Oracle, and CoreWeave are scrambling to increase supply to meet this demand and are buying millions of NVIDIA chips to do so. The reasonable answer, then, is that yes, this is the moment in which AI will be built out, whether we like it or not; the real choice for firms and policymakers is between shaping that deployment with governance, safety, and distributional policies, or being pulled along by a trajectory set elsewhere.
Is AI the new industrial revolution?
We would best define AI not as a revolution in itself, but as the engine of a new industrial revolution. Coal powered the first, electricity the second, and the internet the third; AI can now play the same role as the core engine of the next one. AI and AI-related technologies can genuinely drive a new revolution that changes how we live and work every day. This does not mean, however, that the path will be smooth or free of exaggeration: the dot-com bust of 2001 already showed us what happens when a powerful technological shift meets unrealistic expectations and speculative excess.
Correlation between NVIDIA and the Market
To conduct a sound analysis of the NVDA-driven drift the market has been living through in recent months, we ran a study based on intraday timeframes for all components of the S&P 500 and Russell 2000, seeking to establish the time horizon and potential impact of this drift on the market and to search for inefficiencies and possible trade ideas that we could build systematically.
To do so, we first built the necessary infrastructure: minute-by-minute price and volume data for every S&P 500 and Russell 2000 constituent.
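The sketch below shows, in rough form, how such a dataset can be turned into the return series used in the rest of the analysis, including the idiosyncratic NVIDIA returns referenced below. The column names, the five-minute resampling, the rolling window, and the use of rolling-regression residuals as a proxy for the idiosyncratic return are all our assumptions, not a description of the exact production pipeline.

```python
# Minimal sketch: resample minute bars to 5-minute log returns and proxy NVIDIA's
# idiosyncratic return with the residual of a rolling regression on the index.
import numpy as np
import pandas as pd

def log_returns(close: pd.Series, freq: str = "5min") -> pd.Series:
    """Resample minute closes to `freq` bars and take log returns."""
    px = close.resample(freq).last().dropna()
    return np.log(px).diff().dropna()

def idiosyncratic_returns(stock: pd.Series, index: pd.Series, window: int = 390) -> pd.Series:
    """Residual of a rolling OLS of stock returns on index returns (assumed proxy)."""
    df = pd.concat({"stock": stock, "index": index}, axis=1).dropna()
    beta = df["stock"].rolling(window).cov(df["index"]) / df["index"].rolling(window).var()
    alpha = df["stock"].rolling(window).mean() - beta * df["index"].rolling(window).mean()
    return (df["stock"] - alpha - beta * df["index"]).dropna()

# Usage, assuming `bars` holds minute closes per ticker indexed by timestamp
# (hypothetical column names):
# nvda_5m   = log_returns(bars["NVDA"])
# spx_5m    = log_returns(bars["SPX"])
# nvda_idio = idiosyncratic_returns(nvda_5m, spx_5m)
```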
How much is NVIDIA shaping the microstructure of the market?
The first step in understanding NVIDIA’s influence on the United States equity market is to examine the contemporaneous relationship between its idiosyncratic returns and those of the S&P 500. Using intraday idiosyncratic log returns, we estimate regressions of index returns on NVIDIA returns at multiple frequencies ranging from one minute to one day. The 5-minute results illustrate the phenomenon most clearly: the regression yields a beta of approximately 0.23 and an R² of roughly 0.05, with the NVIDIA coefficient highly significant both statistically and economically. A single stock’s idiosyncratic shock explaining 5% of the variation in an index of 500 constituents is unlikely under any model that assumes diversified idiosyncratic noise. This relationship is displayed in Table 1, with the corresponding regression table reproduced in the appendix. When the same regression is repeated at other frequencies, the pattern remains stable. At one minute the beta rises toward 0.30; at two minutes it remains around 0.26; at fifteen minutes it stays near 0.20. Even when intraday returns are aggregated to the daily horizon, the beta is approximately 0.25 and the R² rises to roughly 0.13, as shown in Figure 1. The persistence of these estimates across frequencies indicates that NVIDIA’s influence does not depend on sampling frequency or transient microstructure noise; instead it reflects a stable structural transmission channel.
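A hedged sketch of the contemporaneous regression, reusing the series constructed above and statsmodels for the OLS fit; the numbers cited in the text (beta of roughly 0.23 and R² of roughly 0.05 at five minutes) come from the study itself, not from running this snippet.

```python
# Sketch: regress 5-minute index returns on NVIDIA's idiosyncratic returns.
import pandas as pd
import statsmodels.api as sm

def contemporaneous_fit(index_ret: pd.Series, nvda_idio: pd.Series):
    """OLS of index returns on NVIDIA idiosyncratic returns over the same bar."""
    df = pd.concat({"spx": index_ret, "nvda": nvda_idio}, axis=1).dropna()
    return sm.OLS(df["spx"], sm.add_constant(df["nvda"])).fit()

# fit = contemporaneous_fit(spx_5m, nvda_idio)
# fit.params["nvda"], fit.rsquared   # the study reports roughly 0.23 and 0.05
```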
To understand whether this influence extends beyond the same bar, we construct forward-horizon returns and regress them on contemporaneous NVIDIA returns. The forward horizons include one, three, six, and twelve bars. At the 5-minute frequency the contemporaneous relationship is strong, but at the very next bar the beta collapses almost to zero, and by three bars it becomes indistinguishable from noise. The R² values decay even faster. These results appear in Table 2 and Figure 2, where the rapid decay is evident across all horizons. NVIDIA’s influence on the index therefore appears overwhelmingly contemporaneous. There is no evidence that NVIDIA predicts future index returns in a forecasting sense; rather, the index adjusts immediately to NVIDIA’s innovations. This insight is reinforced by lead-lag regressions in which the contemporaneous NVIDIA coefficient remains significant while lagged coefficients do not contribute meaningfully to the model. The absence of delayed adjustment suggests that the transmission mechanism is mechanical rather than informational. It is consistent with index futures, exchange-traded funds, and systematic hedging flows reacting instantly to NVIDIA price changes.
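The forward-horizon test can be sketched in the same spirit; the horizon grid of 1, 3, 6, and 12 bars follows the text, while the cumulation of log returns and everything else in the snippet is an assumption.

```python
# Sketch: regress the index's cumulative return over the next h bars on the
# contemporaneous NVIDIA idiosyncratic return, for h = 1, 3, 6, 12.
import pandas as pd
import statsmodels.api as sm

def forward_betas(index_ret: pd.Series, nvda_idio: pd.Series,
                  horizons=(1, 3, 6, 12)) -> pd.DataFrame:
    rows = []
    for h in horizons:
        fwd = index_ret.rolling(h).sum().shift(-h)   # log returns over bars t+1 .. t+h
        df = pd.concat({"fwd": fwd, "nvda": nvda_idio}, axis=1).dropna()
        fit = sm.OLS(df["fwd"], sm.add_constant(df["nvda"])).fit()
        rows.append({"h_bars": h, "beta": fit.params["nvda"],
                     "t_stat": fit.tvalues["nvda"], "r2": fit.rsquared})
    return pd.DataFrame(rows)

# forward_betas(spx_5m, nvda_idio)   # betas should collapse after the first bar
```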
Granger causality tests provide a further robustness check. At the 5-minute frequency, NVIDIA Granger-causes the index at nearly every lag examined, while the reverse direction does not reach standard significance levels. These findings appear in Table 3, with the corresponding p-value curves in Figure 3. This pattern establishes temporal precedence: NVIDIA’s innovations contain information that helps explain future movements in the index, whereas the index does not predict NVIDIA in the same way. Interestingly, the picture changes at the daily horizon, where the two series Granger-cause each other. At slower horizons, NVIDIA and the index are influenced by similar macroeconomic and sentiment drivers, generating mutual causality. The distinction between intraday and daily regimes aligns with the notion that NVIDIA drives market microstructure at fast frequencies, while at daily horizons both respond jointly to broader economic and thematic forces.
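For completeness, a sketch of the Granger-causality check using statsmodels’ grangercausalitytests; the maximum lag of 12 bars is an assumption, and the helper below simply collects the F-test p-values per lag.

```python
# Sketch: p-values for "causing Granger-causes caused" at each lag up to maxlag.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_pvalues(caused: pd.Series, causing: pd.Series, maxlag: int = 12) -> pd.Series:
    data = pd.concat({"caused": caused, "causing": causing}, axis=1).dropna()
    res = grangercausalitytests(data[["caused", "causing"]],
                                maxlag=maxlag, verbose=False)  # suppress per-lag printout
    return pd.Series({lag: out[0]["ssr_ftest"][1] for lag, out in res.items()})

# NVDA -> index (significant intraday in the study):
# granger_pvalues(spx_5m, nvda_idio)
# index -> NVDA (not significant intraday in the study):
# granger_pvalues(nvda_idio, spx_5m)
```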
A final perspective on NVIDIA’s microstructural imprint comes from examining the response of the index to extreme NVIDIA return innovations. Using the top and bottom 1% of the NVIDIA return distribution as shock events, we compute the average cumulative index response over the subsequent hour. Positive NVIDIA shocks produce sustained positive drift in the index, while negative shocks generate immediate declines followed by slow mean reversion. This behaviour is illustrated in Figure 4 and confirms that NVIDIA’s influence is directional and monotonic at short horizons. The index does not simply oscillate around the shock; it follows the sign of NVIDIA’s innovation in a manner consistent with flow-driven transmission. These results collectively indicate that NVIDIA plays a quantitatively meaningful role in shaping contemporaneous price formation in the United States equity index. Its idiosyncratic returns no longer behave as isolated firm-specific noise, but rather as systematic drivers that propagate rapidly and consistently across the broader market.
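A sketch of the shock-response computation: bars where NVIDIA’s idiosyncratic return falls in its top or bottom 1% are treated as events, and the index’s cumulative return is averaged over the following hour (12 five-minute bars). The cutoffs and window follow the text; the implementation details are our assumptions.

```python
# Sketch: average cumulative index response in the 12 bars after extreme NVIDIA shocks.
import numpy as np
import pandas as pd

def shock_response(index_ret: pd.Series, nvda_idio: pd.Series,
                   pct: float = 0.01, horizon: int = 12) -> pd.DataFrame:
    df = pd.concat({"spx": index_ret, "nvda": nvda_idio}, axis=1).dropna()
    hi, lo = df["nvda"].quantile(1 - pct), df["nvda"].quantile(pct)
    events = {"positive_shock": np.flatnonzero(df["nvda"].to_numpy() >= hi),
              "negative_shock": np.flatnonzero(df["nvda"].to_numpy() <= lo)}
    out = {}
    for label, positions in events.items():
        # cumulative index return over the horizon bars following each event
        curves = [df["spx"].iloc[p + 1: p + 1 + horizon].cumsum().to_numpy()
                  for p in positions if p + 1 + horizon <= len(df)]
        out[label] = np.mean(curves, axis=0)   # average path across events
    return pd.DataFrame(out, index=range(1, horizon + 1))

# shock_response(spx_5m, nvda_idio).plot()   # compare with the shape of Figure 4
```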
How does NVIDIA drive the whole market?
Understanding how NVIDIA influences the entire market requires placing the statistical evidence in a coherent structural context. The striking feature of the empirical results is the almost instantaneous transmission of NVIDIA shocks into index returns. When the contemporaneous regression explains 5% of the variation in index returns at 5-minute intervals, and when this influence decays entirely within one bar, the mechanism must operate on timescales associated with modern market microstructure.
Index futures, large exchange traded funds, and the liquidity provision behaviour of dealers all create channels through which price innovations in a highly traded name can propagate into index levels within seconds. NVIDIA’s position as one of the most traded equities in the world, combined with its immense weight in technology thematic portfolios and discretionary funds, amplifies the speed of this transmission.
This view is consistent with the Granger causality evidence. At high frequencies the direction of causality is one way. NVIDIA’s return innovations consistently predict index return innovations, whereas the reverse relationship is statistically weak. Since neither series embeds lagged predictive content beyond one bar, this causality does not reflect genuine informational advantage about future fundamentals. Instead it points to NVIDIA’s role as a source of mechanical adjustment within the ecosystem of index linked instruments. The index is recalibrated in real time as NVIDIA absorbs large volumes of buyer and seller initiated order flow.
At the daily horizon the dynamics change. Here NVIDIA and the index move together: each Granger-causes the other, implying that they are jointly influenced by macro information, sector sentiment, and risk appetite. NVIDIA is not simply a stock within a sector; in recent years it has evolved into a representation of several broader phenomena at once. These include artificial intelligence investment cycles, data centre capital expenditure, technology leadership, and general expectations about productivity. Because of this symbolic role, movements in NVIDIA reflect more than firm-specific information. They often capture shifts in broader sentiment, which the index naturally incorporates. In this sense NVIDIA drives the market not because it transmits idiosyncratic shocks, but because the market interprets NVIDIA as a distillation of several dominant themes in equities.
Shock response analysis strengthens this interpretation. After large NVIDIA shocks, the index displays a structured and orderly reaction. Positive shocks lead to persistent upward drift. Negative shocks lead to immediate declines followed by slow reversals. The shape of these response functions suggests that market participants treat NVIDIA shocks as signals about the risk environment rather than as isolated idiosyncratic events. Index level reactions resemble macro spillovers more than stock specific rebalancing. The shock functions shown in Figure 4 therefore capture not only mechanical ETF or futures adjustment but also the collective response of participants who interpret NVIDIA as a market thermometer. This dual character explains why the instantaneous relationship is mechanical and why the daily relationship is thematic. NVIDIA operates simultaneously as a microstructure fulcrum and as a macro sentiment proxy.
Is this systematically tradable?
The economic relevance of NVIDIA’s influence depends on whether it can be translated into repeatable trading performance. To evaluate this, we employ simple threshold-based strategies that take long or short positions in the index conditional on NVIDIA’s idiosyncratic returns. The approach is intentionally minimal, since the goal is not to optimize a trading system but to test whether the observed statistical dependencies have economic value on their own. When positions are initiated on moderate positive or negative NVIDIA shocks, the resulting strategy exhibits positive performance, with annualized Sharpe ratios above 0.6, as illustrated in Figure 5. These results imply that moderate NVIDIA innovations consistently generate predictable short-horizon index drift, consistent with the shock-response evidence, where positive shocks produce monotonic upward responses and negative shocks produce orderly declines.
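A minimal sketch of such a rule, assuming a 95th/5th-percentile shock threshold, a one-bar holding period, no transaction costs, and 78 five-minute bars per session for annualisation; all of these are illustrative choices rather than the exact specification behind Figure 5.

```python
# Sketch: go long the index for one bar after a moderately positive NVIDIA shock,
# short after a moderately negative one, and annualise the resulting Sharpe ratio.
import numpy as np
import pandas as pd

def threshold_strategy(index_ret: pd.Series, nvda_idio: pd.Series,
                       pct: float = 0.95, bars_per_year: int = 252 * 78) -> dict:
    df = pd.concat({"spx": index_ret, "nvda": nvda_idio}, axis=1).dropna()
    hi, lo = df["nvda"].quantile(pct), df["nvda"].quantile(1 - pct)
    signal = pd.Series(0.0, index=df.index)
    signal[df["nvda"] >= hi] = 1.0    # long after positive shocks
    signal[df["nvda"] <= lo] = -1.0   # short after negative shocks
    pos = signal.shift(1).fillna(0.0)            # position applied to the next bar
    pnl = pos * df["spx"]                        # zero transaction costs assumed
    sharpe = pnl.mean() / pnl.std() * np.sqrt(bars_per_year)
    return {"annualized_sharpe": float(sharpe),
            "hit_rate": float((pnl[pos != 0] > 0).mean())}

# threshold_strategy(spx_5m, nvda_idio)
```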
The behaviour changes when shocks become too extreme. At the 99th percentile of the distribution, performance deteriorates. This suggests that very large NVIDIA shocks correspond to company-specific events or market-wide news that interact with other factors in complex ways. Moderately sized shocks, by contrast, appear to arise from flow imbalances that propagate mechanically and predictably through index instruments. This distinction between flow-induced and information-induced shocks is crucial for interpreting the tradability of the effect. Only the former generate systematic drift that can be captured by simple rules.
An additional dimension of tradability is the absence of forward predictive content beyond the same bar. Since NVIDIA’s influence is absorbed almost immediately, strategies must focus on exploiting the immediate mechanical adjustment rather than forecasting future index behaviour. Attempts to use NVIDIA to predict multi-bar returns are not supported by the data and are unlikely to yield robust performance.
Despite these constraints, the evidence shows that a fundamental component of NVIDIA’s influence is indeed systematically exploitable. The transmission of moderately sized NVIDIA shocks into index returns is sufficiently regular to generate positive, statistically reliable returns using simple strategies with very short holding periods. More complex approaches that incorporate transaction cost modelling, volatility adjustment, and cross asset reinforcement may extend this result, but even the simplest specifications confirm the existence of a tradable structure.