What we are experiencing today goes beyond mere technological euphoria or fears of sector disruption. The AI cycle reveals a paradox: as intelligence becomes increasingly dematerialized, markets are rediscovering the value of the tangible.
We are witnessing a double commoditization.
By commoditization, we mean a process whereby a product or service, initially differentiated and offering high added value, gradually becomes interchangeable. Technological scarcity fades, differentiation diminishes, barriers to entry are reduced and, ultimately, pricing power erodes. What was once premium becomes standard; what was once strategic becomes accessible.
Historically, commoditization has affected industries when innovation spreads faster than the ability to maintain competitive advantage. Computer hardware experienced this, then the cloud, then certain layers of software. Today, this phenomenon is simultaneously affecting two levels of Artificial Intelligence.
On the one hand, AI threatens the traditional SaaS model.
The SaaS model — Software as a Service — is based on a simple principle: software is no longer sold as a perpetual license installed locally, but is offered via the cloud, accessible by subscription. The company pays a recurring fee, usually monthly or annually, based on the number of users (per seat) or the level of service subscribed to.
This model has profoundly transformed the software economy over the past two decades. It has given publishers exceptional visibility into their revenues thanks to recurring subscriptions. It has reduced sales cyclicality, improved cash flow predictability, and enabled high margins once fixed costs are absorbed. For investors, SaaS has become synonymous with sustainable growth, strong customer retention, and distant cash flows valued at significant multiples.
But this model is based on three implicit assumptions: software complex enough to justify a continuous subscription, use by human users, and high replacement costs. Generative AI challenges these pillars. If a tool can be recreated internally using models, dependence on the publisher decreases. If agents automate cognitive tasks, the number of licenses required decreases. And if functional differentiation erodes, the ability to maintain high prices weakens.
The threat therefore lies less in the products themselves than in the very structure of the model: recurring monetization per user in a world where code is becoming abundant. Generative tools already make it possible to recreate a significant portion of functional software, particularly for the small and mid-business segment. The per-seat model, the pillar of high margins over the past twenty years, is under pressure. This is the first commoditization.
On the other hand, open-source and distillation are accelerating the convergence of models. If 70 to 80% of uses can be covered locally, dependence on centralized APIs and hyperscalers could be less structural than anticipated. This is the second commoditization. We will return to this in the second part of this paper.
By open source, we mean artificial intelligence models whose architecture and weights are made accessible — either fully or partially — to the community. This allows companies, developers, and governments to run these models locally, modify them, optimize them, and adapt them to their own data. Unlike closed models offered via API by players such as OpenAI or Anthropic, open-source reduces dependence on a single provider and centralized cloud.
Distillation, meanwhile, is a training technique that involves using the outputs of a high-performance model — often costly to train — to train a smaller, lighter, and less expensive model. The distilled model does not necessarily match the original model in every respect, but it can reproduce most of its capabilities for standard uses. This technique accelerates the spread of performance and reduces the gap between cutting-edge (frontier) models and more accessible alternatives.
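The mechanics can be sketched in a few lines. This is a minimal, illustrative example of the soft-target idea behind distillation: the teacher's output distribution is softened with a temperature, and the student is trained to match it. All logits and the temperature value are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; a higher temperature softens the distribution."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a large "teacher" model for one input.
teacher_logits = [4.0, 1.5, 0.5, -2.0]

# Soft targets: the softened teacher distribution the student learns from.
soft_targets = softmax(teacher_logits, temperature=2.0)

# Predictions from a smaller "student" model (arbitrary, before training).
student_logits = [2.0, 2.0, 0.0, -1.0]
student_probs = softmax(student_logits, temperature=2.0)

# Distillation loss: cross-entropy between soft targets and student output.
# Training minimizes this, pushing the student to mimic the teacher.
loss = -sum(t * math.log(s) for t, s in zip(soft_targets, student_probs))
print(f"distillation loss: {loss:.3f}")
```

The softened distribution carries more information than a single "right answer," which is why a much smaller student can absorb a large part of the teacher's behavior.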
Model convergence therefore means that the technological advantage is shrinking faster than expected. If, for an SME, an open-source model running locally is sufficient to handle most tasks — report generation, document analysis, administrative automation — then the need to constantly use APIs billed by token decreases. Dependence on hyperscalers, i.e., large cloud platforms that concentrate computing infrastructure, becomes less structural.
This is where the second commoditization lies: artificial intelligence itself, which is supposed to be rare and expensive, is becoming partially interchangeable. When “good enough” performance becomes widely available, technological scarcity fades and the ability to maintain high margins erodes. We will explore this dynamic in more detail in the second part of this paper.
Let's explore the first commoditization in detail, with a concrete example:
Anthropic's enterprise offering, built around its Claude model, can already generate internal management software for a small business in just a few hours: invoicing, customer tracking, inventory management, financial dashboards, email automation, and reporting.

Anthropic is an American company specializing in the development of advanced language models. Its flagship product, Claude, is a Large Language Model (LLM) capable of understanding complex instructions, generating code, analyzing large documents, and interacting with databases. Unlike a simple conversational interface, the enterprise version of Claude is designed to be integrated into an organization's internal systems.
This means that users can describe their needs in natural language: “I want a management system for my clinic with billing, patient tracking, and monthly reporting.” The model then generates the database structure, proposes a user interface, creates the necessary backend functions, configures automations, and can even assist with deployment in a cloud or local environment.
The major difference with traditional development is that users no longer need to master programming languages or software architecture. The model acts as a versatile developer capable of writing code, explaining it, and correcting it in real time. The software production cycle — once long and costly — has been dramatically compressed.
It is precisely this capability that weakens the small business segment. When the creation of a business tool becomes a conversational function, the scarcity of custom development disappears. Value no longer lies in writing code, but in the orchestration, data, and governance of the system thus created.
In the small business segment, this radically changes the economics of software.
Development agencies — historically responsible for creating custom solutions for clinics, restaurants, local SMEs, or professional firms — are seeing their value proposition compressed. Their advantage was based on the scarcity of code and technical complexity. That scarcity is disappearing. A significant portion of their business, centered on CRUD applications, simple back offices, or lightweight CRMs, is becoming reproducible at virtually zero marginal cost.
CRUD refers to a type of application that is very common in software development. The acronym stands for Create, Read, Update, Delete. It is the foundation of most business tools: managing customer records, recording orders, modifying inventory, and deleting obsolete entries. A CRUD application is generally based on a database, a simple interface, and a few management rules. It is the technical core of many back offices for SMEs.
Historically, these applications have required development time: database modeling, form creation, access rights management, deployment. Today, an advanced AI model can generate this entire structure in a matter of minutes from a functional description. The technical complexity is automated.
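To make the point concrete, here is what the entire technical core of such an application can reduce to. This is a deliberately minimal sketch using Python's built-in sqlite3 module; the table and field names are illustrative, not taken from any real product.

```python
import sqlite3

# Minimal CRUD core of a small-business back office: one "customers"
# table and the four canonical operations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Create: record a new customer.
conn.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
             ("Dupont Clinic", "contact@dupont.example"))

# Read: consult the record.
row = conn.execute("SELECT name, email FROM customers WHERE id = 1").fetchone()
print(row)

# Update: modify the record.
conn.execute("UPDATE customers SET email = ? WHERE id = 1",
             ("billing@dupont.example",))

# Delete: remove the obsolete entry.
conn.execute("DELETE FROM customers WHERE id = 1")
count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(count)
```

Everything else in a typical small-business tool (forms, access rights, dashboards) is scaffolding around these four operations, which is precisely the part a generative model can now produce from a functional description.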
A CRM, or Customer Relationship Management, is customer relationship management software. It allows you to track prospects, sales, interactions, follow-ups, invoices, and even after-sales service. In the small business segment, many CRMs are relatively simple: customer files, communication history, sales pipeline, tracking tables.
However, these features are based precisely on enhanced CRUD logic: creating files, updating statuses, consulting data, deleting or archiving. When AI makes it possible to automatically generate these basic building blocks, the functional differentiation between many lightweight CRMs diminishes.
In other words, a large part of the activity of agencies and publishers specializing in small businesses consists of producing systems structured around standard databases and interfaces. If these building blocks can be generated on demand, their marginal cost tends toward zero. The pressure is no longer just on price, but on the very justification for recurring fees.
Software publishers specializing in this same small business segment are also exposed. Many sell vertical solutions (management for medical practices, management for restaurants, management for tradespeople) with recurring pricing per user. If AI can recreate 70 to 80% of these features internally, at a lower cost and with more refined customization, competitive pressure becomes immediate. The software barrier to entry is collapsing.
It is therefore hardly surprising, under these conditions, to see the ETF dedicated to the software sector undergo a sharp correction over the last two months, even though valuation levels remain high in absolute terms.

For major SaaS players, the risk is different but real. It is not a question of outright disappearance, but rather a weakening of the per-seat licensing model. This model assumes that each knowledge worker needs individual, permanent access, billed monthly. However, if AI orchestrates workflows, generates reports, automates analyses, and reduces the number of human interactions required, the number of active licenses may automatically decrease.
AI does not eliminate infrastructure. It potentially reduces the number of human users needed to produce the same result. In a world where an AI agent processes data, generates exports, and prepares audits, the company may only need a limited number of licenses for validation and final supervision.
In other words, the threat is not only to the product, but to the economic unit of the software. The seat is no longer necessarily the relevant unit of monetization when intelligence is automated.
It is this change — the scarcity of code disappearing and the increasing automation of cognitive tasks — that is putting pressure on the SaaS model as it has been designed over the past two decades.
This pressure on the per-seat model is not confined to the technology sector. It immediately affects the entire financing chain.
For more than a decade, SaaS has been the ideal asset for private equity and private credit. Recurring revenues, high visibility, high margins, low capital intensity, predictable growth: all of this made it possible to apply high multiples and, above all, to support significant debt structures. The per-user subscription model offered an illusion of near-bond-like stability. Distant cash flows were valued as if they were almost guaranteed.
If AI undermines this stability — not necessarily by destroying revenues, but by broadening the distribution of scenarios — then the perceived risk changes radically.
The potential compression of the per-seat model has several consequences. First, there is a risk to organic growth: if companies reduce the number of licenses they need through automation, net revenue retention declines. Second, there is a risk to margins: faced with internal alternatives generated by AI, publishers must adjust their prices or enrich their offerings. Finally, there is a risk to the visibility of future cash flows, which is at the very heart of SaaS valuation.
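The first of these risks can be made concrete with a stylized calculation. The figures below (seat count, price, expansion revenue) are entirely hypothetical; the point is only to show how seat reduction mechanically drags net revenue retention below 100%.

```python
# Illustrative impact of automation-driven seat reduction on net revenue
# retention (NRR). All figures are hypothetical.
seats_start = 100
annual_price_per_seat = 50 * 12   # $50/month per seat, billed annually

starting_arr = seats_start * annual_price_per_seat   # $60,000

# Automation lets the same client cover the workload with 70 seats...
seats_end = 70
# ...while upsells on the remaining seats add $3,000 of expansion revenue.
expansion = 3_000

ending_arr = seats_end * annual_price_per_seat + expansion   # $45,000

# NRR: ending ARR from the same customer cohort / starting ARR.
nrr = ending_arr / starting_arr
print(f"NRR: {nrr:.0%}")
```

An NRR well below 100% turns a "compounding" SaaS business into a shrinking one, even with zero customer churn, which is exactly the scenario per-seat valuations never priced in.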
However, a large proportion of software acquisitions over the past ten years have been financed through heavily leveraged LBOs.
An LBO (Leveraged Buyout) is a company buyout financed largely by debt. In concrete terms, a private equity fund acquires a company by contributing only a fraction of the capital in equity, with the rest being financed by loans taken out by the target company itself. The debt is then repaid using the cash flow generated by the acquired company.
The software sector, and SaaS in particular, has lent itself ideally to this type of financial arrangement. Why? Because recurring subscription revenues offer high visibility on future cash flows. This predictability makes it possible to anticipate repayment capacity and therefore support high leverage — i.e., a high debt-to-EBITDA ratio.
In an environment of low interest rates and abundant liquidity, many software publishers were acquired at high multiples, often exceeding 15 or 20 times EBITDA, with a significant portion of debt. As long as growth remained solid and margins remained high, the arrangement worked: rising revenues made it possible to gradually reduce leverage and prepare for a resale at an attractive multiple.
Private equity funds overweighted SaaS precisely because the recurring nature of subscriptions made it possible to sustain high leverage. Private credit vehicles, for their part, massively financed these transactions, attracted by attractive spreads on businesses considered defensive.
If the perception of stability of these flows cracks, the problem no longer concerns only listed publishers. It concerns all the balance sheets that financed their expansion.
A compression of SaaS multiples automatically reduces the value of collateral. A decline in growth or margins weakens covenants. Increased cash flow volatility increases refinancing risk. What was perceived as a “stable long-duration” asset becomes an “uncertain long-duration” asset. In this context, the market reaction may seem inconsistent in the short term — selling software stocks while holding on to financials or private equity vehicles — but the economic logic remains relentless. If the stability of underlying cash flows is called into question, the entire financing chain must be reevaluated.
The risk does not stop with software publishers. It spreads, at the end of the chain, to credit and financing players. The banking sector is not immune: it is indirectly exposed through the financing of LBOs, private credit, and investment vehicles heavily positioned in SaaS. If the perceived quality of cash flows deteriorates, credit risk automatically increases.
The market is already beginning to factor in this possibility. The Financial Select Sector SPDR Fund (XLF) ETF is already underperforming relative to the major indices, a sign that the potential spread of risk to financials is gradually being taken into account.

This first commoditization we described therefore threatens more than just publishers' business models. It calls into question the financial architecture that has been built around the supposed stability of SaaS.
This is where the issue becomes systemic. When technology changes the visibility of future cash flows, it also changes the perception of credit risk. And when credit is repriced, it is no longer a simple sectoral adjustment, but a regime change.
The second commoditization is more insidious. It no longer targets only application software; it strikes at the very heart of the AI business model, threatening AI providers in their core business by changing their future monetization model.
Until now, the revenue of players such as OpenAI and Anthropic has been based on an implicit assumption: that frontier models would be rare, costly to train, and difficult to replicate.
This scarcity justified high API pricing, premium subscriptions, and, above all, massive CapEx devoted to training clusters.
However, this scarcity is eroding faster than expected.
Open source is playing a decisive role here. Players such as Meta (with its Llama family) have paved the way for the rapid spread of advanced architectures. Frameworks such as Ollama and LM Studio, along with GGUF quantized formats, now make it possible to run high-performance models locally on accessible hardware. Quantization, distillation, and inference optimization drastically reduce deployment costs.
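A back-of-the-envelope calculation shows why quantization matters so much for local deployment. The figures below are standard bytes-per-weight sizes for each precision; they ignore activation and KV-cache overhead, so real memory use is somewhat higher.

```python
# Approximate memory footprint of a 7-billion-parameter model at
# different precisions: the reason quantized (e.g. GGUF) models fit
# on accessible hardware. Weights only; runtime overhead excluded.
params = 7e9

bytes_per_weight = {
    "fp16": 2.0,   # half-precision weights
    "int8": 1.0,   # 8-bit quantization
    "q4":   0.5,   # 4-bit quantization
}

for fmt, size in bytes_per_weight.items():
    gb = params * size / 1e9
    print(f"{fmt}: ~{gb:.1f} GB")
```

Moving from fp16 to 4-bit cuts the weight footprint by a factor of four, which is the difference between needing a data-center GPU and running on a consumer laptop with unified memory.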
At the same time, several Chinese players — such as DeepSeek and MiniMax — have accelerated convergence by leveraging distillation and rapid iteration techniques. The logic is simple: observe the behavior of frontier models, train more compact models capable of reproducing a large part of their performance, and then distribute them at low marginal cost. The barrier to entry does not disappear, but it is reduced.
In this context, absolute performance is no longer the only variable. For an SME, a model that covers 70 to 80% of uses — report generation, document analysis, automation of administrative tasks — is often sufficient. If this model can be executed locally, with internal data, without relying on a cloud API billed by token, the economic equation changes radically.
We are entering a hybrid world: local AI processes internal flows, audits, and sensitive documents; only complex, multimodal cases or those requiring exceptional power are sent to the cloud. Dependence on centralized APIs is decreasing. The pricing power of frontier suppliers is weakening.
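This hybrid split can be sketched as a simple routing policy. The task fields and thresholds below are illustrative assumptions, not a real product's logic; the point is only that the decision is mechanical once the criteria are named.

```python
# Sketch of a hybrid routing policy: sensitive or routine tasks stay on
# a local model; only complex or multimodal cases go to a cloud API.
# Task fields and the complexity threshold are illustrative assumptions.

def route(task: dict) -> str:
    """Return 'local' or 'cloud' for a given task description."""
    if task.get("sensitive"):           # internal data never leaves the premises
        return "local"
    if task.get("multimodal"):          # image/audio work sent to frontier models
        return "cloud"
    if task.get("complexity", 0) > 7:   # 0-10 scale; hard reasoning goes to cloud
        return "cloud"
    return "local"                      # default: the "good enough" local model

tasks = [
    {"name": "monthly report", "complexity": 3},
    {"name": "contract audit", "complexity": 5, "sensitive": True},
    {"name": "video analysis", "multimodal": True},
]
for t in tasks:
    print(t["name"], "->", route(t))
```

Under a policy like this, the bulk of day-to-day volume never touches a token-billed API, which is exactly the shift that weakens frontier suppliers' pricing power.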
And this is where the second commoditization becomes strategic. If the majority of everyday uses shift to open-source or localized models, the revenue stream is no longer simply the execution of the model. It shifts to orchestration, proprietary data, and infrastructure.
Compression would no longer affect only SaaS publishers. It would also affect the justification for the massive CapEx devoted to training frontier models. If the performance gap between frontier models and locally optimized models is perceived as critical for only a minority of uses, then the price elasticity of APIs increases, and the data center arms race becomes more difficult to make profitable.
In other words, the first commoditization weakens the application layer. The second calls into question the very scarcity of artificial intelligence as a service.
It is this dual pressure that broadens the distribution of scenarios and makes the current valuation of the technology sector increasingly sensitive to the slightest inflection. The future cash flows of technology companies become less predictable. Yet software and AI valuations rest on distant cash flows over an extremely long duration. When uncertainty increases, multiples are automatically compressed.
This is where sector rotation comes into its own.
The market is not only fleeing risk. It is fleeing the instability of duration. Long-duration assets — software, enterprise tech, applied AI — become more fragile in the face of a wider distribution of outcomes. At the same time, assets with tangible cash flows in the near term are revalued.
The run on commodities, metals, and mining stocks should be viewed in this context.
Technology companies are the ones that will bear the massive CapEx of the AI cycle: data centers, GPUs, electrical networks, cooling, digital infrastructure. Mining companies, meanwhile, are the shovel sellers in this rush. Copper, uranium, nickel, rare earths, industrial silver: without these physical inputs, no AI infrastructure can exist. Digital dematerialization relies on extreme materiality.
Unlike technology companies, mining companies are not valued on the basis of promises of perpetual cash flows over ten or fifteen years. They generate flows linked to physical assets, measurable reserves, and tangible supply and demand cycles. In a world where visibility on software rents is blurring, visibility on the ton extracted and sold is regaining its premium.
This dynamic also explains the relative resilience of the energy, materials, and industrial sectors. The duration is shorter, cash flows are more immediate, and dependence on uncertain technological distribution is lower.
And at the top of this hierarchy of tangible assets is gold.
In an environment where the outcome of the AI cycle remains uncertain, whether computing power keeps growing explosively or local hybridization ends up limiting centralization, gold plays its traditional role as a safe haven in the face of systemic uncertainty. AI widens the economic distribution tails. It changes business models, capital requirements, and potentially even the structure of skilled employment. Faced with this growing uncertainty, investors are looking for an asset that does not depend on a SaaS model, hyperscale CapEx, or specific technology adoption.
Physical gold does not depend on any innovation scenario. It depends on trust.
We are therefore faced with a complex configuration. The AI cycle may justify massive CapEx if its use becomes permanent and widespread. Conversely, it may lead to compression if local commoditization reduces platform rents. Between these two extremes, investors are arbitraging towards what is rare, tangible, and less sensitive to long duration.
The rush toward AI is fueling demand for strategic metals. Uncertainty about its monetization is fueling demand for gold.
The paradox is striking: the more intelligence becomes artificial, the more profoundly material safe-haven assets are becoming again.
Reproduction, in whole or in part, is authorized as long as it includes all the text hyperlinks and a link back to the original source.
The information contained in this article is for information purposes only and does not constitute investment advice or a recommendation to buy or sell.