Nvidia (NASDAQ:NVDA), long known as the dominant supplier of graphics processing units (GPUs) for artificial intelligence (AI), is now expanding deeper into the cloud computing space—putting it in increasingly direct competition with the very companies that helped fuel its rise. At the Computex conference in May 2025, Nvidia unveiled DGX Cloud Lepton, a new platform that connects developers directly with a network of GPU cloud vendors, including partners like CoreWeave, Lambda, and Crusoe. This new marketplace initiative follows on the heels of its earlier DGX Cloud offering, which Nvidia launched two years ago to lease AI infrastructure to enterprises through arrangements involving major cloud providers. What’s becoming evident is that Nvidia is no longer content being just the “arms dealer” of AI—it now wants to own more of the battlefield.
Direct GPU Marketplace Expands Nvidia’s Developer Reach
With the introduction of DGX Cloud Lepton, Nvidia has created a direct line to developers, something it previously lacked while relying on its cloud provider partners to reach end users. The marketplace aggregates excess GPU inventory from a network of smaller AI-focused cloud companies, letting AI developers browse, select, and rent available computing power on demand. By pooling idle capacity across multiple vendors, Nvidia reduces the friction developers face when sourcing high-performance AI infrastructure, which is especially valuable at a time when demand for GPU access is soaring but supply remains limited and fragmented. Lepton also helps smaller GPU cloud players monetize their unused capacity, which in turn builds a broader ecosystem around Nvidia's chips. Crucially, the platform lets Nvidia interact directly with end users, bypassing intermediaries and potentially gathering more data and insight about customer needs. It also gives developers greater flexibility to select or switch between cloud vendors, something traditional cloud service contracts don't always allow. Nvidia's move into marketplace-style cloud services mirrors strategies used in other industries (e.g., Airbnb, Uber) to unlock latent supply and empower buyers. By offering choice and transparency, Nvidia improves access to AI computing while embedding itself more deeply in the workflows of both startups and large enterprises. This channel expansion could significantly increase Nvidia's influence in the AI development pipeline, positioning it not just as a chip supplier but as a service orchestrator and further differentiating it from traditional semiconductor peers.
Strategic Partnerships With AI-Native Cloud Providers
Nvidia’s investments in CoreWeave and Lambda, two of the fastest-growing AI-native cloud providers, highlight its intent to build a competitive ecosystem that complements—and potentially rivals—the traditional hyperscalers. CoreWeave, which recently went public and is forecasting around $5 billion in revenue for 2025, was an early partner in Nvidia’s DGX Cloud initiative. These companies focus exclusively on AI workloads and are optimized to serve customers with intensive training and inference needs, often more efficiently than general-purpose cloud providers. Nvidia has supported these partners not just with hardware but with capital, indicating a deliberate strategy to seed and scale a competitive alternative to AWS, Azure, and Google Cloud. The benefits are twofold: Nvidia expands the overall addressable market for its GPUs and creates a fallback in case traditional cloud giants accelerate development of in-house AI chips that could displace Nvidia’s offerings. These partnerships also provide a testbed for new services like DGX Cloud and DGX Cloud Lepton, allowing Nvidia to fine-tune its cloud offerings in environments specifically built around its hardware and software stack. As the AI ecosystem diversifies beyond the largest tech players, Nvidia’s early and deep involvement in these second-tier but rapidly growing providers ensures it retains pricing power, platform loyalty, and operational influence. Moreover, by backing these providers, Nvidia positions itself to benefit from any shifts in enterprise preference toward more specialized, AI-optimized cloud platforms—especially as security, performance, and cost concerns reshape IT decision-making.
Hybrid Cooperation With Hyperscalers Creates an Uncomfortable Symbiosis
One of the most distinctive, and strategically complex, aspects of Nvidia's DGX Cloud model is its hybrid relationship with hyperscalers like Microsoft, Google, and Amazon. In many DGX Cloud deployments, these companies buy Nvidia hardware, manage it within their own infrastructure, and lease it back to Nvidia, which then rents it out to enterprise clients as a complete AI platform. While this model accelerates time-to-market for Nvidia and ensures access to global infrastructure, it puts hyperscalers in a difficult position: they are enabling a platform that could eventually compete with their own cloud offerings. This awkward alliance stems from Nvidia's unique market power, with an estimated 80% share of the AI chip market, which forces hyperscalers to partner with it even if they harbor ambitions to reduce their dependence. Google's conspicuous absence from the May 2025 launch of Nvidia's DGX Cloud Lepton marketplace is a case in point. As hyperscalers build their own custom AI chips (e.g., Google's TPU, AWS's Trainium), the cooperative model with Nvidia may become increasingly strained. For now, however, the economics of high-margin cloud infrastructure and the lack of viable GPU alternatives make the arrangement mutually beneficial. Nvidia's leverage in this equation allows it to dictate favorable terms and extract value while quietly building its own parallel cloud layer. The longer Nvidia can maintain this balance, in which competitors are both customers and enablers, the more runway it has to scale DGX Cloud before competition forces the arrangement to unravel.
Growing Multi-Year Cloud Contracts Indicate Long-Term Momentum
Although Nvidia doesn't break out specific revenue from DGX Cloud, it has disclosed multiyear service agreements that give a sense of the business's trajectory. In its latest fiscal year, Nvidia reported $10.9 billion in outstanding multiyear cloud service contracts, more than tripling from $3.5 billion the year prior. Much of this growth is attributed to its DGX Cloud business, suggesting strong enterprise demand for AI computing as a service. The momentum reflects a broader market trend in which companies prefer renting AI infrastructure on demand to investing in expensive, quickly depreciating on-premises hardware. The recurring nature of these contracts gives Nvidia greater earnings visibility and buffers it from short-term hardware demand cycles. Importantly, these long-term commitments also validate Nvidia's positioning as a trusted partner for mission-critical AI workloads. While the figures still pale next to the $107 billion in annual cloud revenue at AWS, the steep growth curve suggests Nvidia's offering has resonated with enterprise buyers. The ability to bundle not just compute but also Nvidia's proprietary software, optimization tools, and expert support creates a sticky ecosystem that customers may find hard to walk away from. This foundation of locked-in contracts gives Nvidia optionality: it can continue expanding its cloud footprint, layer on additional services, and eventually scale into a formidable standalone cloud provider, or at least a niche leader in AI-centric workloads.
Final Thoughts
[Chart: Nvidia (NVDA) stock price trajectory. Source: Yahoo Finance]
We can see Nvidia's stock trajectory in the chart above as the company continues to be the fastest-growing chip supplier in the world. Interestingly, its recent shift comes at a time when Amazon Web Services (AWS), Microsoft Azure, and Google Cloud face increasing scrutiny, tighter IT budgets, and growing internal competition from AI-native platforms. While Nvidia claims its goal is to expand access to computing power, not to displace hyperscalers, its recent moves suggest a growing ambition to shape the direction of AI infrastructure directly. Through DGX Cloud and now DGX Cloud Lepton, the company is building direct bridges to developers and enterprises while laying the groundwork for long-term recurring revenue through multiyear contracts. We believe that its unique partnerships with both AI-native cloud upstarts and traditional hyperscalers create a complex yet potentially powerful ecosystem that may redefine the AI infrastructure landscape.