Nvidia’s stock jumped in after-hours trading on Wednesday, pushing its market capitalization to nearly $1 trillion, after the company reported an extremely strong outlook and CEO Jensen Huang said it is set to have a “giant record year.”
Sales are on the rise due to growing demand for graphics processing units (GPUs) made by Nvidia, which power AI applications like those from Google, Microsoft and OpenAI.
Nvidia said demand for AI chips in data centers led it to forecast $11 billion in sales for the current quarter, blowing past the analyst estimate of $7.15 billion.
“The flashpoint was generative AI,” Huang said in an interview with CNBC. “We know CPU scaling slowed down, we know accelerated computing is the way to go, and then the killer app came along.”
Nvidia sees this as a distinct shift in the way computers are built, one that could drive even more growth: parts for data centers could become a $1 trillion market, Huang says.
Historically, the most important part of a computer or server was the central processor, or CPU. This market was dominated by Intel, with AMD as its main rival.
With the advent of AI applications that require a lot of computing power, the graphics processing unit (GPU) takes center stage and the most advanced systems use up to eight GPUs for one CPU. Nvidia currently dominates the AI GPU market.
“The data center of the past, which was largely processors for file retrieval, will be generative data in the future,” Huang said. “Instead of fetching data, you’re going to fetch some data, but you have to generate most of the data using AI.”
“So instead of millions of CPUs, you’ll have far fewer CPUs, but they’ll be connected to millions of GPUs,” Huang continued.
For example, Nvidia’s own DGX systems, which are essentially an AI computer for training in a single package, use eight of Nvidia’s high-end H100 GPUs and just two CPUs.
Google’s A3 supercomputer combines eight H100 GPUs with a single high-end Xeon processor made by Intel.
That’s one of the reasons Nvidia’s data center business grew 14% in the first calendar quarter, versus flat growth for AMD’s data center unit and a 39% decline in Intel’s AI and data center business unit.
Additionally, Nvidia’s GPUs tend to be far more expensive than most central processors. Intel’s latest generation of Xeon processors can cost up to $17,000 at current prices, while a single Nvidia H100 can sell for $40,000 on the secondary market.
Nvidia will face increased competition as the AI chip market heats up. AMD has a competitive GPU business, especially in gaming, and Intel has its own line of GPUs as well. Startups are building new types of chips specifically for AI, and mobile-focused companies like Qualcomm and Apple continue to push the technology so that one day it can run in your pocket rather than in a giant server farm. Google and Amazon design their own AI chips.
But Nvidia’s high-end GPUs remain the chips of choice for today’s companies building applications like ChatGPT, which are expensive to train by processing terabytes of data, and expensive to run later in a process called “inference,” which uses the model to generate text or images, or to make predictions.
Analysts say Nvidia remains in the lead for AI chips because of its proprietary software that makes it easy to use all GPU hardware features for AI applications.
Huang said Wednesday that the company’s software would not be easy to replicate.
“You have to design all the software, all the libraries and all the algorithms, integrate and optimize them into the frameworks, and optimize them for the architecture — not just a chip, but the architecture of an entire data center,” Huang said on a call with analysts.