Nvidia’s stock slips on introduction of new AI data-center CPU+GPU chip

You didn’t think Nvidia Corp. would let the lead AI data-center product of Advanced Micro Devices Inc., its closest rival in the artificial-intelligence data-center chip market, go unanswered, did you?

At 2023’s Special Interest Group on Computer Graphics and Interactive Techniques, or SIGGRAPH, conference, Nvidia founder and Chief Executive Jensen Huang unveiled his answer to AMD’s Instinct MI300X CPU + GPU, which that company has been teasing all year.

In his keynote address Tuesday, Huang introduced the next-generation GH200 Grace Hopper Superchip, built for large-memory generative-AI models like OpenAI’s ChatGPT, which is backed by Microsoft Corp., “to scale out the world’s data centers.”

Nvidia shares, which had been down about 1% before the announcement, dropped as much as 3% to an intraday low of $440.56 afterward and finished down 1.7% at $446.64.

AMD shares closed down 3.1% at $113.23. Meanwhile, the PHLX Semiconductor Index SOX fell 1.6%, the S&P 500 SPX declined 0.4% and the tech-heavy Nasdaq Composite COMP dropped 0.8%.

In a press conference ahead of the announcement, Nvidia’s head of hyperscale and high-performance computing, Ian Buck, told reporters the GH200 packs more memory and more bandwidth than the company’s H100-based data-center system. The GH200 pairs Nvidia’s Hopper GPU with its Arm Ltd. architecture-based Grace CPU, and carries 141 GB of HBM3e memory with 5 TB per second of bandwidth.

The GH200 can be doubled up in an NVLink-connected dual-GH200 system to increase capacity by 3.5 times and triple the bandwidth. Both will be available in the second quarter of 2024, but Nvidia did not comment on pricing.


Buck said the vast majority of AI training and inferencing is done on Nvidia’s current HGX systems. The GH200 offers a new option for inference customers to support AI workloads at twice the performance per watt, as cloud-service providers look to build out capacity without significantly increasing their power costs.

In addition to Microsoft’s Azure, other companies with hyperscaler capacity, such as Amazon.com Inc.’s Amazon Web Services and Alphabet Inc.’s Google Cloud Platform, are expected to fuel sales of AI chips with data-center buildouts in the second half of the year.

Read: Chip-equipment suppliers rally after Lam says AI servers will drive growth

On Friday, AMD broke with the broad tech selloff to finish higher on the week as analyst support gathered for the chip maker’s AI position following its earnings report, in which AMD Chair and CEO Lisa Su forecast “multiple winners” in the AI race. Nvidia reports its earnings after the market close on Aug. 23.

Read: Will AI do to Nvidia what the dot-com boom did to Sun Microsystems? Analysts compare the current hype with past ones.

AMD introduced its MI300X CPU + GPU at its AI product event in June. The chip maker is considered a distant second to Nvidia when it comes to AI data-center hardware market share.

Read: Nvidia gets more good news from Big Tech, even as AI spending ‘may not lift all boats’

Year to date, AMD shares have gained 74.8%, while Nvidia shares have soared more than 205% and the SOX index has rallied 45.3%. The S&P 500 has advanced 17.2% and the Nasdaq has gained 32.7% over the same period.

Read: Nvidia ‘should have at least 90%’ of AI chip market with AMD on its heels

Source website: www.marketwatch.com
