Daily Economic News editor | Du Yu
In the early morning of October 29 Beijing time, NVIDIA held its GTC conference in Washington, D.C., where CEO Jensen Huang took the stage to discuss the cutting edge of the AI industry.
Unlike previous keynotes with a single clear focus, Jensen Huang's speech this time covered a wide range of topics hotly discussed in global capital markets, including 6G, quantum computing, physical AI and robotics, nuclear fusion, and autonomous driving.
With NVIDIA's technology roadmap laid out through 2028, the debut of the next-generation Vera Rubin architecture, and Jensen Huang's claim that cumulative Blackwell and Rubin chip orders will reach $500 billion by fiscal year 2026, NVIDIA's stock closed at a new all-time high on October 28 local time, approaching a $5 trillion market capitalization.
Jensen Huang revealed that NVIDIA's fastest AI chip, the Blackwell GPU, has entered full production in Arizona.
He also disclosed striking shipment figures: NVIDIA expects to ship 20 million Blackwell chips in total. By comparison, the previous-generation Hopper architecture shipped only 4 million units over its entire lifecycle.
Six million Blackwell GPUs have shipped in the past four quarters, Huang said, and demand remains strong. NVIDIA expects Blackwell and the Rubin chips launching next year to bring in a combined $500 billion in GPU sales over five quarters.
Earlier this month, NVIDIA and TSMC announced that the first Blackwell wafers had been produced at TSMC's fab in Phoenix, Arizona. NVIDIA said in a video that Blackwell-based systems will now also be assembled in the United States.
The first partnership announced on stage was with Nokia. Alongside a $1 billion equity investment, the two companies will jointly launch NVIDIA ARC (Aerial RAN Computer), a telecommunications computing platform for 6G, to capture opportunities in the AI-RAN market. NVIDIA ARC is a wireless communication system that runs on top of CUDA-X.
NVIDIA said that "AI traffic" is growing explosively: nearly 50% of ChatGPT's 800 million weekly active users access the service from mobile devices. With AI-RAN systems, mobile operators can improve performance and efficiency, enhance the network experience of AI applications, and deliver 6G services on the same infrastructure, providing connectivity for drones, cars, robots, and AI glasses.
In a post-keynote press release, NVIDIA also announced that, with partners including T-Mobile and Cisco, it is creating the first AI-native wireless stack for 6G in the United States, along with new applications to advance next-generation wireless technology.
NVIDIA also showcased NVQLink, built on the CUDA-Q core, which connects conventional GPUs with quantum computers to accelerate quantum computing jointly. Today's quantum processors are highly sensitive to environmental noise and offer limited usable compute, so GPU-based supercomputers are needed to shoulder part of the quantum processors' load and run the control algorithms required for quantum error correction.
NVIDIA announced that NVQLink has won support from 17 quantum processor makers and 5 controller makers, including Alice & Bob, Atom Computing, IonQ, IQM Quantum Computers, Quantinuum, Rigetti, and others. Nine US Department of Energy national laboratories will use NVQLink to drive breakthroughs in quantum computing, including Brookhaven National Laboratory, Fermilab, and Los Alamos National Laboratory (LANL).
NVIDIA said developers can access NVQLink through the CUDA-Q software platform to create and test applications that seamlessly call CPUs, GPUs, and quantum processors.
Jensen Huang also announced an agreement with the US Department of Energy to build seven additional supercomputers. They will use Blackwell and next-generation Vera Rubin architecture chips and will be deployed at Argonne National Laboratory and Los Alamos National Laboratory.
NVIDIA announced a partnership with Oracle to build the US Department of Energy's largest AI supercomputer, the Solstice system, which will be equipped with a record 100,000 NVIDIA Blackwell GPUs. Another system, Equinox, will include 10,000 Blackwell GPUs and is expected to enter service in the first half of 2026.
Both systems are interconnected with NVIDIA networking and together deliver 2,200 exaflops of AI performance. They will enable scientists and researchers to develop and train new frontier and AI reasoning models with the NVIDIA Megatron-Core library and scale them with the TensorRT inference software stack.
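A quick back-of-envelope check of these figures, using only the numbers quoted above (the per-GPU result is derived, not an official specification):

```python
# Sanity check of the quoted DOE supercomputer figures.
solstice_gpus = 100_000    # Solstice system, as quoted
equinox_gpus = 10_000      # Equinox system, as quoted
total_ai_exaflops = 2_200  # combined AI performance, as quoted

# Implied AI performance per GPU, in petaflops (1 exaflop = 1,000 petaflops)
per_gpu_petaflops = total_ai_exaflops * 1_000 / (solstice_gpus + equinox_gpus)
print(per_gpu_petaflops)  # 20.0
```

The implied roughly 20 petaflops per GPU is in line with Blackwell's advertised low-precision AI throughput, suggesting the 2,200-exaflop figure refers to low-precision compute rather than traditional double-precision FLOPS.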
Energy Secretary Chris Wright said, "Maintaining America's leadership in high-performance computing requires us to build a bridge to the next era of computing: accelerated quantum supercomputing. Deep collaboration between our national laboratories, startups, and industry partners such as NVIDIA is crucial to this mission."
Jensen Huang believes agentic AI is no longer just a tool but an assistant in everyone's work, and that the opportunities AI brings are countless. NVIDIA's plan is to build chip-filled factories dedicated to AI.
NVIDIA announced the BlueField-4 processor, which powers the operating system of AI factories.
The BlueField-4 data processing unit supports 800 Gb/s throughput, delivering breakthrough acceleration for gigascale AI infrastructure. The platform combines the NVIDIA Grace CPU with ConnectX-9 networking, offers six times the compute of BlueField-3, and can support AI factories four times larger in scale.
BlueField-4 is designed for a new class of AI storage platforms, laying the foundation for efficient data processing and breakthrough performance at scale in AI data pipelines. It supports multi-tenant networking, fast data access, AI runtime security, and cloud resilience, with native support for NVIDIA DOCA microservices.
Jensen Huang said NVIDIA will collaborate with cybersecurity company CrowdStrike on AI models for network security.
NVIDIA announced a strategic partnership with CrowdStrike to deliver NVIDIA AI computing services on the CrowdStrike Falcon XDR platform. The collaboration combines Falcon platform data with NVIDIA GPU-optimized AI pipelines and software, including new NVIDIA NIM microservices, enabling customers to create custom, secure generative AI models.
CrowdStrike will leverage NVIDIA accelerated computing, NVIDIA Morpheus, and NIM microservices to bring custom LLM-driven applications into the enterprise. Combined with the Falcon platform's unique contextual data, customers will be able to address domain-specific use cases, including processing petabyte-scale logs to improve threat hunting, detecting supply chain attacks, identifying anomalous user behavior, and proactively defending against emerging vulnerabilities.
Jensen Huang said NVIDIA's end-to-end autonomous driving platform DRIVE Hyperion is ready for vehicles offering robotaxi services. Global automakers including Stellantis, Lucid, and Mercedes-Benz will use NVIDIA's new DRIVE AGX Hyperion 10 architecture to accelerate the development of autonomous driving technology.
NVIDIA announced a partnership with Uber to scale the world's largest L4 autonomous mobility network, built on the next-generation NVIDIA DRIVE AGX Hyperion 10 development platform and DRIVE AV software. With NVIDIA's support, Uber will gradually expand its global autonomous fleet to 100,000 vehicles beginning in 2027.
DRIVE AGX Hyperion 10 is a production-ready reference compute and sensor architecture that brings any vehicle to L4 readiness. It lets automakers build cars, trucks, and vans with validated hardware and sensors that can host any compatible autonomous driving software.
Jensen Huang said, "Robotaxis mark the beginning of a global transformation in transportation, making travel safer, cleaner, and more efficient. Together with Uber, we have created a framework for the entire industry to deploy autonomous fleets at scale."
Uber CEO Dara Khosrowshahi said, "NVIDIA is a pillar of the AI era and is now fully leveraging that innovation to unleash L4 autonomy at massive scale."
The core of the NVIDIA-Palantir collaboration is integrating NVIDIA's GPU-accelerated computing, open-source models, and data processing capabilities into Ontology, part of the Palantir AI Platform (AIP). Ontology creates digital replicas of enterprises by organizing complex data and logic into interconnected virtual objects, links, and actions, providing a foundation for AI-driven business process automation.
Jensen Huang said, "Palantir and NVIDIA share a common vision: to put AI into action and transform enterprise data into decision intelligence. By combining Palantir's powerful AI-driven platform with NVIDIA CUDA-X accelerated computing and Nemotron open-source AI models, we are building the next-generation engine to power AI-specialized applications and agents that run the world's most complex industrial and operational pipelines."
NVIDIA and Palantir also plan to bring the NVIDIA Blackwell architecture into Palantir AIP to accelerate the end-to-end AI pipeline, from data processing and analytics to model development, fine-tuning, and production AI. Enterprises will be able to run AIP in NVIDIA AI factories for optimized acceleration, and Palantir AIP will also be supported in NVIDIA's newly announced AI factory reference design for government.
Jensen Huang also showed NVIDIA's GPU roadmap through 2028 and a prototype of the next-generation Vera Rubin architecture chip on stage. The product may not reach volume production and shipment until this time next year or later.
NVIDIA's liquid-cooled AI server rack was also on display. Jensen Huang noted for comparison that a 1-gigawatt data center would require 8,000 such racks; a single rack weighs 2 tons and comprises 1.5 million components.
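Those figures imply a per-rack power budget that can be checked with simple arithmetic (an illustration based only on the numbers quoted above):

```python
# Implied power budget per rack for a 1-gigawatt AI data center.
datacenter_power_watts = 1_000_000_000  # 1 GW, as quoted
racks = 8_000                           # racks required, as quoted

per_rack_kw = datacenter_power_watts / racks / 1_000
print(per_rack_kw)  # 125.0
```

Roughly 125 kW per rack is an order of magnitude beyond what typical air-cooled data-center racks handle, which helps explain why these racks are liquid-cooled.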
In the investor-watched field of "physical AI", Jensen Huang's remarks centered on Omniverse digital twin technology, including its use in building modern factories and in training and building robots. Robot startup Figure announced a partnership with NVIDIA to accelerate the development of its next-generation humanoid robots. Figure is using NVIDIA accelerated computing to build its Helix vision-language-action model and is simulating and training it on the Isaac platform.
NVIDIA has also launched a new-generation industrial-grade edge AI platform, IGX Thor, aimed at bringing real-time physical AI to the edge. Compared with the previous-generation IGX Orin, IGX Thor delivers 8x the AI compute with its integrated GPU, 2.5x with a discrete GPU, and double the connectivity, enabling large language models and vision-language models to run seamlessly at the edge.
Nuclear fusion reactors, too, can be simulated with digital twins. NVIDIA revealed that it has partnered with General Atomics and a range of international partners to create a high-fidelity, interactive, AI-driven digital twin of a fusion reactor. The model can predict plasma behavior in seconds.
Compiled by Daily Economic News from public information
Cover image source: Daily Economic News file photo