
Here’s every major reveal from Jensen Huang’s keynote at Nvidia GTC 2026

Nvidia GTC 2026 kicked off on Monday, March 16, with CEO Jensen Huang taking the stage at the SAP Center in San Jose, California, to lay out the chipmaker’s vision for the future of computing in the AI era.
The GPU Technology Conference (GTC) 2026 is a four-day gathering that is expected to draw over 30,000 attendees this year. It has become one of the flagship tech events of the year, serving not just as a showcase of innovation but as a barometer of the appetite for and adoption of artificial intelligence (AI). It also offers a glimpse of where the most valuable publicly traded company in the world (worth about $4.5 trillion) is headed.
In a keynote address lasting more than two hours, Huang unveiled a sweeping lineup of new technologies across the full AI stack, including an all-new AI inference chip, CPU processors, advanced networking systems, server racks, computing platforms, and open models. He also said that Nvidia expected to sell $1 trillion worth of its Blackwell and Vera Rubin AI chips by the end of 2027. The Vera Rubin GPUs, said to deliver 10 times more performance per watt than their predecessor, Grace Blackwell, will go to market later this year.
Here are all the major announcements from Huang’s keynote at Nvidia GTC 2026.
Groq 3 LPUs and LPX Rack
Huang on Monday unveiled the Nvidia Groq 3 LPU (language processing unit), an inference chip that integrates Groq’s technology with a specialised core optimised to accelerate Nvidia’s flagship GPUs (graphics processing units). It is the outcome of a $20 billion licensing agreement that Nvidia signed in December last year with chipmaking startup Groq, which has gained traction for its high-performance, low-cost inference chips.

Nvidia also announced the Groq 3 LPX Rack, which will house 256 LPUs and will be deployed as part of the Vera Rubin rack-scale system that is shipping to customers later this year. The Groq LPX rack can increase the tokens-per-watt performance of Nvidia’s Rubin GPUs by 35 times, according to Huang. The LPX rack architecture is designed to optimise trillion-parameter AI models with a million-token context window. It is fully liquid cooled and built on Nvidia’s MGX infrastructure.
Vera CPU
Nvidia also announced its next-generation Vera CPUs, which are said to be 50 per cent faster than traditional rack-scale CPUs with twice the efficiency. The new class of CPUs builds on the previous-generation Grace CPUs, delivering higher AI throughput, responsiveness, and efficiency for large-scale AI services such as coding assistants, as well as consumer and enterprise agents.
The new Vera CPU rack comprises 256 liquid-cooled Vera CPUs to sustain more than 22,500 concurrent CPU environments. A single rack can be used to scale to tens of thousands of simultaneous instances and agentic tools. The Nvidia Vera CPU is currently in production and will become available in the second half of the year through leading cloud service providers such as Alibaba, ByteDance, Cloudflare, Oracle, and others. ASUS, Cisco, Dell, Foxconn, and Lenovo are also looking to adopt Nvidia’s Vera CPUs.
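Taken together, those rack figures imply a rough per-CPU density. A quick back-of-the-envelope check, using only the numbers Nvidia quoted and purely for illustration:

```python
# Rough per-CPU density implied by the Vera CPU rack figures quoted above:
# 256 liquid-cooled CPUs sustaining more than 22,500 concurrent environments.
cpus_per_rack = 256
concurrent_envs = 22_500

envs_per_cpu = concurrent_envs / cpus_per_rack
print(f"~{envs_per_cpu:.0f} concurrent environments per CPU")  # ~88
```

In other words, each Vera CPU would need to sustain just under 90 concurrent environments for the rack-level claim to hold.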
DLSS 5
Computer graphics was Nvidia’s bread and butter for a long time before the generative AI boom happened. On Monday, Huang announced the company’s latest innovation in computer graphics, DLSS 5, which runs in real time at up to 4K resolution for smooth, interactive gameplay.
DLSS 5 is a real-time neural rendering model. It takes a game’s colour and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to the source 3D content and consistent from frame to frame.

Nvidia said that DLSS 5 is its most significant breakthrough in computer graphics since the AI-powered technology made its debut in 2018. At CES this year, Nvidia announced DLSS 4.5, which uses AI to draw 23 out of every 24 pixels seen on the screen.
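To put that DLSS 4.5 figure in perspective, here is a quick back-of-the-envelope calculation (assuming a standard 3840x2160 4K frame and taking the 23-in-24 ratio literally) of how many pixels the GPU still renders conventionally:

```python
# Pixel budget for one 4K frame under DLSS 4.5's stated 23-in-24 ratio.
width, height = 3840, 2160
total_pixels = width * height             # 8,294,400 pixels per frame

native_pixels = total_pixels // 24        # rendered conventionally (1 in 24)
ai_pixels = total_pixels - native_pixels  # drawn by the AI model (23 in 24)

print(native_pixels, ai_pixels)           # 345600 7948800
```

On that arithmetic, only about 345,600 of the roughly 8.3 million pixels in each 4K frame are rendered the traditional way.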
DLSS 5 is expected to ship this fall with support for popular titles such as AION 2, Assassin’s Creed Shadows, Black State, CINDER CITY, Delta Force, Hogwarts Legacy, Justice, and more. It will be supported by publishers including Bethesda, CAPCOM, Hotta Studio, NetEase, NCSOFT, S-GAME, Tencent, Ubisoft, Warner Bros Games, and others.
Vera Rubin Space-1
During its GTC 2026 conference, Nvidia announced the launch of computing platforms designed specifically for orbital data centres in space.
The Vera Rubin Space-1 Module includes the IGX Thor and Jetson Orin modules, collections of chips that are specifically “engineered for size-, weight- and power-constrained environments,” Nvidia said in a press release. Nvidia is working with several companies, such as Starcloud, Axiom Space, and Planet, to deploy the platform as part of infrastructure housed in satellites orbiting the Earth.
Huang also said that Nvidia is working with partners on a new computer for orbital data centres. However, he acknowledged that there are several engineering hurdles to overcome, such as radiation and the lack of convection for cooling systems.
NemoClaw
OpenClaw is an open-source platform that can be used by developers to build, deploy, and orchestrate autonomous AI agents or ‘claws’, which can in turn spin up their own sub-agents to execute specialised tasks with access to local file systems and data.
However, the requirement of giving OpenClaw access to all of a user’s data and systems in order for it to work as a true personal assistant has sparked concerns that these agents could go rogue and tamper with or delete valuable files. With NemoClaw, Nvidia is looking to address these concerns.
It is a software toolkit designed to help claws run safely in an enterprise context, via a contained virtual environment. “It’s the missing infrastructure layer beneath. We’re working with OpenClaw builder Peter Steinberger to make self-certified agents, or claws, more trustworthy, scalable, and accessible to the world,” said Kari Briski, Nvidia’s vice president of generative AI software for enterprise.
Nemotron 3 Ultra, Omni, and Voice Chat
Nvidia has announced new additions to its Nemotron family of open-weight AI models that are designed to run agentic systems. Nemotron 3 Ultra can be used to power AI agents with natural conversational skills, complex reasoning, and advanced visual capabilities. It has been trained using Nvidia’s Blackwell GPUs and delivers 5x throughput efficiency.

Nemotron 3 Omni allows AI agents to extract insights from videos and documents, while Nemotron 3 VoiceChat can power AI agents that listen and respond automatically. Nvidia has also released Nemotron-Personas, a collection of privacy-preserving, fully synthetic datasets grounded in local census and demographic data.
In terms of availability, Nvidia said that select models are already available on GitHub and Hugging Face as well as its NIM microservices platform and build.nvidia.com.
GR00T N2
During his GTC keynote, Huang previewed GR00T N2, a foundational AI model to power robotic systems that, according to the company, can succeed at new tasks in new environments more than twice as often as other vision-language-action (VLA) models. GR00T N2 topped the MolmoSpaces and RoboArena benchmarks for general robot policies, Nvidia said. It is slated to become available by the end of 2026.
nvQSP
Nvidia also introduced nvQSP, a GPU-accelerated simulation engine that enables pharmaceutical researchers to explore far more treatment scenarios in computer models before clinical trials begin.
In benchmark tests, nvQSP delivered up to 77x faster performance compared with traditional single-threaded CPU simulations, allowing scientists to analyse hundreds of dose levels and patient subpopulations in the time it previously took to simulate just a few.
